Need help upgrading PC

DestinedPika

Commendable
Jul 15, 2016
25
0
1,530
I need help upgrading my PC. I want to be able to run a 4K monitor and play games at 4K with good settings. My specs are:

CPU: AMD FX-6300
RAM: HyperX Fury 8.00GB DDR3
Motherboard: MSI 760GMA-P34
GPU: MSI GeForce GTX 960 2GB
Storage: 117GB SanDisk SSD (I boot off this)
Storage: 1TB Seagate Hard Drive (I store files and games on this)
Case: Corsair Carbide Series SPEC-02 Mid Tower Gaming Case
PSU: Corsair RM550x 550W

What do I need to upgrade? Or, if I need to build a new computer, what should the specs be?
Any help is much appreciated. Thanks in advance!


-Pika
 
This should get you in the ballpark for 4K.
PCPartPicker part list / Price breakdown by merchant

CPU: AMD - Ryzen 5 1600 3.2GHz 6-Core Processor ($199.99 @ SuperBiiz)
Motherboard: Gigabyte - GA-AX370-Gaming K5 ATX AM4 Motherboard ($131.98 @ Newegg)
Memory: G.Skill - Trident Z 16GB (2 x 8GB) DDR4-3200 Memory ($129.99 @ Newegg)
Storage: Crucial - MX300 275GB 2.5" Solid State Drive ($97.88 @ OutletPC)
Video Card: Asus - GeForce GTX 1080 Ti 11GB Founders Edition Video Card ($704.98 @ Newegg)
Case: NZXT - S340 (Black) ATX Mid Tower Case ($69.99 @ B&H)
Power Supply: Antec - High Current Gamer 620W 80+ Bronze Certified Semi-Modular ATX Power Supply ($76.50 @ Newegg)
Total: $1411.31
Prices include shipping, taxes, and discounts when available
Generated by PCPartPicker 2017-06-30 22:11 EDT-0400
 

TheFluffyDog

Honorable
Oct 22, 2013
469
0
10,960
Recommend you replace the motherboard and CPU (these will require DDR4 RAM), and the GPU.

6-core FX builds are awesome for the $$$, but they suffer from intermittent FPS drops in CPU-intensive scenes or games with lots of NPCs or physics. If you are going 4K, then obviously the GPU is the more important upgrade, but in modern games the bang for your buck with FX is starting to roll off.

 

TheFluffyDog

Honorable
Oct 22, 2013
469
0
10,960


Honestly though, unless you are trying to do ULTRA everything, there is no need for a 1080 Ti. I can play Just Cause 3 on max detail at 4K with FXAA and SSAO; all I have to do is turn down the shadow and water detail, and I get a minimum of 45 FPS (which is actually really good at any resolution in that game) and average around 60-70 at 4K. I use a 980 Ti that I picked up for $300 last November. Currently the 1070 is on par with the 980 Ti, but current pricing is bad. Just go with a 1080 (minus the Ti) and a 4-core Ryzen. The extra 2 cores do nothing for games.

It's 2017; you can easily play at high settings in 4K for $1k. There's no need to spend 50% more for a 20% performance increase.
 

TheFluffyDog

Honorable
Oct 22, 2013
469
0
10,960


Not even close. Once again, you are only talking about the highest detail settings. If you could see the difference between FXAA and SMAA at 4K, then we wouldn't need telescopes. Post-processing effects, which are EXTREMELY expensive at 4K, are also less noticeable there due to the sheer number of pixels.

YOU ARE CORRECT in saying that one could purposely turn on effects that aren't perceivable and require more GPU horsepower to do so, but saying a 1080 Ti is the middle ground is a flat-out misconception.
 

TheFluffyDog

Honorable
Oct 22, 2013
469
0
10,960
Also, I can run pretty much every game I own at 4K on a 980 Ti. Not every game is maxed out, but it will get me by for another 6 months to a year. I'm just waiting for 1080 Tis to hit $300 like my 980 Ti did :p I don't like to burn money :)

I even got a RevoDrive 3 480GB with a 1GB/s read/write rate back in 2015 for $240. Only last year did the top-of-the-line drives reach my speeds, and they were more expensive! The market for PC parts takes advantage of those who only read the reviews. Meaning, this guy is right in saying that you won't max out every game. But back in 2013 people were struggling with 1080p, and that's why we upgrade occasionally. Top-of-the-line products are always a bad investment.
 
Nah, games are just really taxing at 4K, and many people think 30 FPS is OK when 60 FPS is the playable target. That's with a higher-end gaming CPU, too. Secondly, these are benchmarks, which show the best possible case; actual gameplay at the same resolution is worse. With The Witcher 3 they had to turn things down.
[Attached benchmark charts: GTA V 4K FPS (http://media.bestofmicro.com/R/2/658046/original/gtav-4k-fps.png), a GTX 1080 Ti 4K chart, and four additional 4K benchmark graphs]
 

DestinedPika

Commendable
Jul 15, 2016
25
0
1,530
As I am going to be waiting for a price drop/sale on GPUs before I upgrade, I am going for the GTX 1080 Ti. Other than that, the PCPartPicker list that TheFluffyDog mentioned sounds good.
 

TheFluffyDog

Honorable
Oct 22, 2013
469
0
10,960


Thank you for proving my point. All of those benchmarks, which you have taken the time to find for me so I don't have to, are at the ULTRA detail preset. Once again I say that post-processing is the hardest thing to do at 4K, and once again, post-processing was developed to get rid of low-resolution artifacts. And once more, we are talking about 4x the resolution of 1080p. So once again, the only reason to get the 1080 Ti is, as I said before, if for some reason you will be analyzing a still image in a game for post-processing effects. Setting things like anisotropy to 4x (all you really need, as opposed to 16x) and AA to FXAA or off, turning down shadow detail, and maybe taking textures one step down from max should yield 45+ FPS on a 980 Ti or 1070 and a smooth 60 FPS on a 1080 (see the sketch below).
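To make that recipe concrete, here is a rough sketch of the settings profile I'm describing; the names and values are generic illustrations, since every game labels its options differently:

```python
# Hypothetical 4K settings profile for a 980 Ti / 1070 class card.
# Setting names are illustrative; actual in-game menus differ per title.
settings_4k = {
    "resolution": (3840, 2160),
    "texture_quality": "high",       # one step down from max
    "anisotropic_filtering": 4,      # 4x is all you really need vs. 16x
    "anti_aliasing": "FXAA",         # or off; 4K pixel density hides jaggies
    "shadow_detail": "medium",       # shadows are a big GPU cost at 4K
    "post_processing": "medium",
}
```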

I will say, however, that GTA V is a special case. Its highest texture settings are definitely noticeable at all resolutions, and to take full advantage of those textures at 4K you will definitely need some top-of-the-line hardware.

And lastly, yes, I read the same benchmarks as you. But I have the pleasure of being the builder in my group of PC friends; in fact, the guy who actually got me into building no longer builds. I have used and adjusted settings on two different 980 Ti builds and one 1080 build, and I've even had the pleasure of using R9 290-based cards for 4K and 1440p resolution targets. I do all of my testing with the person who will be using the equipment, so they were sitting next to me as I turned down individual settings, and they would let me know if the change was noticeable. For different people we went through different settings, and I showed them how each one affects both the visuals and the frame rate. Everyone had different requirements for image quality vs. frame rate.

I will say that not a single person could notice changes in AA settings at 4K. Even in still images, where we could walk up to my 65" Sony 930D and try to point out differences, no one could tell FXAA apart from 'quality' AA like MSAA or SMAA. Just switching from 'quality' AA to FXAA gains a significant performance boost at 4K.
 
The thing is, it takes a high-end i7 for those results. You picked a low-end R5 1400 with a motherboard that doesn't overclock, and the R5 1400 at stock isn't any faster than the FX-level CPUs. Even without post-processing, the slow CPU means you will still only get about 40 FPS at best, and that's if the bottleneck doesn't get worse at 4K. All the FX-level CPUs bottleneck even a 1060, so I would fear what problems the 1080 will have.
https://www.cpubenchmark.net/cpu.php?cpu=AMD+Ryzen+5+1400&id=2992

The OP should go for an R5 1600 with a B350 board so it's actually an upgrade.
https://www.cpubenchmark.net/cpu.php?cpu=AMD+Ryzen+5+1600&id=2984

I've been building PCs for the past 20 years and teaching PC tech. Before that I was a programmer for a Fortune 500 company.
 

TheFluffyDog

Honorable
Oct 22, 2013
469
0
10,960


A 6-core Ryzen vs. a 4-core Ryzen at the same clock rates will be the least of his worries at 4K. Also, your processor has absolutely nothing to do with GPU performance: a CPU doesn't 'bottleneck' a GPU, it bottlenecks overall performance, and only when the processor takes more time than the GPU to finish a single frame. The other way you can 'bottleneck' a GPU is by failing to supply information from system RAM to VRAM, which is more about bus speed and memory speed. Ryzen's bus is still behind Intel's, but it is far from starving a GPU by limiting transfers across the PCIe bus. Since you apparently are well educated on task schedulers, game engines, and graphics APIs and how they affect the way a computer processes tasks (you claim to teach it), let's look at it from a single-frame perspective:

You need to remember how tasks are scheduled in a PC. In games, the processor and the GPU work serially on each frame (I realize that in computational programs the task sequence varies greatly): the processor does its job, and then the GPU does its job. If each frame spends equal time on its CPU-bound tasks and its GPU-bound tasks, then either upgrade gets you 1/2 of that component's relative performance increase, and that is the theoretical ceiling for those components. We must also realize that games run on game engines, which limit how fully PC resources, mainly CPU resources, can be leveraged.

However, if the GPU-bound tasks take 5x longer to finish than the CPU-bound tasks, then 1/6 of the frame time is CPU-bound and 5/6 is GPU-bound. In this case, the theoretical gain from a processor upgrade is 1/6 of the new CPU's relative performance over the old one on engine-specific tasks.

But in this same case, 5/6 of the frame time is GPU-bound, so upgrading the GPU can deliver 5/6 of the new GPU's relative performance. This is again a theoretical increase that depends on the API, but because graphics APIs scale across GPU resources much better than game engines scale across CPU cores, we see a stronger correlation. A sketch of this model follows below.
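A minimal sketch of that serial frame-time model (the millisecond numbers are made up to match the 1/6 vs. 5/6 split above):

```python
# Serial frame model: frame time = CPU-bound time + GPU-bound time.
def fps(cpu_ms, gpu_ms, cpu_speedup=1.0, gpu_speedup=1.0):
    return 1000.0 / (cpu_ms / cpu_speedup + gpu_ms / gpu_speedup)

# 5 ms CPU + 25 ms GPU per frame: 1/6 CPU-bound, 5/6 GPU-bound.
print(fps(5, 25))                   # ~33.3 FPS baseline
print(fps(5, 25, cpu_speedup=1.5))  # 50% faster CPU -> only ~35.3 FPS
print(fps(5, 25, gpu_speedup=1.5))  # 50% faster GPU -> ~46.2 FPS
```

The GPU upgrade wins because it attacks the 5/6 share of the frame.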

Now let's look to the future. As game engines have advanced, we have seen a shift from 2-core to 4-core utilization, and we should see better SMT support beyond that, so the processor should finally get used more fully. Next, consider the portion of the GPU-bound work that relies on the CPU for task scheduling and the RAM bus. The DX12 API relies less on the processor for GPU-bound tasks, making GPU scaling even better regardless of the installed CPU, because there are fewer calls to the host processor. Which means that going forward, the i7 vs. Ryzen argument will become less and less relevant.

There are NOT two ways to look at this. Computers do what they are told, and right now, for gaming, they are being told to use the GPU. Not to mention, Nvidia has already developed a way to move physics acceleration onto the GPU through PhysX.

And in closing, this is game-specific. CPU benchmarks are designed to test the differences between CPUs on tasks performed by the CPU, and those account for a very small fraction of PC utilization in a game rendering at 4K.
 

TheFluffyDog

Honorable
Oct 22, 2013
469
0
10,960


So wait, you are apparently some well-educated programmer, and you don't even know how a game engine works? What I posted was not spin, and the chart you linked is laughable at best. But I would not make a statement and expect you to blindly listen to me, as you have tried with your chart, so I will give another example:

Let's say I am playing a game like Borderlands, and let's say I have a 144Hz monitor (which I do, so this example speaks to me). The reason I bring up Borderlands is that it's called a CPU-bound game in all corners of the internet. Now, I love this game, it benefits greatly from frame rates above 100 FPS, and I have a monitor capable of displaying them. Here's the thing: it is bottlenecked by my 4670K. I have a 980 Ti, and my 4670K is bottlenecking this game's performance for me. So I have two choices: upgrade the CPU (I will use a heavy overclock to simulate an upgrade) or turn down the graphics settings. Since the CPU is the bottleneck, naturally the CPU OC is the preferred upgrade. So let's look at what I actually did to balance my frame rate...

My target was a minimum of 120 FPS, and I was getting 90.

So first things first, I did what I needed to do and overclocked the processor to 4.6GHz. And by golly, there it was, an average of about 120 FPS, but I noticed I was dipping closer to 110. This made me sad. Knowing that the CPU was at its OC limit and that the game was still CPU-bound by definition (i.e. 90% CPU utilization), according to your chart, turning down the graphics settings should have gained me little, because the game was CPU-bottlenecked.

But unlike you, I know how computers work and understand that there are two parts to each frame: CPU-bound and GPU-bound tasks. So I dropped anisotropy to 4x, turned bullet decals down, and turned off AA (I was never a fan of it). And what do you know, the frame rate shot up to a 140 FPS average, still with dips to around 115 at the minimum.

The best part is, GPU utilization actually goes down in this case, because the graphics card isn't even working as hard as it could be... that's right, at 90% CPU utilization, actually dropping the GPU's utilization netted me a gain in frame rate. By turning those settings down, the GPU completes its part of each frame faster, which means it spends more time waiting on the CPU, yet even with that added downtime the overall frame rate goes up, as the numbers sketched below show.
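In frame-time terms, the same serial model explains it; the numbers below are illustrative, not measured:

```python
# Hypothetical frame-time split for the Borderlands case after the CPU OC.
cpu_ms, gpu_ms = 5.3, 3.0
print(1000 / (cpu_ms + gpu_ms))  # ~120 FPS average, "CPU-bottlenecked"

# Lowering anisotropy, decals, and AA shrinks only the GPU share, yet total
# frame time still drops, so FPS rises even as GPU utilization falls:
gpu_ms = 1.8
print(1000 / (cpu_ms + gpu_ms))  # ~141 FPS average
```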

My point is, bottlenecking in video games is something many people don't understand. When you are explaining this to a community without a deeper understanding, it makes sense to call a game with 90% CPU utilization and 75% GPU utilization a "CPU bottleneck", but the reality is that if the next processor up gains, say, 7% (like most generational changes actually deliver in games), you will see "90% utilization" of that 7% increase (so roughly a 6% gain).
Now say the old GPU sits at 75% utilization and the GPU upgrade has a 30% theoretical performance increase. Then the overall performance increase is theoretically the utilization times the performance of the new card minus the utilization times the performance of the old card:

(75% x 130%) - (75% x 100%) = 22.5% increase

But as we both know, utilization of the new GPU will be lower as well, so let's account for that by knocking off 5% utilization. The reason is that we will increase the GPU's downtime, while the amount of information the CPU can deliver stays the same, because the CPU is still the same.

(70% x 130%) - (75% x 100%) = 16% increase.
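The same arithmetic as a quick script, using the hypothetical utilization and performance numbers from above:

```python
# Overall gain = (new utilization x new card's relative perf)
#              - (old utilization x old card's perf), per the model above.
def net_gain(old_util, new_util, new_rel_perf):
    return new_util * new_rel_perf - old_util * 1.0

print(net_gain(0.75, 0.75, 1.30))  # 0.225 -> 22.5% increase
print(net_gain(0.75, 0.70, 1.30))  # 0.16  -> 16% increase
```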


So basically, that chart claiming a 4670K will bottleneck a 1080 in some CPU-intensive games is definitely true, but I will still gain more performance by upgrading my 980 Ti to a 1080 Ti next year than I would by upgrading my 4670K to a 7700K. I have one friend happily running a 2500K with a 1070, playing games at 4K better than me with my 980 Ti, because bottlenecks are game-dependent and CPU performance matters so little in the majority of games.

And don't take this the wrong way: those charts are a great guide for people who are building a new PC and don't have time to learn, but in reality we can make much better use of our time and money by gaining a better understanding of the products we use.

 
Spin and theory don't mean anything next to actual testing, and the link has actual testing. Programmers can try, but they never get near perfect; this is why the tests mean more than you think, and why games have to be tested over and over, with updates to gain a few FPS. The tests are absolute. Now, I know nothing about game engines, but I'm great at making CS:GO maps in the Hammer editor, using paint.net for images, and using Blender to make models. That's just a small hobby, though.