[SOLVED] What is the best GPU I can pair with my PC?

Worlr

Reputable
Nov 21, 2019
Hi.
I'm a video editor using DaVinci Resolve Studio.
I'm looking to upgrade my GPU without changing any other part of my PC (only the PSU).
What is the best GPU with at least 8 GB of VRAM and at least a 256-bit memory bus that I can pair with my system?
I'd prefer an Nvidia card since it works better with Resolve, but anything with better performance is welcome.
Older generations are fine too.

My build:
Ryzen 5 3400G
Asus Prime B450M-A
2x8 GB DDR4-3200 RAM
RX 570 4 GB mini-ITX
450 W 80+ White PSU
Generic case
1 TB M.2 SSD
 
Solution
Oh, so bottlenecking is not a thing in video editing?
Absolutely not. GPU and CPU requirements are based on workload. If all you do is web surfing, anything more than a GT 1030 is overkill; if you use the GPU for encoding, a 3090 Ti is king, regardless of the actual CPU.

Bottlenecking is a term applied by people who don't understand the difference, making assumptions based on only a small corridor of info. A CPU is never a bottleneck: it creates the 3D gaming frames, so it is what it is. Whether it's fast or slow is immaterial. The assumption is that the CPU 'holds back' a stronger GPU, which is false; the CPU doesn't hold anything back, the GPU just has extra room for ability, which changes on a game-per-game basis. Massive difference between what's required...
It can be, but it's different from gaming bottlenecks. I'm heading off to work right now, but in short, the more GPU cores you have, the better. The CPU only needs to be strong enough to handle running everything. It's the GPU that's doing the work. So the more cores, the better.
 
Oh, so bottlenecking is not a thing in video editing?
Absolutely not. GPU and CPU requirements are based on workload. If all you do is web surfing, anything more than a GT 1030 is overkill; if you use the GPU for encoding, a 3090 Ti is king, regardless of the actual CPU.

Bottlenecking is a term applied by people who don't understand the difference, making assumptions based on only a small corridor of info. A CPU is never a bottleneck: it creates the 3D gaming frames, so it is what it is. Whether it's fast or slow is immaterial. The assumption is that the CPU 'holds back' a stronger GPU, which is false; the CPU doesn't hold anything back, the GPU just has extra room for ability, which changes on a game-per-game basis. Massive difference between what's required for CSGO at ultra and Cyberpunk 2077 at ultra.

And that's before considering monitor resolution.

If the workload is GPU-intensive, the stronger the GPU, the less time the workload takes. You can video edit on a potato; it'll just take a lot more time, as load times with little RAM are far longer, and encoding and compiling take far longer, etc.

AVC/H.264 versus HEVC/H.265 also makes a difference, as most older GPUs/CPUs are not H.265 capable.

Basically, as said above, get the biggest, baddest, fastest, strongest GPU that will physically fit in your case and budget; you are looking for computational power, not gaming ability.
 
Solution
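A quick way to check the H.264/H.265 point above on your current card is to ask FFmpeg which hardware encoders it exposes. This is a rough sketch assuming FFmpeg is installed and on your PATH; Resolve uses its own decode/encode path, and a listed encoder only means your FFmpeg build includes it, so a short test encode is the real confirmation.

```python
# Rough check for hardware H.264/H.265 encoder support exposed via FFmpeg.
# Assumes ffmpeg is installed and on PATH; Resolve has its own pipeline,
# so treat this as an indicator, not a Resolve-specific test.
import shutil
import subprocess

def list_hw_encoders():
    if shutil.which("ffmpeg") is None:
        return []
    out = subprocess.run(["ffmpeg", "-hide_banner", "-encoders"],
                         capture_output=True, text=True).stdout
    names = ("h264_nvenc", "hevc_nvenc",   # Nvidia NVENC
             "h264_amf", "hevc_amf",       # AMD AMF
             "h264_qsv", "hevc_qsv")       # Intel Quick Sync
    return [name for name in names if name in out]

print(list_hw_encoders())  # e.g. ['h264_nvenc', 'hevc_nvenc'] with an RTX card
```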
Wow, thanks, great answer, and that's what I needed to hear.

I guess most people only comment when it comes to gaming, and so I was misled.
Thanks a lot.
I guess I'll get a 3080.
 
Exactly. If you used the CPU to do the encoding (which many do), then the 12900K/5950X is boss, even if you had nothing but a GT 1030 for looking at whatever was onscreen. But DaVinci Resolve is all about GPU and storage speed, so an NVMe drive and a strong GPU are far more important than the CPU, especially when dealing with 4K content.
 
Bottlenecking is a term applied by people who don't understand the difference, making assumptions based on only a small corridor of info. ...
A bottleneck is simply the part that limits performance. There is always a bottleneck present when individual parts are not perfectly balanced for a workload, which they never are. The question shouldn't really be what the bottleneck is, but how bad a performance imbalance it is causing. When pairing an old GTX 970 with the latest HEDT chip, the imbalance is the biggest issue, not the fact that the 970 is the limiter. That imbalance would be just as glaring if you put a 3090 Ti on an i3.

The actual workload is very important as well. For video editing and processing-intensive work like transcoding, core count is king because the loads are easily split into multiple threads. In game loads, where latency is very important, clock frequency tends to matter most. Regardless, there is always a bottleneck. People good at building understand that good pairings of components keep performance imbalance to a minimum while designing to a budget. So if you want a 3090 Ti, it needs to be paired with an HEDT chip to feed it properly. If you have an i3 budget, integrated graphics are a good option.
 
Define "best"
I am no expert on photoshop, but Puget systems is.
Here is an article on davinci resolve hardware:
https://www.pugetsystems.com/recomm...-DaVinci-Resolve-187/Hardware-Recommendations

I think your other hardware is going to me more limiting than the gpu.

Here is an article on comparing 3070,3080, 3090 cards for photoshop:
My take is that once you have a card for photoshop, the particular unit does not matter much.

You may also want to read their recommendations on photoshop hardware:
https://www.pugetsystems.com/recomm...-Adobe-Photoshop-139/Hardware-Recommendations
 
A CPU is never a bottleneck ... The assumption is that the CPU 'holds back' a stronger GPU, which is false; the CPU doesn't hold anything back, the GPU just has extra room for ability.
I understand what you are saying here. Because the CPU is at the start of the whole process, it can't really hold anything back, but the simplified idea of a CPU bottleneck is very easy to understand and communicates the same end result.
I will still use the term in cases where the GPU has headroom and is just waiting for the CPU to hand off the next frame.
 
A CPU is never a bottleneck: it creates the 3D gaming frames, so it is what it is. Whether it's fast or slow is immaterial. ...
If you're not getting the performance you want (performance in this case defined by frame rate), and you can determine it is indeed the CPU that isn't providing the performance to get there, then the CPU is the bottleneck, because it's what's keeping you from the performance you want.

The problem is that people think bottlenecks are a problem when they're not.
 
Regardless, there is always a bottleneck.
Actually, there's never a bottleneck. Ever. In anything. There's only a perception of impeded performance. As @hotaru.hino said, 'if you're not getting the performance you want...'. That's got nothing to do with what you actually have. The CPU puts out as many frames as it can. It doesn't matter if that's 100 fps with an old i3 or 500 fps with a new i9; it is what it is. Even pairing a 3090 Ti with the i3, the CPU does not limit the performance of the card in any way. The card will perform to the best of its ability with whatever it's given. If that's less than it can handle, so be it; it still doesn't limit anything. Likewise, adding a stronger GPU does not make the fps go up: the fps is set by the CPU, a stronger GPU can just do more, so it shows more.

And workload is the most important factor. Gaming is one type of workload; transcoding/encoding/compiling on the GPU rather than the CPU is another. You do not need an HEDT CPU unless you need the cores, the RAM capacity, and so on. CSGO uses just two threads, that's it. Are you telling me that to play CSGO at 4K I need an HEDT CPU to go with the 3090 Ti? And must use an i3/i5 with a GTX 970 at 1080p? Ridiculous.

Far too many see imbalance as a limit, regardless of the fact that you should never see 100% utilization of either CPU or GPU. There's always an imbalance; you never hit a perfect balance. And then you change workloads and get a different result.

But at no time do any of those components actually limit performance. Hardware performance is defined by software, not other hardware. People only think it does, backed by a million other posts by the clueless. That doesn't make them right.
 
My 2 cents:
There is no such thing as "bottlenecking," if by that you mean that upgrading a CPU or graphics card can somehow lower your performance or FPS.
A better term might be limiting factor: the point where adding more CPU or GPU becomes increasingly less effective.
You will find during a game that it will be limited by the CPU part of the time, and by graphics part of the time.
You really do not want to see 100% utilization of either. Such utilization leaves no reserve capability for peak needs.
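To make the "limiting factor" idea concrete, here is a toy sketch with made-up numbers (my simplification, not a measurement): if the CPU can prepare at most one cap of frames per second and the GPU can render at most another, the delivered frame rate cannot exceed the slower of the two, and upgrading the other part does not change it.

```python
# Toy frame-pacing model: delivered fps is capped by the slower stage.
# The caps below are illustrative assumptions, not benchmark results.
def delivered_fps(cpu_fps_cap: float, gpu_fps_cap: float) -> float:
    return min(cpu_fps_cap, gpu_fps_cap)

print(delivered_fps(cpu_fps_cap=100, gpu_fps_cap=240))  # 100 -> CPU is the limiting factor
print(delivered_fps(cpu_fps_cap=100, gpu_fps_cap=60))   # 60  -> GPU is the limiting factor
```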
 
Actually, there's never a bottleneck. Ever. In anything. There's only a perception of impeded performance. ... Hardware performance is defined by software, not other hardware.
I am sorry, but I have to completely disagree. The second definition of bottleneck that Google throws up, "a narrow section of road or a junction that impedes traffic flow," is exactly what happens. In the case of video game frame rate, a slow CPU will not render enough pre-cached frames to feed the GPU. While the GPU isn't technically unable to process the same number of flops that it could with a better CPU, the end result is a reduced overall frame rate because the CPU isn't feeding it enough to keep up. In the case of the GPU being too weak, frame rate will drop as image fidelity is added, because it can't keep up. Both result in the traffic of frames being impeded. I would say the definition holds up pretty well.

The severity of the bottleneck is again relative to the amount of imbalance. One part or the other will always bottleneck. In a well-balanced machine you may find only a few potential frames are being lost; in a poorly balanced machine it can be pretty severe. This isn't limited to the CPU and GPU. Virtually every part in the build has some overall effect, from RAM to motherboard to software, especially the OS if it is scheduling threads inefficiently. Claiming that there are no bottlenecks because the individual part specs don't change neglects the fact that the parts in a computer work together synergistically to create the end result of the user experience. This is definitely measurable in the form of benchmarks. The variability of benchmarks between similar machines is proof that bottlenecks do indeed exist. The bottleneck is at the system level, not at the per-part spec level.

As to the statement that hardware performance is defined by software, please just stop. Efficient hardware utilization requires well-written software, but nothing that software does has anything to do with the bare-metal hardware performance, period. The maximum number of flops a chip is capable of is static in a given setup at a given clock. You may throw out some argument regarding overclocking because it is set at the BIOS level; however, clock speed doesn't change IPC in any way, and throwing LN2 on the chip just allows it to do what it is capable of with that level of cooling. On that level, the cooling solution on the chip is also a bottleneck. Software may utilize the chip better or worse, but the chip remains unchanged regardless of what any software does with it. Software exists under the hardware layer; your statement implies that software sits above the hardware layer, which is not only incorrect but misleading. Software doesn't exist without hardware, but hardware can and does exist without software.
 
Let's not argue over what bottleneck means, please. There are some on this forum who believe it doesn't exist at all. I personally believe EVERY system has a bottleneck. It might be the amount of RAM; it could be the resolution. I've been told that's wrong. We shouldn't really fight over the definition. We've told him he can run the best GPU he can. Let's move on and help him with any other questions, OK?
 
The second definition of bottleneck that Google throws up: "a narrow section of road or a junction that impedes traffic flow."
Ever seen LA at rush hour? An 8-lane highway in both directions, exactly nothing narrow about it by anyone's standards, yet traffic is reduced to a crawl by the number of vehicles. Basically, the workload is the impedance, not the road nor any of the cars in particular.
Efficient hardware utilization requires well-written software, but nothing that software does has anything to do with the bare-metal hardware performance, period.
Might want to rethink that statement. It's software that tells the CPU how many threads to use. Like CSGO: two threads, no rollover. You get almost identical performance from an i3 as from an i9 because both threads are generally close to 100% utilized while every other thread is doing nothing. And when rethinking, put some effort into the word "efficient": there's more than its fair share of badly optimized, badly written software that has a profound impact on hardware; Papyrus scripts are a major culprit. And then of course there are Spectre and Meltdown; that's software too.
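That thread-count point is easy to demonstrate. The sketch below is illustrative only (the worker count and workload are made up): the pool is fixed at two workers, so no matter how many cores the machine reports, only two threads ever do the work.

```python
# Illustration: the program, not the CPU, decides how many worker threads run.
# With the pool fixed at 2 workers, extra cores sit idle however many exist.
import os
from concurrent.futures import ThreadPoolExecutor

def work(n: int) -> int:
    # stand-in workload; the actual sum is irrelevant
    return sum(i * i for i in range(n))

print("logical CPUs reported:", os.cpu_count())
with ThreadPoolExecutor(max_workers=2) as pool:  # thread count chosen by the software
    results = list(pool.map(work, [200_000] * 8))
print("tasks completed:", len(results))
```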
The end result is a reduced overall frame rate because the CPU isn't feeding it enough to keep up.
Backwards. The GPU only processes what the CPU sends; the GPU isn't trying to process more while the CPU fails to send enough. It's a demand system, the same way a PSU might be capable of 650 W: that doesn't mean the supply is 650 W; the supply is only as much as the demand. If a CPU kicks out 100 frames, that's what the GPU has to work with, within the confines of detail levels and resolution. It doesn't matter if the GPU is capable of 200 frames or 1,000 frames; it's only got 100, so that's what it works on.
The synchronous nature of workflow dictates that something has to complete before another component can start.
Nope. GPUs work in parallel: hyperthreading, dual-rank RAM, RAID, any transmission that uses multiple frequencies concurrently, monitors; there's a ton of stuff that isn't done serially, one finish then one start.
The variability of benchmarks between similar machines is proof that bottlenecks do indeed exist.
No. The variability of benchmarks between similar machines is because they are similar, not identical. No two machines can be identical, because there are always differences. Whether it's the silicon, the boost, the voltages, or the temps, there are always variables that cannot be ignored, discounted, or made exact, so you end up with margins of error. You can't even get every core on the same CPU to the exact same temperature: even if every core were running the exact same load, differences in the composition of the silicon, in the thickness of the TIM, and in cooler pressure on the IHS almost guarantee you'll get around a 3-5°C difference up or down from center. Which, especially on a Ryzen, will change fps as boost clocks are affected.

The differences don't show a bottleneck; they show a difference, which is termed "margin of error," because even running the exact same load for the exact same time on the exact same machine can produce variations in results due to changes in temps, transmission, RAM timings, storage output, etc. that cannot be controlled by the tester to any repeatable degree.

@4745454b OK, you wrote that while I was writing this; our posts overlapped. Delete if you wish. 😊
 
If you get a newer GPU (it sounds like you are interested in an RTX 3080-type card), you will also need a new power supply for it. Nvidia recommends at least a 750 W power supply; see this page: Best PSU for RTX 3080


{GoofyOne's 2c worth .... which may, or may not be, actually worth 2c}
I would honestly go with at least 800 to 850 W. That's what the card manufacturers usually recommend, and when in doubt, I would always get the bigger one.
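As a rough sanity check on those numbers, you can sum the expected component draw and add headroom for transient spikes; the draw figures below are ballpark assumptions for illustration, not measurements, but they show why the recommendations land well above the card's own rating.

```python
# Ballpark PSU sizing: sum estimated peak draw and add headroom.
# All figures are rough assumptions for illustration, not measured values.
ESTIMATED_DRAW_W = {
    "RTX 3080 (board power)": 320,
    "Ryzen 5 3400G under load": 65,
    "motherboard, RAM, SSD, fans": 60,
}

total = sum(ESTIMATED_DRAW_W.values())
headroom = 1.5  # ~50% margin for transient spikes and efficiency sweet spot
print(f"estimated load: {total} W, suggested PSU: ~{round(total * headroom, -1):.0f} W")
# -> estimated load: 445 W, suggested PSU: ~670 W; vendor guidance of 750-850 W
#    simply adds extra margin on top of this.
```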
 
This is obviously going nowhere, but here is my rebuttal.

1. Yes, I have been in LA traffic at rush hour. Comparing this to a computer chip is not valid, because while the ability of cars to make speed on the highway changes with the amount of traffic, the clock speed and the speed of electrons do not change with load on a computer chip. On an LA highway, once you are on the road in heavy traffic, you sit there trying to jockey for position to get back off. With a processor, the load doesn't impede its ability to perform at all, only the loading and unloading of information to other components.

2. My statement stands. The mistake you are making here is about what a thread is. A thread is a software concept that was developed to parallelize programming as multiple cores became available. While threads are organized by the OS scheduler and fed to the CPU, the number of cores a chip has and its ability to process with them is unchanged by software. Poor organization does lead to poor utilization, but just like your rush-hour analogy, the highway remains unchanged no matter how many cars are on it.

3. You called me backwards and then just proved my point: reduced overall frame rate because the CPU didn't feed enough to max out the GPU.

4. All workloads are synchronous and linear at some level; that is to say, they have a beginning and an end. Modern systems utilize parallelism, but it only works well where the workload can be split apart and then recombined for an end result. This works well for GPUs because the end result of a given frame is a composite created from a huge number of polygon interactions, but each frame enters and exits in a linear fashion. This is exemplified in multi-card systems that have to dump whole calculated frames when they arrive out of order. GPUs in particular suit this because video processing can be parallelized relatively easily. At the game level, however, the decision making still comes down to user input, and that choice of input creates the synchronous flow.

5. Again, just arguing to argue. A bottleneck is an observable effect on the end performance result. The imbalances you are describing are absolutely real and cause a bottleneck effect.

From what I can see, you don't seem to be refuting any of the causes but are at war with a word that describes their net effect on the system. If you want to term it "reduction of overall system performance due to hardware imbalance and poor software utilization, compared to the sum of the gross processing power of all individual components in the system," go for it. I will just use the term bottleneck. It is easier.