GPU specs and clock speed, what's the difference?

esprade

Commendable
Oct 27, 2016
81
0
1,630
So I've got a 1050 Ti that I overclock and I get good performance from the card; my overclock is stable at 1963 MHz, and obviously it's a 4GB card. I'm soon going to buy a GTX 1060 6GB, but other than the memory, I really don't understand what difference the clock speed makes at all. If my 1050 Ti is stable floating around 2000 MHz, then exactly what benefit is a 1060 going to give me at base clock speeds? And if I overclock it to 2100 MHz, which seems to be the 1060's stable point, exactly what benefit am I going to get other than 2GB of extra RAM? What makes the 1060 such a better and more expensive card when the clock speeds really aren't that different?
 
Solution
That maths is essentially correct, and gaming is rendering. CUDA cores are the parallel processors that do all of the work in the GPU (the latest Nvidia architectures are slightly different). A 900-series CUDA core will be less effective than a 1000-series one, but better than a 700-series one. So you can't compare between generations, but you can compare within a generation.

Not sure why you think 30%; just going from 2000 to 2100 only gets you a 5% bump, but 2,688,000 / 1,536,000 is 1.75, so the 1060 is 75% faster, which, like I said, is just short of being twice as fast.
The benefit you'd get is that the 1060 6GB will give you almost 50% better performance, and really, performance is the only thing that matters, not the clocks or any other spec. I'd suggest staying away from the spec sheet when you don't know what you're looking at. If you want to learn, there is a lot more to the performance equation than clocks; the more obvious items on the spec sheet are CUDA core count and memory bus width.

But also, going from a 1050 Ti to a 1060 is a waste of money. While it is a noticeable increase, it's not worth the cost. If you're selling the 1050 Ti to make up the difference, that could help, but it depends on how much you'd get for it.
 

esprade

Commendable
Oct 27, 2016
81
0
1,630
I don't really see it as a waste of money. I see €500 for a 1070 Ti and another €500 for a 1440p display as a waste of money, since that extra money is better spent on other things I actually need it for, rather than on 1080p gaming, which honestly needs nothing more than a 1060 6GB to maximise. The GPU upgrade, if the performance really is as much better as you say, will be more than justified to get more juice out of 2019's games, such as better lighting, shadow effects, and higher fps. If that's not the case and it really is a waste of money, then my original question still stands unanswered: what's the big difference?
 

esprade

Commendable
Oct 27, 2016
81
0
1,630
So the card has a set number of CUDA cores, and those cores are pushed at a particular clock speed, and that's what determines the gaming performance? Is this relevant to gaming or just GPU rendering? I was never clear exactly what CUDA's function was.

Let's do some random meaningless maths.

768 cores x 2000 MHz = 1,536,000
1280 cores x 2100 MHz = 2,688,000

So that's about a 60% performance increase based purely on clock speed and overclocking alone? Not including the obvious extra 2GB of memory, and the memory overclock as well, which I haven't discussed?
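Just to make that arithmetic explicit, here's the same back-of-the-envelope calculation as a tiny Python sketch. The core counts and clocks are simply the figures quoted in this thread, and the result is only the naive spec-sheet estimate, not a real performance prediction:

```python
# Naive "cores x clock" comparison using the figures quoted in this thread:
# 768 cores at ~2000 MHz (overclocked 1050 Ti) vs 1280 cores at ~2100 MHz (1060 6GB).
# This deliberately ignores memory bandwidth, core scaling efficiency, architecture, etc.

cores_1050ti, clock_1050ti = 768, 2000   # MHz
cores_1060, clock_1060 = 1280, 2100      # MHz

score_1050ti = cores_1050ti * clock_1050ti   # 1,536,000
score_1060 = cores_1060 * clock_1060         # 2,688,000

clock_only_gain = clock_1060 / clock_1050ti - 1   # 0.05 -> only a 5% bump from clocks alone
naive_gain = score_1060 / score_1050ti - 1        # 0.75 -> 75%, not 60%

print(f"clock-only gain:          {clock_only_gain:.0%}")
print(f"naive cores x clock gain: {naive_gain:.0%}")
```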
 
I said cores and memory width were the more obvious factors, but the spec sheet is about as pointless as doing the maths. It's not just about cores and clocks: cores don't scale anywhere near that well, which is one reason you only see around a 50% increase in practice. Cores also scale differently between generations because of architectural differences and efficiency; performance per core hasn't always increased each generation, although it has for the last three. The maths isn't that simple, which is why real-world benchmarks are better. There's no point working out that 1 + 1 = 2 when a benchmark tells you the answer is actually 5 because you oversimplified the equation. Actual performance is what matters, not calculated numbers or numbers on a spec sheet.
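To put a rough number on how far off the simplified maths ends up, here's a small sketch comparing the naive cores-times-clock estimate against the ballpark real-world uplift mentioned in this thread (the ~45-50% figure is only the rough benchmark-based number quoted here, not a measured result):

```python
# Compare the naive spec-sheet estimate with the rough real-world uplift
# quoted in this thread (~45-50% in benchmarks, varying by game).

naive_gain = 2_688_000 / 1_536_000 - 1   # 0.75 -> 75% from cores x clock
real_world_gain = 0.45                   # ballpark figure from the thread, not a measurement

# Implied "scaling efficiency": how much of the naive uplift actually shows up.
efficiency = (1 + real_world_gain) / (1 + naive_gain)

print(f"naive estimate: {naive_gain:.0%}, real world: {real_world_gain:.0%}")
print(f"implied scaling efficiency: {efficiency:.2f}")   # ~0.83, so roughly 17% is lost
                                                         # to everything the simple maths ignores
```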
 

esprade

Commendable
Oct 27, 2016
81
0
1,630
It's a better way of understanding it, as I've seen benchmarks of the card running at 2000 MHz and 3000MB, which is exactly what my 1050 Ti runs at, so that information means absolutely nothing to me. Looking at fps alone doesn't explain to me how, why, and by how much the card gets its performance increase.
 
The 3GB version has lower specs than the 6GB version, but if one GPU does 40 fps and another does 60 fps, that shows you by how much. I understand that it can help with understanding, but it's a simplified estimate. You said 60%, he said 75%, but the real world is closer to 45%, which is quite a bit lower. How much would you need to know to accurately estimate the drop caused by real-world inefficiencies?

The why and how is a complicated matter, and the specs are only part of it, since different games will also vary widely, anywhere from 30-60%. You can't go by just cores and clocks, as explained; you'd have to guesstimate core scaling efficiency, which is nearly impossible. Cores aren't the only part of the SM either, so you'd also have to look at TMU and ROP counts. You also need to look at a game's VRAM usage and resolution, and how memory bandwidth affects performance in relation to that, which is itself determined by memory clock speed and bus width. How about cache size and how that affects performance?

You could only go by clocks when it's the same GPU, to estimate what performance % increase an overclock % gives you, but even that won't scale perfectly, and a benchmark could show more. There are also the temperatures of different GPU models and how they affect the boost speed the card actually runs at, so you can't even rely on the spec sheet, which doesn't list actual boost speeds, and the listed boost spec isn't indicative of what you really get. There are other factors as well, but the simplified version was way off, so how helpful is that wrong number when you're trying to figure out the performance increase from a GPU upgrade?
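As a rough illustration of the bandwidth point above, here's how memory bandwidth falls out of memory speed and bus width. The data rates and bus widths below are the commonly quoted reference figures for these two cards, so treat the exact numbers as approximate:

```python
# Peak theoretical memory bandwidth = effective memory data rate x bus width / 8.
# Figures below are commonly quoted reference specs, not measured values.

def mem_bandwidth_gbs(effective_gbps: float, bus_width_bits: int) -> float:
    """Peak theoretical memory bandwidth in GB/s."""
    return effective_gbps * bus_width_bits / 8

cards = {
    "GTX 1050 Ti (4GB, 128-bit, ~7 Gbps)": (7.0, 128),
    "GTX 1060 6GB (192-bit, ~8 Gbps)": (8.0, 192),
}

for name, (gbps, width) in cards.items():
    print(f"{name}: {mem_bandwidth_gbs(gbps, width):.0f} GB/s")

# Roughly 112 GB/s vs 192 GB/s, i.e. about 70% more bandwidth on the 1060 6GB,
# on top of the extra cores and before clocks even enter the picture.
```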