[SOLVED] Overclocking to reduce bottleneck?

Solution
I'm sorry. I'm just going to have to laugh at that. There's absolutely no such thing as a cpu being 15% too little for a gpu. You got that from a bottleneck calculator? Have a little advertisement for a 3800x from Amazon or Newegg on the bottom?

A cpu pre-renders every frame according to the game code. It'll do so at 100% of its ability (usage is entirely different). It takes time to place every object, associate every touchable object, give everything shape, form, shadow etc. The number of times it can fully complete that task in 1 second is your fps cap.

That info gets sent to the gpu, which finish-renders it, giving color, dimension, texture, shading etc and paints that picture onscreen. It'll do so to 100% of its ability (again, usage is different) and does so according to detail settings, resolution and post-processing effects. The number of painted pictures it can complete in one second is the fps you see on a counter.
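Either stage's cap is just the inverse of the time that stage spends on one frame. A minimal sketch (the frame times are made-up numbers, not measurements):

```python
def fps_cap(frame_time_ms: float) -> float:
    """How many times per second a stage can fully complete its per-frame work."""
    return 1000.0 / frame_time_ms

# A cpu that needs 10 ms to fully pre-render one frame caps the game at 100fps.
cpu_cap = fps_cap(10.0)   # 100.0

# A gpu that needs 16.7 ms to paint one frame tops out just under 60fps.
gpu_cap = fps_cap(16.7)
```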

Cpu doesn't affect how a gpu works, gpu doesn't affect how a cpu works, they are entirely independent of each other.

So let's say a pretty complex and detailed game code like gta5 allows for 100fps on your cpu. It sends 100 pre-rendered frames to the gpu every second. The gpu then tries to paint all 100 frames on screen every second. Your resolution is 1080p and details are low, so a 1080ti will have absolutely no issues painting all 100 and has room left for more. Bump that to ultra, add hairworks and now the 1080ti gets 90fps output. Drop the hairworks, back to 100. Regardless of whether the gpu was capable of 150fps or not. You are cpu capped at 100.

Change the resolution to 1440p and throw out the above: the cap is now on the gpu, which struggles to get 60fps at ultra. Lowering detail levels to medium would let it paint 120fps, but you'd still be capped at 100fps, because resolution doesn't affect the cpu. That's a gpu aspect.
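The whole argument boils down to a min(): the fps you actually see is whichever stage completes fewer frames per second. A sketch using the hypothetical GTA5 numbers from the example above (illustrative figures, not benchmark data):

```python
def onscreen_fps(cpu_fps_cap: float, gpu_fps_cap: float) -> float:
    """The displayed framerate is limited by the slower of the two stages."""
    return min(cpu_fps_cap, gpu_fps_cap)

# The cpu can prepare 100 frames/s regardless of resolution or detail settings.
CPU_CAP = 100

# 1080p low: the 1080ti could paint 150 frames/s, but only receives 100.
assert onscreen_fps(CPU_CAP, 150) == 100   # cpu-capped

# 1080p ultra + hairworks: gpu output drops to 90 frames/s.
assert onscreen_fps(CPU_CAP, 90) == 90     # gpu-capped

# 1440p medium: the gpu could paint 120, but the cpu still only sends 100.
assert onscreen_fps(CPU_CAP, 120) == 100   # cpu-capped again
```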

Change games, different stories, different fps cap, different fps onscreen, different effects from post processing.

With a 1080ti, you have the full ability to run any graphics settings you wish, any post processing from physX to hairworks, and STILL get maximum fps, especially at 1080p and in most games at 1440p. Most gpus cannot.

The cpu doesn't affect the gpu or vice-versa, so how a bottleneck calculator can tell beyond a doubt that a cpu is 15% too small is beyond my comprehension. There are simply far too many variables: changes, differences, cpu OC, gpu OC, ram speeds, settings, preferences, effects, GAMES, to make a definitive interpretation.

That's like a salesman telling you that you must buy a new higher performance car because even though the speed limit is 70mph on the highway, you take 9 seconds at full gas to get there and his new car only takes 7 seconds and has a top speed of 130mph.

So what?

View: https://youtu.be/3BqKkoFAdoA


All graphics settings the same, ignore the benchmark fps numbers, see if you can physically see ANY difference in picture quality or ability.
 
Haha, that's funny....

Bought an i7-3770k and paired it with a gtx970. On a whim, I checked a calculator like that one. It said I was awesome, no bottleneck, doin great, and the only thing better would have been a gtx980 (at the time).

Few days after the release of the i7-4790k, I checked again, just curious. 20% bottleneck! My cpu was way slow, needed to upgrade to at least the i5-4690k or I would see serious slowdowns and inability to play games to their full extent. Even offered pricing on the i5, the i7 and a 980ti, with a discount coupon for a combo.

Really?

I'm so glad that calculator knows exactly what games I play, what settings, what resolution, what ram, what storage, multi-player or single player, just so it can tell me I need to upgrade and offer me links to its sponsors who will be more than happy to take my money for the 3-5fps gains that upgrade would get me.

Wait! Did I say that at 4.9GHz on cpu and 128% OC on that 970 I can hit 300fps in CSGO? OMG, I'm missing out on 15% performance!

Oh. 60Hz monitors....

The games run smooth, at the graphics settings I want. That calculator deserves nothing but my middle finger.
 
Perfectly said Karadjgne. Every bottleneck calculator or cpu benchmark gives me bad scores and literally a 100% bottleneck or more.

I'm still running perfectly fine with first-generation i7 processors paired with whatever I want: a 970, 980 or 980ti. 200-300fps in CSGO, 240fps in Fortnite at 1080p.

I'm actually maxing out my gpu usage even at 1920x1080 with, for example, the 980. This is with a 4.0 to 4.4GHz overclock on the cpu.

That overclock on the i7-3770k sounds insane, 4.9GHz? Damn, I'd probably need 1.6v vcore for that lol. That probably beats some 4th-5th gen cpus easily.
 
I've actually clocked that cpu to 5.0GHz at 1.42v, but dropped it to 4.9GHz at 1.328v, which it sat at for 6 years. Then the fan bearings died on the nzxt Kraken X61 (yes, the FANS, you aio pump haters!) after 6 years of 24/7/364 usage, and I got a good deal on a Cryorig R1 Ultimate, brand new, open box, for $40. It was of course considerably louder than the AIO (yes, I said LOUDER), so I ended up dropping the OC to 4.6GHz at 1.19v.

And the cpu is not delidded, and has about 6°C between the hottest and coldest cores.

Whoever decided it was a waste of time to keep batch records after Ivy Bridge was a complete tool.

Imho, Intel dropped the ball massively after Ivy Bridge. The little 5% tweaks in IPC from successive generations over the last 8 years are chump change after the giant leaps from 775 to 1156 to 1155. 4 platforms and 8 cpu lines later, they are just now getting the solid 5.0GHz numbers that ppl got 8 years ago, the hard way, by themselves.
 
I actually need around 1.42v to reach 4.2GHz on the x3470, but these are older generation cpus. I had no problems running vcore at around 1.4v for 5+ years with the i7-870 or i7-875k; there's no performance loss even for 24/7 use.

One thing I am testing now is higher ram speed and NB frequency. Usually I have kept VTT voltage below 1.25v, since 1.21v is the max Intel recommends for Lynnfield. I can get up to 2500MHz / 205 BCLK with decent ram timings of 11-11-11-30, but this needs about 1.325v VTT, and I'm not sure whether this will kill the memory controller inside the cpu in the long run.
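For reference, on these platforms the ram speed is derived from BCLK times a fixed memory ratio, which is why pushing BCLK drags the memory up with it. A sketch; the ratio values are the usual Lynnfield DDR3 multipliers, and the 12x figure landing near the quoted ~2500MHz is my assumption:

```python
def mem_clock_mhz(bclk_mhz: float, mem_ratio: int) -> float:
    # Memory clock = BCLK x memory ratio (Lynnfield DDR3 ratios: 6, 8, 10, 12).
    return bclk_mhz * mem_ratio

# Stock-ish: 133 MHz BCLK with a 10x ratio -> DDR3-1330.
assert mem_clock_mhz(133, 10) == 1330

# 205 MHz BCLK with a 12x ratio -> 2460 MHz, in the ballpark of the quoted 2500.
assert mem_clock_mhz(205, 12) == 2460
```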

Delidding will drop about 10°C on all cores for the x3470, i7-870 etc.
 
Just bought an x58 motherboard (Westmere) and a Xeon x5675 cpu, total price for both 60. I'm already overclocked from 3.06GHz to 4.25GHz on all cores with just 1.30v vcore. This is the same generation as Lynnfield but seems to handle overclocking a lot better, and this is a 6-core 12-thread cpu.

I got to 225 BCLK from the stock 133, with uncore frequency at 3.75GHz. Memory is hitting a ceiling at 2000MHz.
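The core clock itself is just BCLK times the cpu multiplier, so the arithmetic roughly checks out like this (the exact multipliers are my assumptions, inferred from the quoted frequencies):

```python
def core_clock_mhz(bclk_mhz: float, multiplier: int) -> float:
    # Core clock on Westmere = base clock (BCLK) x cpu multiplier.
    return bclk_mhz * multiplier

# X5675 stock-ish: 133 MHz x 23 = 3059 MHz, i.e. the rated ~3.06 GHz.
assert core_clock_mhz(133, 23) == 3059

# Overclocked: 225 MHz x 19 = 4275 MHz, close to the quoted 4.25 GHz.
assert core_clock_mhz(225, 19) == 4275
```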

What benchmark do you use? Maybe we could compare. Maybe I can finally get close to your score with a previous-generation processor 😀