Discussion Is Sandy/Ivy Bridge still worthwhile for gaming with an RTX 2080 Ti? (1440p/4K)

atomicWAR

Judicious
Herald
So I was forced into a GPU upgrade when one of my two GTX 1080s took a long walk off a short pier. I was upset, as performance on my old X79 platform was good enough at 4K while SLI was still supported, and I had heard what a "huge" bottleneck older CPUs (even the newish AMD Ryzen 2000 series) are for the RTX 2080 Ti and non-Ti. While my experience as a 4K HDR gamer made me doubt those rumors were true and/or applied at such a high resolution, I was still hesitant to make the purchase all the same. I mean, let's face it, the pricing of RTX cards is a little insane. But after picking up one of MSI's newest Seahawk variants, the RTX 2080 Ti EK (non-X, i.e. a new SKU with a standard 300A stock-OC chip, 1350MHz base / 1635MHz boost), and chucking it in my rig, my fears quickly evaporated.

I was very unhappy to see that MSI cripples the newest Seahawk variant by imposing a power limit of only 110% on a custom-loop water-cooled card, but seeing that my old X79 platform with an overclocked Xeon E5-1680 V2 (see signature for full rig details) still has very solid gaming chops at 4K, a smile crept onto my face despite my displeasure with MSI. Comparing my benchmark runs against other websites, I was within a frame or two per second of even the fastest CPUs on the market at 4K, showing just how GPU-bottlenecked that resolution still is today. When I dialed things back to 1440p out of curiosity, my GPU's performance numbers tended to fall somewhere between an AMD Ryzen 1700X and 1800X: a commendable feat in itself, but the first sign of the older CPU architecture holding back my GPU, even if just a little.

There are some caveats to this whole scenario. The biggest one is that I have to disable the Spectre and Meltdown mitigations using InSpectre to achieve this performance. If I don't, my numbers drop by around 10-15%. I can instead just disable Hyper-Threading, but my numbers were still a little low, by about 3%: not a huge amount, but worth noting all the same. InSpectre is easy to use, though it does require a restart to kick in. I generally restart before playing games anyway if I've been using my PC for other things, to ensure my rig is running at peak performance. Nothing worse than a zombie task eating up CPU cycles or a memory leak hogging your RAM!

So what does all this mean? Most importantly, it means I have a viable CPU platform for gaming for another couple of years, but it also tells a story about PC gaming in general. People frequently think higher resolutions require more CPU horsepower. This simply is not true: the higher the resolution, the less work your CPU does, because your GPU can't keep up. For this reason, when someone tells me they need a CPU upgrade for better frame rates at very high resolutions, I tell them to check their CPU and GPU usage numbers, something sadly many newer or less knowledgeable PC gamers do not do before buying parts they "think" they need. If your CPU usage is under ~85% (per thread and as a whole) with GPU usage over ~95%, a new CPU will likely not change much in your gaming experience. If instead you see near 100% usage on a single thread, or worse on the CPU as a whole, then it really is time to talk about a CPU upgrade.

Of course, much of this only applies to 60-80Hz gamers. Once you clear the 90FPS mark into high-refresh-rate gaming, your CPU and GPU get on more even footing, and both need to be the best to achieve the highest frame rates possible. At the end of the day, there are a lot of folks out there in the same boat as me, gaming on older CPU platforms but wanting to upgrade their GPUs and displays. Having gamed on PC for 20+ years, I was fairly certain how my GPU purchase would turn out. I do, however, remember a time when such gaming knowledge was new to me and gray hair was something only "old" people had. I hope my latest upgrade experience can help others make informed decisions about when to purchase new hardware, and when not to ; )
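The usage-check rule of thumb above can be sketched as a tiny function. This is just an illustration of the heuristic, not a measurement tool: the function names, thresholds, and example readings are my own, and in practice you'd pull the numbers from something like Task Manager, MSI Afterburner, or GPU-Z.

```python
def diagnose_bottleneck(per_thread_cpu, gpu_util):
    """Rough classification of a gaming bottleneck from usage readings.

    per_thread_cpu: list of per-thread CPU utilization percentages
    gpu_util: GPU utilization percentage
    Thresholds follow the rule of thumb above: <85% CPU with >95% GPU
    points at the GPU; a thread pegged near 100% points at the CPU.
    """
    overall = sum(per_thread_cpu) / len(per_thread_cpu)
    if max(per_thread_cpu) >= 97 or overall >= 97:
        return "cpu-bound: a CPU upgrade should help"
    if overall < 85 and gpu_util > 95:
        return "gpu-bound: a faster CPU likely changes little"
    return "mixed: neither part is clearly holding the other back"

# Example: one thread near 100% while the GPU has headroom
print(diagnose_bottleneck([99, 40, 35, 30], gpu_util=80))  # cpu-bound
# Example: CPU loafing while the GPU is pegged (the 4K case)
print(diagnose_bottleneck([50, 60, 40, 55], gpu_util=98))  # gpu-bound
```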
 
Last edited:

Karadjgne

Titan
Herald
I'm still gaming on an i7-3770K and a GTX 970. Just saying.

4K is all GPU. Resolution has nothing to do with CPU output. 4K started out years ago limited to 30Hz, made it to 60Hz for the vast majority of monitors in use now, and 4K @ 120/144Hz panels have only just been released.

The CPU pre-renders the frames and ships them to the GPU, which puts those frames on screen according to detail settings and resolution. It's up to the GPU to either live up to that fps cap or not. If the CPU can ship 100fps to the GPU, at 1080p a 2080 Ti can easily put that on screen at max details and have room to spare. This is where you get the common misconception of a bottleneck: the CPU holding back the GPU.

However, bump that resolution to 4K and even a 2080 Ti can struggle with ultra settings and might only hit 60fps. Does that mean the GPU is now the bottleneck, since the CPU is capable of 100fps?

On a standard 4K 60Hz monitor, my i7-3770K wouldn't act any differently than an i7-9700, as frame output from the CPU won't be much of a factor, but fps output from the GPU will.
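The pipeline described above boils down to "the slower stage sets the frame rate". A minimal sketch, with illustrative numbers of my own rather than benchmarks:

```python
def effective_fps(cpu_fps, gpu_fps_at_res):
    """The displayed frame rate is capped by whichever stage is slower:
    the CPU can only prepare frames so fast, and the GPU can only render
    so fast at a given resolution and detail level."""
    return min(cpu_fps, gpu_fps_at_res)

# Hypothetical numbers: a CPU that preps 100 fps paired with a GPU that
# renders 250 fps at 1080p but only 60 fps at 4K ultra.
print(effective_fps(100, 250))  # 100 -> 1080p: CPU-limited
print(effective_fps(100, 60))   # 60  -> 4K: GPU-limited, CPU headroom wasted
```

The same model explains why a faster CPU buys nothing at 4K here: raising the first argument leaves `min()` unchanged while the second argument is smaller.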

For instance, vanilla Skyrim sees ~180fps on my PC. With over 170 scripted mods, that's cut to ~60fps because the CPU gets stomped by the mods. My 970 is at ultra for both and still has room. Bump the 1080p to 4K DSR and GPU usage goes to 99% and fps drops to 50. Now it's the GPU killing that extra 10fps; the CPU can still supply 60, but the result would still be around 50ish even with vanilla Skyrim due to GPU restrictions.

Sandy/Ivy is still viable for 4K, as long as that 4K is 60Hz. They'll suffer at 144Hz except in a few simpler games like CS:GO.
 
Last edited:

Phaaze88

Admirable
Herald
Might want to change the title to '@1440p/4K'. At 1080p the CPU will definitely bottleneck.
Before you say, "A 2080 Ti @ 1080p?! That's CRAZY!", I'm telling you now, I've seen at least 15 threads on this site alone with numbskulls having done this (I tend to hop around to different hardware sites).

Sorry to hear that about your Seahawk. That is a drag. All that potential OC headroom gone. It's just an overpriced Founder's Edition at that point.

Is SLI something you will continue to do - will you SLI 2080Ti later on? Or is it a one and done deal?
 

atomicWAR

Judicious
Herald
For now I am done with SLI. Which is saying a lot, because I have had SLI running on my primary PC ever since Nvidia introduced it with the GeForce 6 series cards way back in the day. While game coverage was never great for SLI, it justified itself to the heavy-AA (me) and high-resolution crowd (me again). Games that needed SLI to be maxed out at launch got it; those that didn't need SLI power to be maxed didn't support it... for the most part (Nvidia Inspector, anyone? Or wait for a single-GPU card that can do it).

I do believe multi-GPU will make a comeback when Intel enters the discrete GPU market; until then I likely will not go multi-GPU. From Intel's "Xe" name alone I strongly suspect multi-GPU will be a big part of their offerings. You math geeks out there know what I mean ; ) !! And Intel has the bank to spend on making adoption happen seamlessly. Couple that with their Compute Express Link (a cache-coherent interconnect) built on top of PCIe 5.0, and everything points to a rebirth of multi-GPU setups. I know a lot of that is aimed at scientific research, big data, and AI, but the potential for moving past the shortcomings of things like SLI is hard to miss. DX12 tried to do this, but it put the burden on devs to code for it. In fact, I believe this is the reason we saw support plummet in recent years, that and new rendering techniques that need cache/RAM coherency, which won't work with SLI setups using AFR.

I believe Intel's multi-GPU solution will be a very plug-and-play experience for everybody involved. Devs won't have to alter a thing and multi-GPU will just work, for once. Users can just add another card and not have to reload drivers, run DDU, hack SLI profiles, etc. I can't tell you how many times I hacked together my own SLI profiles over the years. Point being, Nvidia and AMD will quickly follow suit or perish. Hopefully multi-GPU will then finally live up to its true potential. Until that happens, or support picks back up to the days when Nvidia only needed to release a driver and an SLI profile, I am out. Titles like Darksiders 3 (OK, you can hack a profile together with Inspector) and Batman: Arkham Knight left users who had gotten used to SLI out in the cold on the third game (and there are a ton of titles like this). So for now I run a single GPU for the first time since 2004.
 
Last edited:

Karadjgne

Titan
Herald
DX12 and Windows 10 are already set up for mGPU. The problem is that most games nowadays are distributed online, and the mGPU code is quite sizable, so places like Origin and Steam really do not want the extra workload of supporting it on their servers. So game devs aren't exactly ambitious about including it in new games, especially when part of their pay comes from Nvidia and AMD, who'd prefer to sell one more expensive single GPU to handle the workload versus two or three smaller, cheaper GPUs that'd amount to the same thing.
 
DX12 and Windows 10 are already set up for mGPU. The problem is that most games nowadays are distributed online, and the mGPU code is quite sizable, so places like Origin and Steam really do not want the extra workload of supporting it on their servers. So game devs aren't exactly ambitious about including it in new games, especially when part of their pay comes from Nvidia and AMD, who'd prefer to sell one more expensive single GPU to handle the workload versus two or three smaller, cheaper GPUs that'd amount to the same thing.
The problem with mGPU now is that it's much harder to support; there's a reason why SLI/CF had strict limitations on what GPUs you could pair together (usually identical ones). In addition, SLI/CF could be forced in drivers if a game didn't offer support.

DX12/Vulkan changed all this. Now any arbitrary two GPUs could be paired. While nice in concept, you immediately run into significant load balancing problems that, speaking as a Software Engineer, are damn near impossible to deal with. In addition, responsibility for proper implementation of mGPU is now entirely on the developer; you can't force it via drivers anymore.

So now imagine you are a developer, often overworked, underpaid, and the game you are working on is due out in a month. You have two options:

Option A: Implement and test mGPU capability using any combination of NVIDIA/AMD/Intel GPUs and ensure performance works as expected for all use-cases.

Option B: Forget mGPU; you have enough things you need to get working before the game goes gold.

Obviously, Option B won the day.
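To make the load-balancing problem above concrete, here is a toy sketch of one naive DX12-era approach: splitting a frame's rows across two arbitrary GPUs in proportion to an estimate of each GPU's throughput. The function and numbers are hypothetical; the point is that the split is only as good as the throughput estimates, which vary per frame and per scene, so the faster card ends up stalled waiting on the slower one.

```python
def split_frame_rows(frame_height, gpu_throughputs):
    """Divide a frame's scanline rows across GPUs in proportion to an
    estimated throughput for each GPU. Getting these estimates right,
    per frame, for any arbitrary NVIDIA/AMD/Intel pairing, is the part
    that is 'damn near impossible' in practice."""
    total = sum(gpu_throughputs)
    rows = [frame_height * t // total for t in gpu_throughputs]
    rows[-1] += frame_height - sum(rows)  # hand rounding remainder to the last GPU
    return rows

# A 2160-row (4K) frame split across a fast and a slow GPU (3:1 estimate)
print(split_frame_rows(2160, [300, 100]))  # -> [1620, 540]
```

If the real ratio for a given scene turns out to be 4:1 rather than 3:1, the fast GPU finishes its 1620 rows early and idles, and the frame is late anyway, which is why drivers used to insist on identical cards.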
 

Karadjgne

Titan
Herald
Lol. Who isn't overworked and underpaid, except those who already have money?

And yes, it's perfectly understandable why Option B wins: there's always someone else you end up waiting on, then you're rushing to make up lost time and still meet deadlines. Which kinda fits with 'not being overly ambitious', since it's just extra work for no real return.
 

atomicWAR

Judicious
Herald
The problem with mGPU now is that it's much harder to support; there's a reason why SLI/CF had strict limitations on what GPUs you could pair together (usually identical ones). In addition, SLI/CF could be forced in drivers if a game didn't offer support.

DX12/Vulkan changed all this. Now any arbitrary two GPUs could be paired. While nice in concept, you immediately run into significant load balancing problems that, speaking as a Software Engineer, are damn near impossible to deal with. In addition, responsibility for proper implementation of mGPU is now entirely on the developer; you can't force it via drivers anymore.

So now imagine you are a developer, often overworked, underpaid, and the game you are working on is due out in a month. You have two options:

Option A: Implement and test mGPU capability using any combination of NVIDIA/AMD/Intel GPUs and ensure performance works as expected for all use-cases.

Option B: Forget mGPU; you have enough things you need to get working before the game goes gold.

Obviously, Option B won the day.

Yeah, that's exactly what I was talking about with DX12. It was one of the last nails in SLI/CrossFire's coffin. I do hope Intel steps up the multi-GPU game again. I suspect we'll see a similar requirement for GPUs to be identical or very close for frame rendering, but with cache/RAM coherency, who knows... Whatever they do, I believe it will need to be seamless and require little to no work on the dev side of things if it is to succeed.

Intel is also talking about mGPU as a way to use their iGPUs in tandem with dedicated ones, but for use cases more akin to PhysX or AI-type loads in games. Regardless, new blood in the GPU space can only be a good thing for customers. And who knows exactly what ARM's plan is in this space; I know they intend to enter it, but the details are sparse beyond a couple of news statements, at least in what I have personally read. Either way, AMD and Nvidia have let their respective multi-GPU brands slip into obscurity.
 

Karadjgne

Titan
Herald
Well, Intel did kinda sorta do the mGPU thing with Lucid, and for me it worked quite well; I did notice a difference, even with a GTX 970 @ 1080p. The paired iGPU softened a lot of edging, so I was able to move the GPU slider more toward the center rather than full-on quality. It just wasn't stable from one boot to the next and didn't support many of the games I play.
 

atomicWAR

Judicious
Herald
Sorry to hear that about your Seahawk. That is a drag. All that potential OC headroom gone. It's just an overpriced Founder's Edition at that point.
Yeah, pretty much. With my average boost being 2145MHz in benchmarking (after a full OC), my GPU does tend to fall in line with a Founders Edition. From what I have read, 2145MHz is a fairly standard average boost for those cards once fully overclocked, as is hitting +700MHz on a RAM OC.

My biggest complaint about the 110% power limit is that the card comes with two 8-pin PCIe connectors and one 6-pin, right in line with their Trio, Trio X, and Seahawk EK X. I get that they need to segment their GPUs, but 110% is a little brutal for a card with a water block, at least at the clocks it was released with. I was expecting at least a 115-120% power limit given the hardware installed; running two eight-pins and one six-pin is complete overkill with such a low limit. I know I can flash the higher power limit of the Trio X, or the higher boost frequency with the same 110% of the EK X BIOS, as those cards share the exact same 17-phase PCB. So I could flash one of those stock BIOSes, or the slightly more custom BIOS leaked for an OC competition (at least that's the rumor) with a power limit of 135%. But having just dropped over a grand on a card, as much as I like to tinker, I am not quite ready to completely void my warranty. If my frame rate starts hurting, though, I may be convinced... or if I get REALLY BORED, which is usually when warranties become an endangered species. Speaking of which, those BIOS files can be found here, along with most other higher power limit/frequency RTX 2080 Ti BIOSes for nearly every manufacturer:

https://www.overclock.net/forum/69-nvidia/1706276-official-nvidia-rtx-2080-ti-owner-s-club.html

The fact I even found this shows how pissed and... mmmK... BORED I was. Anyway, I hope it helps some poor schmuck who, like me, is left shaking their head at the ridiculous power limits imposed on this generation of cards. Remember when we could actually control the voltage? Those were the dayz....
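The overkill complaint above is easy to put in numbers. Connector capacities below are per the PCIe spec (75 W slot, 75 W 6-pin, 150 W per 8-pin); the ~300 W board-power figure is an assumption for an overclocked 2080 Ti SKU, not a measured or published value for this card.

```python
# What the card's cabling can physically feed vs what the BIOS allows.
SLOT_W, SIX_PIN_W, EIGHT_PIN_W = 75, 75, 150

deliverable = SLOT_W + SIX_PIN_W + 2 * EIGHT_PIN_W  # 450 W of connector capacity
base_power = 300                                    # assumed board power, watts

for limit_pct in (110, 120, 135):
    allowed = base_power * limit_pct / 100
    print(f"{limit_pct}% limit -> {allowed:.0f} W of {deliverable} W available "
          f"({allowed / deliverable:.0%} of connector capacity)")
```

Under these assumptions the stock 110% limit caps the card around 330 W, leaving well over a hundred watts of connector headroom unused, which is the gripe.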
 
Last edited:
