Overclocking the Asus GTX 970 Strix


BBQ_King · Reputable · Aug 22, 2014
Will the GTX 970 Strix reach 1500+ MHz even though it only has a single 8-pin power connector?

Also, will the performance boost be significant?
 
Solution
Several cards have already demonstrated hitting a 1500 MHz boost clock... the Asus wasn't one of them.

http://www.guru3d.com/articles_pages/msi_geforce_gtx_970_gaming_review,26.html
http://www.guru3d.com/articles_pages/gigabyte_geforce_gtx_970_g1_gaming_review,26.html
http://www.guru3d.com/articles_pages/asus_geforce_gtx_970_strix_review,26.html

http://www.bit-tech.net/hardware/graphics/2014/09/19/nvidia-geforce-gtx-970-review/2

ASUS has also trimmed the standard 2 x 6-pin PCI-E power connections down to a single 8-pin one, which has an LED to tell you when your cable is correctly connected and working. This design makes cable management easier, but there's a chance it could negatively impact the card's overclocking potential.
...
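One note on the single-8-pin worry: by the PCIe spec, the connector change doesn't shrink the power budget on paper, since the slot supplies 75 W, a 6-pin 75 W, and an 8-pin 150 W. A quick sketch of that arithmetic (spec ceilings only; the board's firmware power target is what actually gates an overclock):

```python
# Standard PCIe power budgets, in watts.
SLOT, SIX_PIN, EIGHT_PIN = 75, 75, 150

reference_design = SLOT + 2 * SIX_PIN   # typical GTX 970: 225 W available
strix_design     = SLOT + EIGHT_PIN     # Asus Strix:      225 W available
print(reference_design, strix_design)   # same ceiling on paper; any OC
# difference comes from the card's power limit, not the connector count.
```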
http://www.techpowerup.com/reviews/ASUS/GTX_970_STRIX_OC/28.html

GPU: 1290 MHz

Memory: 1990 MHz

I don't recommend pushing things to the limit, but you can experiment and find an optimal balance of stability, fan noise, and performance (see the logging sketch at the end of this post). Every card will be slightly different as well.

*Battlefield 3 improved by 12.5%, but it was already at about 115 FPS, so if you use VSync to cap at 60 FPS the overclock is pointless for this game, at least at these settings.

Personally, I'd go with 1200 MHz core and 1900 MHz memory, but I don't have the card to experiment with.
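If you do experiment, it helps to log what the card actually does under load. A minimal monitoring sketch, assuming Python 3 and NVIDIA's nvidia-smi tool on the PATH; set each offset by hand in GPU Tweak (or similar), run your stress test, and let this record alongside:

```python
import csv
import subprocess
import time

# Query fields supported by nvidia-smi (see `nvidia-smi --help-query-gpu`).
FIELDS = "clocks.gr,clocks.mem,temperature.gpu,fan.speed,power.draw"

def sample():
    """Read current core/memory clocks, temperature, fan speed and power draw."""
    out = subprocess.check_output(
        ["nvidia-smi", f"--query-gpu={FIELDS}", "--format=csv,noheader,nounits"],
        text=True,
    )
    return [field.strip() for field in out.strip().split(",")]

# One sample per second for ten minutes while the stress test runs.
with open("oc_log.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(FIELDS.split(","))
    for _ in range(600):
        writer.writerow(sample())
        time.sleep(1)
```

Any dip in clocks.gr while the temperature climbs is the boost algorithm throttling, which tells you the offset, fan curve, or case airflow needs adjusting.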


 

Other concerns from the bit-tech article quoted above...

On its custom PCB, ASUS places all eight Samsung memory chips on the front side, meaning that the backplate does not directly cool any of them. In fact, the chips are left without any contact plate or heatsink touching them, relying solely on air from the fans to cool them.

We also find a 6-phase power delivery system for the GPU, a 50 percent upgrade from stock specifications. It also uses ASUS's DIGI+ VRM controller for precise, digital voltages, as well as high quality Super Alloy Power components for buzz-free choke operation, longer capacitor lifespan and MOSFETs with a 30 percent higher voltage threshold than standard. Sadly, the memory has not been granted the same treatment. It is fed by a single phase found at the other side of the PCB, and this one does not use any special components.

The MOSFETs of the DIGI+ power phases are cooled by a small heatsink, but the VRM controller and the MOSFETs for the memory power phase are left, like the memory chips, to fend for themselves without direct cooling.




 
Solution


No, not at those temps. Besides, nVidia has locked down what you can do to these cards, both physically and legally with their partners, such that it is almost impossible to damage them. We have been running all our cards overclocked to the gills (25+%) into the 80s (°C) since the 500 series, and none have suffered any damage. Of course, the last thing ya wanted to do with the 500 series was use a reference PCB (i.e. the EVGA SC series), as those were pretty easy to damage. All the 970s I have seen so far use a custom PCB, even EVGA's, though they didn't exactly go high-end on the components or component cooling... so that one ya might be wise to be careful with.

I have seen zero reports of throttling at boost clocks of 1500 MHz. These cards can hit 1275-1350+ MHz boost clocks at stock settings, so +150 MHz is really "no bigga deal" for them.



 


I'm referring to fan noise when pushing the cards to the maximum, such as 1450 MHz+.

And don't forget, many people have internal case temperatures that are HOTTER than what a reviewer has, so the fan will have to ramp up even more.
 
So am I. Again, reviewers have pushed them to a 1500 MHz boost clock... no reported noise issues so far, and they used a temp target of 80°C. When ya start out at 29 dBA, ya have a lotta room before it gets annoying.

[Chart: fan noise at load, dBA]


By comparison, the H100i breaks 68 dBA... that's roughly 16 times louder. The Noctua NH-D15 is 33 dBA.
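For the curious, that multiplier comes from the usual rule of thumb that perceived loudness roughly doubles for every 10 dB:

```python
# Rule of thumb: perceived loudness roughly doubles per +10 dB.
gap_db = 68 - 29                 # H100i at load vs. the Strix at load
ratio = 2 ** (gap_db / 10)
print(f"{ratio:.1f}x as loud")   # ~14.9x; rounding the gap up to 40 dB gives the 16x figure
```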



 


Using GPU Tweak and enabling its "Overclocking Range Enhancement" option will unlock much higher available settings :)

 


Hey,
Do you feel that 1500 MHz core + 2000 MHz memory is optimal overall, considering you can push the GPU higher but would then have to lower the VRAM clock?

I know you can run into VRAM bottlenecking, though that's heavily dependent on resolution and other factors. The GTX 980, for example, sometimes scales slightly better at higher resolutions such as 4K because its higher ROP count makes the texture compression faster.

(BTW, people still complain about this texture compression for some reason. It's lossless, fast, and keeps the price down a bit, so I guess people will complain about anything.)

Back to the potential VRAM bottleneck issue. Again, I guess it's going to vary quite a bit, but it would be interesting to know if you've done any testing at all to compare:

a) 1500 + 2000, vs
b) 1600 + 1866

It's worth pointing out that some game benchmarks, like Metro 2033 and the Batman Arkham series, don't reflect average gameplay, and sometimes certain bottlenecks aren't common but can hit you at the worst possible moment.

*We're talking about a theoretical best-case advantage of under 7% (1600/1500) in a completely GPU-bound environment, so it may be hard to test. The worst case is about the same magnitude if VRAM-bottlenecked (2000/1866). So chances are you're looking at maybe a 3% benefit, depending on the game.
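For anyone following along, the arithmetic behind those percentages:

```python
# Best case: fully GPU-bound, so frame rate scales with core clock.
gpu_gain = 1600 / 1500 - 1    # ≈ 0.067 → just under 7% in favor of option (b)
# Worst case: fully VRAM-bound, so frame rate scales with memory clock.
vram_loss = 2000 / 1866 - 1   # ≈ 0.072 → about the same magnitude against (b)
print(f"GPU-bound gain:  {gpu_gain:.1%}")
print(f"VRAM-bound loss: {vram_loss:.1%}")
```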

Hope you're enjoying your card!

(I'm holding off, as my Asus GTX 680 TOP is playing most games at full settings and I've got a ridiculous back catalogue of Steam games, so I decided to just quit buying for another year or two and then get The Witcher 3 as my first game. I never buy games new...)

Cheers.
 
tetsuya23,

Thanks for taking the time, so I'll respond. I'm not trying to hijack the thread, but I think the main question was answered and this info is still relevant. This ended up much LONGER than I meant, but it's a bit complicated.


#1 - BUS WIDTH:
This is almost a non-issue. There is minimal latency to encode, and it's lossless compression. Most importantly, if the bus width were insufficient it would cause a severe bottleneck. For the most part I'm not seeing a problem in games, since increasing the GPU frequency often produces a linear improvement in frame rate. If the memory were causing a massive bottleneck, this couldn't happen. I'm sure we'll find areas where we get a SLIGHT bottleneck, but on the other hand it's my understanding that the method they used also saves money, so from a VALUE point of view the GTX 970 probably makes more sense.

Thus, the main differences in performance between cards come down to the GPU, and/or how efficiently SLI works on new vs. old cards. I'd expect the newer cards to have more room for improvement in some titles with driver updates.

#2 - Bus Width Part 2:
An exception is the GTX 980 vs. GTX 970 at very high resolutions such as 4K. The GTX 980 has more ROPs, and those are used to assist the texture compression, so you can lose a few percent, relatively speaking, by using a GTX 970 at high resolutions.

Generally not too significant but worth mentioning.

#3: Your comment ->
"I had the same scores for 1601 core + 1800 memory clock, and 1601 core + 2000 memory clock. Not actually sure why."

If you get the same score and the only difference is the video memory speed, that's simply because the memory isn't the bottleneck even at 1800 MHz (effectively 7200 MT/s, since GDDR5 moves data four times per clock), so increasing it further doesn't matter.

There's ALWAYS a bottleneck in a system during every second of usage. It might be the CPU, the GPU, system RAM, video RAM, or even just an artificial cap like VSync limiting the frame rate. Bottlenecks can and often do move, even during the same game, depending on how well balanced the system is.
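To put rough numbers on the memory side of that (the stock GTX 970 runs its GDDR5 at 1750 MHz on a 256-bit bus, and GDDR5 transfers four bits per pin per clock):

```python
# Memory bandwidth scales linearly with memory clock on a fixed bus.
def bandwidth_gb_s(mem_clock_mhz, bus_width_bits=256):
    effective_mt_s = mem_clock_mhz * 4                  # GDDR5 is quad-pumped
    return effective_mt_s * bus_width_bits / 8 / 1000   # bits -> bytes -> GB/s

for clk in (1750, 1800, 2000):   # stock and two overclock points
    print(f"{clk} MHz -> {bandwidth_gb_s(clk):.0f} GB/s")
# 1750 MHz -> 224 GB/s, 1800 MHz -> 230 GB/s, 2000 MHz -> 256 GB/s
```

If the GPU can't consume the stock 224 GB/s in a given scene, the extra bandwidth from a memory overclock simply goes unused, which is exactly the flat-score result above.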

#4: NVLINK:
I believe this can also be used to connect multiple GPUs on the same card. If it's fast enough, I think there's some hope of creating a "virtual" single GPU rather than using two or more GPUs with the traditional SLI alternate-frame-rendering method.

I have to read more, but if this approach were successful, it would appear to a game as a single GPU, so it wouldn't need the driver optimizations that SLI (AFR) requires. It would also allow more cost-efficient solutions, since cost can increase disproportionately with die size; there's a hard limit on die size anyway, which you could bypass by having multiple GPUs.

I also question WHEN this would replace PCIe, since that would require a new motherboard. I think it would be a really hard sell until PCIe v3 gets saturated, which is quite a ways off, and by then maybe we'll see PCIe v4 anyway. Also, it's not clear whether it would support AMD cards, so it may end up a hard sell or even a source of backlash over "proprietary" technology.

Maybe we'll just see this in servers and on individual graphics cards, but not gaming motherboards. We'll see, I guess.

#5: 8GB limit?
There will be 8GB GTX 980s for sure.

*While it's true that there's a relationship between the amount of video memory and the video bus speed (or more specifically the effective speed, since we have texture compression), it's not necessarily a direct relationship.

Let's say you had a 4GB card but more video data than that. Ideally, the overflow would be stored in system RAM and moved over as needed. This happens in most, possibly all, games.

With 8GB you can store more data without having to swap. But what's the point, you might ask, since you can just move data over from system RAM as needed?

In theory you could; however, we're already seeing indications that some games aren't going to be well coded. WATCH DOGS, for example, needed a 4GB card at launch for Ultra settings, or the frame rate plummeted when it had to sort out buffering the new textures. That was an issue for 2GB and even 3GB cards, but likely not 4GB ones.
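A back-of-the-envelope sketch of why that swapping shows up as stutter; the 15.75 GB/s figure is the theoretical ceiling of PCIe 3.0 x16, and real copies come in lower:

```python
# Time to shuttle texture data from system RAM to VRAM over PCIe 3.0 x16.
pcie_gb_s = 15.75                       # theoretical peak; real-world is lower
for textures_gb in (0.25, 0.5, 1.0):
    ms = textures_gb / pcie_gb_s * 1000
    frames = ms / (1000 / 60)           # how many 60 FPS frames that costs
    print(f"{textures_gb:4.2f} GB swap ≈ {ms:5.1f} ms ≈ {frames:.1f} frames at 60 FPS")
```

Even a quarter-gigabyte swap eats most of a frame, so a burst of texture streaming is felt as a visible hitch.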

So I would say the main advantage of 8GB over 4GB is going to be preventing periodic stutters in poorly optimized console ports. Maybe that will be a rare occurrence, but with the new consoles only about a YEAR old it's hard to make predictions. I made the incorrect prediction that 2GB on PC would last at least another few years, since developers should be able to stream texture data more efficiently.

DX12 should support a tiled streaming method that hopefully ends up in game engines and works efficiently. That's for streaming from system to video memory without any obvious stutter, which would reduce the VRAM requirement. Maybe it will work well, but if history is a good guide, we'll see poorly optimized games on PC that need faster processors and more memory to handle what efficient coding should solve.

#6: G-SYNC:
You didn't mention it, but you talk about wanting to upgrade. IMO, a 2xGTX 970 setup should really be sufficient for many years. If you want to upgrade anything, then get a high-res G-Sync monitor.

To me, the "perfect" gaming monitor is a 4K, 27" to 30" G-Sync monitor with the color fidelity of an IPS panel but 2 ms or lower pixel response times to minimize ghosting. It should also have a 120 Hz or higher refresh rate, with LightBoost or similar tech working at the same time as G-Sync.

I'd like to see this for under $500 as well, or close to it. Also, I've heard good things about slightly curved MONITORS (not TVs), though I'd have to experience one for myself. I find 16:9 on a large high-res monitor is already sufficiently wide, but it's possible a curved monitor might make a 21:9 resolution such as 5040x2160 work better.

Finally, I'd like to say that while 4K gaming (i.e. 3840x2160) has been in the news a lot, you can set the resolution lower to 2560x1440 and get almost exactly the same experience at DOUBLE the frame rate (i.e. 60 FPS instead of 30 FPS). In terms of buying a graphics card, that can mean getting almost exactly the same experience from a single GTX 970 as from 2xGTX 980s.

Since SLI rarely scales perfectly, you need better than a GTX 970 just to compensate for that, and of course you still have to make up the 2x frame-rate disparity. Put another way, you'd spend about $1200 on graphics cards to run 4K and get a nearly identical experience to spending $350 and running at 2560x1440. I guess my point is: SPEND YOUR MONEY WISELY!!
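The frame-rate half of that claim follows straight from the pixel counts:

```python
# 4K pushes 2.25x the pixels of 1440p, so a mostly GPU-bound game
# runs at very roughly half the frame rate.
uhd = 3840 * 2160   # ≈ 8.29 million pixels
qhd = 2560 * 1440   # ≈ 3.69 million pixels
print(f"4K renders {uhd / qhd:.2f}x the pixels of 2560x1440")  # 2.25x
```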

(The ONLY time 4K makes sense to me for gaming is if you already run at a high frame rate with max quality settings, and the extra resolution helps with anti-aliasing more than applying higher AA at 2560x1440 would.)

Cheers.

 


Sorry you felt I was insulting you. It was just meant as a discussion.

I'm curious WHERE exactly you thought I'd insulted you. You posed a couple of questions, like the one about bus width, which I answered; the other topics I brought up myself.

Nowhere do I say you don't know what you're talking about, or anything remotely insulting that I can find.

I'm not sure what question I asked that I answered myself either. Anyway, peace out.
 
I can reach a 1575 MHz core clock and a 7600 MHz (effective) memory clock with my Strix. Fully stable in games and synthetic tests like Fire Strike. Still running on stock voltage (1.210 V, I believe). I don't like pushing the memory clock, since it's the easiest way to send your card to the grave.
 