[SOLVED] MSI Afterburner showing wrong memory clock

Sep 14, 2020
Hello, I have got a little problem with MSI Afterburner.

When I click the 'Reset' button, it sets the core clock to 810 MHz, which is the real default value. But the memory clock is set to 2010 MHz, when the real value is 1005 MHz.

I used TechPowerUp GPU-Z to check my default clocks.

When the memory clock is set to 2010 MHz in Afterburner, GPU-Z shows 1005 MHz as the current memory clock. Look at the following comparisons, please. See the highlighted parts...

[Screenshots: MSI Afterburner vs GPU-Z clock readings, with the memory clock values highlighted]
So, the clock values differ between the two programs, and I don't know which one is correct, because both are trusted by the community.

By the way, I downloaded EVGA Precision X and it showed the memory clock as 2010 MHz too... However, it didn't work properly; I couldn't even change the fan speed, so I uninstalled it. Nvidia System Monitor is a piece of crap. It doesn't even load, it just throws a memory error...

Here are my specs.
Graphics Card: Nvidia GeForce GTX 560
Processor: i5-2400
RAM: 4GB
 


Barty1884

Retired Moderator
I won't claim to know the full ins & outs, but it's something to do with 'true' vs 'effective'.

DDR = Double Data Rate, so numbers may have to be doubled, depending on how an application reports.
Similar to applications that report 1600MHz DDR3 when another will report 800MHz. Same thing, different metric.

GDDR differs further; it has something to do with bandwidth. GDDR5 should be 4x (IIRC), meaning an effective 2010MHz should be showing ~500MHz in some applications - so one of your GPU-Z screenshots looks correct.


As an example. AB reports 7000MHz on my 2070Super, GPU-Z = 1750MHz.
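If it helps, that 'true vs effective' relationship is just a multiplication. A minimal Python sketch (the function name is mine; the multipliers are simply the ratios implied by the numbers quoted in this thread):

```python
# 'Effective' clock = base (command) clock x a multiplier that depends
# on the memory type and on how the tool chooses to report it.
def effective_clock(base_mhz, multiplier):
    return base_mhz * multiplier

# GTX 560 numbers from this thread: GPU-Z 1005 MHz, Afterburner 2010 MHz (2x)
print(effective_clock(1005, 2))   # 2010
# 2070 Super numbers: GPU-Z 1750 MHz, Afterburner 7000 MHz (4x)
print(effective_clock(1750, 4))   # 7000
```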
 
Sep 14, 2020
I won't claim to know the full ins & outs, but it's something to do with 'true' vs 'effective'.

DDR = Double Data Rate, so numbers may have to be doubled, depending on how an application reports.
Similar to applications that report 1600MHz DDR3 when another will report 800MHz. Same thing, different metric.

GDDR differs further; it has something to do with bandwidth. GDDR5 should be 4x (IIRC), meaning an effective 2010MHz should be showing ~500MHz in some applications - so one of your GPU-Z screenshots looks correct.


As an example. AB reports 7000MHz on my 2070Super, GPU-Z = 1750MHz.
Sorry, but I still don't understand. Don't know why they decided to confuse people that way...

So, what is my default memory clock in MSI Afterburner's terms? Is it safe to set it to 2010 MHz? I don't wanna mess it up by overclocking...

Thanks!
 

Barty1884

Retired Moderator
Sorry, but I still don't understand. Don't know why they decided to confuse people that way...
I'm not sure who the 'they' is you're referencing.... It's the same thing, measured in a different way. There's no 'right' way, just different.

So, what is my default memory clock in MSI Afterburner's terms? Is it safe to set it to 2010 MHz? I don't wanna mess it up by overclocking...
2010MHz, if you reset AB and that's what's showing.

Yes, 2010MHz is 'safe', because it's also 1005MHz by another metric.

At the end of the day, it doesn't matter which you work based on, as long as you're consistent. If overclocking, it would be smart to work on AB or PX1, as that's where you'll implement changes.

As you increase the effective clock in AB or PX1, everything will scale. If you do +50MHz in AB or PX1, it'll work back to +25MHz or +12.5MHz as measured in GPU-Z.

As I mentioned, I can't fully explain the ins & outs - I'll see if I can get some assistance to explain a bit better.
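To make that scaling concrete, here's a small Python sketch of how an offset applied to the effective clock reads back at the base clock (names are mine; the divisors follow the +50 → +25/+12.5 example above):

```python
# An offset on the effective clock divides back down by the same
# multiplier when read as the base clock in a tool like GPU-Z.
def base_offset(effective_offset_mhz, multiplier):
    return effective_offset_mhz / multiplier

print(base_offset(50, 2))   # 25.0 (double data rate)
print(base_offset(50, 4))   # 12.5 (quad-pumped)
```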
 
Reactions: Ben_MacTavish
Sep 14, 2020
I'm not sure who the 'they' is you're referencing... It's the same thing, measured in a different way. There's no 'right' way, just different.



2010MHz, if you reset AB and that's what's showing.

Yes, 2010MHz is 'safe', because it's also 1005MHz by another metric.

At the end of the day, it doesn't matter which you work based on, as long as you're consistent. If overclocking, it would be smart to work on AB or PX1, as that's where you'll implement changes.

As you increase the effective clock in AB or PX1, everything will scale. If you do +50MHz in AB or PX1, it'll work back to +25MHz or +12.5MHz as measured in GPU-Z.

As I mentioned, I can't fully explain the ins & outs - I'll see if I can get some assistance to explain a bit better.
Thanks a lot for the help. I think I understand it now...
 

anort3

Titan
Moderator
DDR = double data rate. DDR moves one bit of information on the rising edge and one on the falling edge of the clock cycle, so each clock cycle carries two transfers. Programs tend to report either the real speed of DDR in megahertz (MHz) or the effective speed in megatransfers per second (MT/s).

Now for a graphics card using GDDR5, the Graphics (G)DDR RAM is 'quad pumped': it can move 2 bits on the rising edge and 2 on the falling. That can add to the confusion. Real speed might be 1000 MHz, but 'effective' speed is 4000 MT/s.
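The MHz-to-MT/s relationship described above, as a quick Python sketch (function name is mine; transfer counts are the ones given in the post):

```python
# Effective speed in MT/s = real clock in MHz x transfers per clock cycle
# (2 for DDR: one per edge; 4 for 'quad pumped' GDDR5).
def effective_mts(real_clock_mhz, transfers_per_cycle):
    return real_clock_mhz * transfers_per_cycle

print(effective_mts(800, 2))    # 1600 (the DDR3 example earlier in the thread)
print(effective_mts(1000, 4))   # 4000 (the GDDR5 example above)
```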
 
Sep 14, 2020
DDR = double data rate. DDR moves one bit of information on the rising edge and one on the falling edge of the clock cycle, so each clock cycle carries two transfers. Programs tend to report either the real speed of DDR in megahertz (MHz) or the effective speed in megatransfers per second (MT/s).

Now for a graphics card using GDDR5, the Graphics (G)DDR RAM is 'quad pumped': it can move 2 bits on the rising edge and 2 on the falling. That can add to the confusion. Real speed might be 1000 MHz, but 'effective' speed is 4000 MT/s.
Thanks for the explanation!
 
