[SOLVED] Can't understand that VRAM memory frequency

I don't know what software that is, which cards are in question, which drivers are in use, etc., but my guess would be a simple calculation error somewhere. If the card is performing close to what it should, then everything is likely fine.

Not sure what your second question is. Both cards use GDDR6, so the actual clock speed would be 7000 MHz, with communication happening on both the rising and falling edges of the clock cycle, hence the term 'effective' memory speed, which you seem to be familiar with.
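To put rough numbers on that, here's a quick Python sketch of the arithmetic (the 7000 MHz figure is just the one mentioned above; adjust it for whatever your card is rated at):

    actual_clock_mhz = 7000                      # clock the memory is driven at
    transfers_per_cycle = 2                      # data moves on rising and falling edge
    effective_speed_mhz = actual_clock_mhz * transfers_per_cycle
    print(effective_speed_mhz)                   # 14000 'MHz' effective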
 
There's a reference clock source, which is 875 MHz in this case. This is doubled to form the actual 'memory clock', so 1750 MHz. That is then doubled again to form the write clock (WCK). GDDR6 is quad data rate, so the data rate is four times WCK. So there are four different clock speeds that could be listed, all of which would be correct.
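If it helps, here is that chain of doublings written out as a small Python sketch (numbers taken straight from the description above, nothing measured):

    reference_clock_mhz = 875                    # what Wattman reports
    memory_clock_mhz = reference_clock_mhz * 2   # 1750 MHz, the actual 'memory clock'
    write_clock_mhz = memory_clock_mhz * 2       # 3500 MHz, WCK
    data_rate_mtps = write_clock_mhz * 4         # 14000 MT/s, quad data rate
    print(reference_clock_mhz, memory_clock_mhz, write_clock_mhz, data_rate_mtps)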

Techpowerup said:
Another issue is that Wattman reports memory clocks as "875 MHz".. it's actually 1750 MHz—twice that. What Wattman reports is the frequency of the memory controller's internal clock source, not the actual clock the chips get driven at. How did nobody notice this? My solution for GPU-Z is to simply multiply all memory clocks on Navi by two, which avoids confusing the hell out of people as they may otherwise wonder whether their card is broken.
Link

You can also look at figure 5 here

What you actually see depends on which of these values the GPU provides, and how the monitoring utility presents them.

If you just want to know the effective data rates for the cards, they're all listed on the Wikipedia pages for the RTX 20 and RX 5000 series.
 