I'm not having problems, but I have a technical question that I couldn't articulate with the right search terms to get any results online. Does anybody know exactly why processors show inconsistent real-time clock speeds, and whether this comes down to architecture, voltage, sensors, or software? I've seen a lot of enthusiast and professional coverage of processors and overclocking, and these inconsistencies are sometimes mentioned but never explained.
For example, my Intel i7-8700K is overclocked to a stable 4.8 GHz across all six cores. The real-time clock speed reads 4800 MHz most of the time, but it also varies between 4797.6 and 4802.4 MHz, averaging ~4799.6 MHz. The fluctuation is consistent and even across all six cores. I'm noticing a similar pattern with the uncore clock, and different sensor programs show the same results. Windows 10 Task Manager isn't even close on the clock speed, but I'm not worried about that since everything else (HWiNFO64, CPU-Z, CoreTemp, the BIOS) displays it correctly.
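For what it's worth, the exact range quoted above is what you'd get if the tools compute core frequency as multiplier × sampled base clock (BCLK), with the 100 MHz BCLK measured with a tiny ±0.05% wobble that the 48× multiplier then amplifies. A minimal sketch of that arithmetic (the BCLK sample values are hypothetical, not real sensor data):

```python
# Illustrative sketch: many monitoring tools derive the reported core clock
# as multiplier x measured base clock (BCLK). A small jitter in the sampled
# BCLK is scaled up by the multiplier. Sample values below are hypothetical.

MULTIPLIER = 48  # 48 x 100 MHz nominal BCLK = 4800 MHz target


def effective_mhz(bclk_mhz: float, multiplier: int = MULTIPLIER) -> float:
    """Reported core clock for one BCLK sample, in MHz."""
    return multiplier * bclk_mhz


# Hypothetical BCLK samples jittering +/-0.05 MHz around the nominal 100 MHz:
samples = [99.95, 100.00, 100.05, 99.99, 100.01]
readings = [effective_mhz(b) for b in samples]

print(min(readings), max(readings))  # spans roughly 4797.6 to 4802.4 MHz
print(sum(readings) / len(readings))
```

So a spread of only 0.1 MHz at the base clock already accounts for the ~4.8 MHz spread seen at the core, without the core clock itself actually wandering.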
In the grand scheme this doesn't actually matter, and the only impact I've seen (and how I noticed it in the first place) is in CoreTemp: it doesn't use a dynamic clock-speed reading, so it just reports the clock speed sampled when the instance was initialized. If the clock was at ~4801.15 MHz at launch, the program will still read 4801.15 MHz an hour later, even if the CPU is actually running at 4800 MHz.
It's just a little weird, and I was wondering if anybody knew the reason why this happens with processors.