News: Increased Linux kernel timer frequency delivers big boost in AI workloads

The article said:
Elsewhere, the differences might be considered within the margin of error of system benchmarking.
No. Margin of error should be something like < 1%, and several other benchmarks had differences well above that:
  • nginx had a case that ran 8.6% faster at the 1 kHz tick rate.
  • Darktable had a case where 250 Hz was 7.5% faster.
  • A few others in the 2-3% range.

I've seen no compelling explanation for performance changes of such a magnitude. It's generally accepted that a faster tick rate just adds overhead, which is why the default is now 250 Hz and used to be even lower.
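
If you want to see the overhead side for yourself, you can watch how many local timer (tick) interrupts each core actually takes per second. A rough sketch in Python - it assumes the x86-style "LOC:" row in /proc/interrupts, and the function name is mine:

    import time

    def local_timer_counts():
        # Parse the per-CPU "LOC" (local APIC timer) counts from /proc/interrupts.
        with open("/proc/interrupts") as f:
            ncpus = len(f.readline().split())  # header row: "CPU0 CPU1 ..."
            for line in f:
                parts = line.split()
                if parts and parts[0] == "LOC:":
                    return [int(x) for x in parts[1:ncpus + 1]]
        raise RuntimeError("no LOC: row (non-x86 kernels label it differently)")

    before = local_timer_counts()
    time.sleep(1.0)
    after = local_timer_counts()

    # Busy cores should sit near CONFIG_HZ interrupts per second; on a
    # tickless-idle kernel, idle cores should show far fewer.
    for cpu, (b, a) in enumerate(zip(before, after)):
        print(f"CPU{cpu}: {a - b} tick interrupts/s")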

The article said:
Let's recap the patch statement shared by Google engineer Qais Yousef
You missed a key part of the explanation, which was right in the first paragraph:

"The larger TICK means we have to be overly aggressive in going into higher frequencies if we want to ensure perf is not impacted. But if the task didn't consume all of its slice, we lost an opportunity to use a lower frequency and save power. Lower TICK value allows us to be smarter about our resource allocation to balance perf and power."

Source: https://www.phoronix.com/news/Linux-2025-Proposal-1000Hz
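
To put numbers on that trade-off: the tick period is the granularity at which the scheduler observes how much of its slice a task actually consumed, so it also bounds how quickly a cpufreq decision can be corrected. A quick sketch over the kernel's standard HZ options:

    # Tick period = 1 / HZ. Slice consumption is sampled at this
    # granularity, so it is also the coarseness of the perf/power decision.
    for hz in (100, 250, 300, 1000):
        print(f"HZ={hz:4d}: tick every {1000 / hz:5.2f} ms")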
 
To my knowledge, the current default configuration is for the kernel to be tickless on all but the boot CPU core - meaning this setting should have very little impact on power consumption, as only one core would be affected, and it's the busiest one anyway.
Still, changing that default to 1000 Hz would mean the system-wide interrupt fires once every 3 000 000 cycles on a 3 GHz CPU - the same cycle interval as a Pentium III 750 running a 250 Hz tick.
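
The arithmetic behind that comparison, as a quick check (3 GHz is just an assumed modern clock speed):

    # Cycles the CPU gets between two tick interrupts: clock / HZ.
    def cycles_per_tick(cpu_hz: int, tick_hz: int) -> int:
        return cpu_hz // tick_hz

    print(cycles_per_tick(3_000_000_000, 1000))  # modern 3 GHz core at 1000 Hz -> 3000000
    print(cycles_per_tick(750_000_000, 250))     # Pentium III 750 at 250 Hz    -> 3000000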