1) 5600MT/s with a CAS latency of 28, which is 10.0 nanoseconds, or 2) 7800MT/s with a CAS latency of 36, which is 9.23 nanoseconds.
I'm not a memory expert, but I'd argue the latency (measured in nanoseconds) does not change hugely between 5600MT/s, where it's 10.0ns, and 7800MT/s, where it's 9.23ns. If you run the Memory Latency test in Aida64 at various SPD frequencies, you see roughly similar results.
As an electronics engineer, I'm using the Wiki definition of latency as the
time between setting up a request for data and that data being made available on the bus,
not the number of clock cycles in the CL setting.
To quote from Wiki:
Latency should not be confused with memory bandwidth, which measures the throughput of memory.
https://en.wikipedia.org/wiki/Memory_latency
CL and many of the other timings held in SPD are measured in clock cycles, not nanoseconds. There is a direct equivalence between the true DDR memory clock frequency (MHz) combined with the number of clock cycles specified by CL(CAS), and the latency in nanoseconds: latency (ns) = CL divided by clock (MHz), multiplied by 1000.
Latency has an absolute value measured in units of time, in this case nanoseconds.
CL(CAS) is merely a number of clock cycles, but without knowing the clock frequency you cannot guess the latency from just the CL(CAS) value. Of course we normally know the clock frequency as well as CL, but without both values, we don't have the true latency in nanoseconds.
You need more clock cycles at a higher frequency to maintain the same latency, but fewer clock cycles at a lower frequency (hence the automatic changes in CL and other timings).
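To illustrate that point, here's a quick Python sketch (my own arithmetic, not read from any SPD table) of how many CAS cycles are needed to hold a ~10ns latency as the transfer rate rises:

```python
import math

def cycles_for_latency(target_ns: float, mts: int) -> int:
    """CAS cycles needed to reach target_ns at a given MT/s rating."""
    clock_mhz = mts / 2  # DDR transfers twice per clock: true clock is half the MT/s
    return math.ceil(target_ns * clock_mhz / 1000)

for mts in (4800, 5600, 6400, 7200, 7800):
    print(f"{mts} MT/s needs CL {cycles_for_latency(10.0, mts)} for ~10 ns")
```

The CL values climb in step with the transfer rate (CL 24 at 4800MT/s up to CL 39 at 7800MT/s), which is exactly the automatic adjustment the SPD tables perform.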
CL(CAS) is set in the BIOS by the SPD to 28 clock cycles at a true memory clock of 2800MHz (half of 5600MT/s Double Data Rate) and 10.0ns latency.
Similarly, CL(CAS) is set in the BIOS by the SPD to 36 clock cycles at a true memory clock of 3900MHz (half of 7800MT/s Double Data Rate) and 9.23ns latency.
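Those two figures are easy to verify yourself. A minimal Python sketch of the conversion:

```python
def cas_latency_ns(cl: int, mts: int) -> float:
    """True CAS latency in nanoseconds from the CL cycle count and MT/s rating."""
    clock_mhz = mts / 2            # DDR: two transfers per clock
    return cl / clock_mhz * 1000   # cycles / MHz -> nanoseconds

print(round(cas_latency_ns(28, 5600), 2))  # 10.0
print(round(cas_latency_ns(36, 7800), 2))  # 9.23
```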
Note: The latency doesn't change a great deal. For your DIMMs, I'd expect latency to sit around 9 to 10ns. Try measuring latency yourself at different XMP settings with Aida64.
What does change appreciably when you increase the clock rate is the
overall memory bandwidth, and this is far more important than worrying about latency.
To quote from Wiki:
Memory bandwidth is usually expressed in units of bytes/second
https://en.wikipedia.org/wiki/Memory_bandwidth
Increased bandwidth is usually beneficial. Whether or not your apps make good use of the extra bandwidth is a different matter. The graph below shows the effect of increasing memory speed (and hence bandwidth) on a Ryzen 7950X and an Intel i9-13900K.
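For a rough feel of the theoretical peak numbers behind such comparisons, here's my own back-of-envelope sketch (it assumes a standard 64-bit, i.e. 8-byte, DDR5 channel and ignores real-world efficiency losses):

```python
def peak_bandwidth_gbs(mts: int, channels: int = 2) -> float:
    """Theoretical peak bandwidth in GB/s: MT/s x 8 bytes per transfer x channels."""
    return mts * 8 * channels / 1000

print(peak_bandwidth_gbs(5600))  # 89.6 GB/s dual-channel
print(peak_bandwidth_gbs(7800))  # 124.8 GB/s dual-channel
```

So stepping from 5600MT/s to 7800MT/s raises the theoretical ceiling by nearly 40%, while the latency barely moves.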
It sounds like if I did run it at 5600, the SPD would force a lower latency number in synch with the relaxed frequency.
True. When you change the XMP memory clock rate in the BIOS, it looks up the tables in the SPD and chooses what it thinks are the most appropriate settings for CL and all the other Primary, Secondary and Tertiary timings, of which there are many.
As you increase the XMP speed, the BIOS will select larger values of CL, etc. The main disadvantages of running high XMP frequencies are potential system instability, more heat and the possible long-term effects of increasing the voltage on the RAM and the CPU's Integrated Memory Controller. In addition, electromigration effects increase when you overclock the CPU or RAM, which can lead to deterioration in critical components. It might take years before you notice any negative effects.
https://en.wikipedia.org/wiki/Electromigration
I find Aida64 handy for viewing the SPD tables, or you can use CPUID's CPU-Z.
Best of luck with your overclocking adventures.