News: Intel's Workstation Refresh Chips Brace For Threadripper 7000

I was hoping some of the larger caches seen in the 5th-gen server parts would propagate down, but it seems that isn't the case. These CPUs' viability will definitely come down to where Intel is willing to price them. I know I'd consider the lowest unlocked one if it were going to cost $700-750.
 
Yeah, the 14-core W5-2555X is somewhat of a natural choice, given that it's the first one that can reach 4.8 GHz.

To justify that, you'd either need to use AVX-512 or need the PCIe lanes. Otherwise, an i9-13900K or i9-14900K is probably of comparable performance.
I mostly just want the PCIe lanes (with proper bifurcation). I tend to sit on my PCs for as long as they work for me, so I'm willing to spend more, but not current-pricing more. That being said, I've already kicked the can down the road on upgrading multiple times, so as long as my current system works, I'm not averse to waiting.

I'll also be waiting to see what the overclocks look like, as I wouldn't touch it if I couldn't get at least 2 cores to 5.5 GHz.
 

P.Amini

I have a question (in fact, it's been on my mind for many years!): why don't high-end mainstream CPUs (for example, i9s) support quad-channel memory instead of dual-channel?
Enthusiasts spend a lot of money on the CPU and motherboard, so they wouldn't mind spending a bit more on four memory modules instead of two to gain performance.
 
I have a question (in fact, it's been on my mind for many years!): why don't high-end mainstream CPUs (for example, i9s) support quad-channel memory instead of dual-channel?
For the most part, desktop CPUs aren't memory-bandwidth limited (now that we have DDR5), though moving to quad-channel across the board would be better for consumers: more iGPU memory bandwidth, and the potential to optimize latency without losing bandwidth. It would make motherboards cost more to build and add memory-controller area to the CPU die, which is likely why it has never happened (keep in mind that even Apple's base M-series uses a 128-bit memory bus).
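
For a sense of scale, here's a rough sketch of the theoretical peak bandwidth at stake (assuming standard 64-bit DDR5 channels and DDR5-5600 as the data rate; the numbers are illustrative, and real-world throughput is lower):

Code:
# Theoretical peak DRAM bandwidth: channels x bytes per transfer x transfers per second.
# Assumes standard 64-bit (8-byte) DDR5 channels.
def peak_bandwidth_gbs(channels: int, data_rate_mts: int, bus_bytes: int = 8) -> float:
    return channels * bus_bytes * data_rate_mts / 1000  # MT/s * bytes -> GB/s

for channels in (2, 4):
    print(f"{channels}-channel DDR5-5600: {peak_bandwidth_gbs(channels, 5600):.1f} GB/s")
# -> 2-channel: 89.6 GB/s, 4-channel: 179.2 GB/s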
 

bit_user

For the most part, desktop CPUs aren't memory-bandwidth limited (now that we have DDR5)
Hmmm... the best way to test this would be to look at memory speed scaling benchmarks. Memory latency doesn't change much with respect to speed: the number of nanoseconds is generally about the same, which makes these benchmarks a pretty good test for bandwidth bottlenecks.

After a few minutes of searching, here's the best example of DDR5 scaling I found:

[Chart: DDR5 memory speed scaling on content creation performance, Puget Systems]


Source: https://www.pugetsystems.com/labs/a...-on-content-creation-performance-2023-update/

That shows a 13.5% advantage from using 45.5% faster memory, on an i9-13900K. It's a pretty clear indication that memory can be a bottleneck. I don't know if that benchmark is fully multi-threaded, but if it's indeed running 32 threads, then half of those threads will be at a competitive disadvantage, since the E-core clusters have only a single ring-bus stop for each cluster of 4 E-cores. Therefore, perhaps a better interconnect topology would show even better scaling on the same workload.
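
To put that scaling in perspective, here's a quick back-of-the-envelope check (assuming the 13.5% figure compares the DDR5-4400 and DDR5-6400 results, which matches the 45.5% speed delta):

Code:
# How much of the extra memory bandwidth showed up as performance?
# A fully bandwidth-bound workload would scale 1:1 with memory speed.
mem_ratio = 6400 / 4400   # 1.455 -> 45.5% faster memory
perf_gain = 0.135         # 13.5% benchmark advantage from the chart
efficiency = perf_gain / (mem_ratio - 1)
print(f"{mem_ratio - 1:.1%} faster memory -> {perf_gain:.1%} more performance "
      f"(~{efficiency:.0%} of the bandwidth increase realized)")
# -> roughly 30%: partially memory-bound, but not starved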

moving to quad-channel across the board would be better for consumers: more iGPU memory bandwidth, and the potential to optimize latency without losing bandwidth. It would make motherboards cost more to build and add memory-controller area to the CPU die, which is likely why it has never happened
@P.Amini, Intel used to have a HEDT tier of non-Xeon processors, but that's now gone. As of today, stepping up to a quad-channel configuration is possible with the Xeon W-2400 series, but it's too expensive to justify for the sake of memory bandwidth alone.
 
Hmmm... the best way to test this would be to look at memory speed scaling benchmarks. Memory latency doesn't change much with respect to speed: the number of nanoseconds is generally about the same, which makes these benchmarks a pretty good test for bandwidth bottlenecks.

That shows a 13.5% advantage from using 45.5% faster memory, on an i9-13900K. It's a pretty clear indication that memory can be a bottleneck. I don't know if that benchmark is fully multi-threaded, but if it's indeed running 32 threads, then half of those threads will be at a competitive disadvantage, since the E-core clusters have only a single ring-bus stop for each cluster of 4 E-cores. Therefore, perhaps a better interconnect topology would show even better scaling on the same workload.
It's a well-multithreaded benchmark, and you can really group all of the speeds up to 6000 together, as they're running JEDEC B timings. Subtimings are of course a question, but I have zero clue about those, as I'm not going to try to dive through memory specs. What I'd most like to know about that graph is why the jump from 5600 to 6000 is bigger than the jump from 4400 to 5600.

Most of the other tests are all over the place, which is unfortunately about what I've come to expect from memory on Intel's Golden Cove platforms. The only time I've seen memory consistently at the top was the memory scaling video HUB did, where they got optimal Intel timings and subtimings from Buildzoid. It seems like some of the subtimings really matter, which wasn't so much the case prior to DDR5.
 

bit_user

What I'd most like to know about that graph is why the jump from 5600 to 6000 is bigger than the jump from 4400 to 5600.
Here's the CAS time, converted from cycles to nanoseconds:

Speed (MT/s) | CAS (cycles) | CAS (ns)
4400         | 36           | 16.36
4800         | 40           | 16.67
5200         | 42           | 16.15
5600         | 46           | 16.43
6000         | 50           | 16.67
6400         | 32           | 10.00

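For anyone who wants to reproduce the conversion, here's a minimal sketch using the standard DDR first-word-latency formula (DDR transfers twice per clock, so the memory clock runs at half the data rate and ns = CL x 2000 / MT/s):

Code:
# CAS latency in nanoseconds from CL cycles and DDR data rate.
# Clock (MHz) = data rate (MT/s) / 2, so latency (ns) = CL * 2000 / data rate.
def cas_ns(data_rate_mts: int, cl_cycles: int) -> float:
    return cl_cycles * 2000 / data_rate_mts

# (data rate, CL) pairs from the table above; DDR5-6400 CL32 is the XMP outlier.
for speed, cl in [(4400, 36), (4800, 40), (5200, 42),
                  (5600, 46), (6000, 50), (6400, 32)]:
    print(f"DDR5-{speed} CL{cl}: {cas_ns(speed, cl):.2f} ns")
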
That value for 6400 is the outlier, but it arose from enabling XMP. What if they messed up and accidentally enabled XMP for 6000 as well? It seems plausible, given that 5600 is the highest memory speed specified for the Raptor Lake CPU they used. So, once they pushed above that, perhaps the BIOS dropped into some kind of auto-overclocking mode.
 