News Ryzen 7 7800X3D Smashes Core i9-14900K in Factorio Gaming Benchmark

Status
Not open for further replies.

vertuallinsanity

Prominent
May 11, 2022
34
14
535
So Computerbase has the older 8c/16t AMD part beating the new Intel part by ~64% (!) in a single gaming benchmark.

Dexterio (?) has the newer 16c/32t AMD part "marginally" losing to the new Intel part across a 25-test run with at least 10 games, but there's no info on the other benchmarks.

Both are leaks from the last 4 days. Both are written as AMD-leaning. How does anyone make sense of this?
 
A program which loves cache and takes advantage of the 3D cache of the 7800X3D beats a chip with far less cache?

Here's another spoiler: A program which takes advantage of multiple cores effectively will perform better on an EPYC 96 core than a Xeon 56 core.
 

bit_user

Titan
Ambassador
So Intel has finally made it to 10 nm? Cool. (I haven't looked at their stuff in years...)
That part surprised me. Intel calls it Intel 7, to align its naming with what TSMC and Samsung call their roughly comparable nodes.

However, the very same node currently known as Intel 7 was once called 10 nm ESF (Enhanced SuperFin). So, the author isn't wrong to say "10 nm".
 

bit_user

Titan
Ambassador
A program which loves cache and takes advantage of the 3D cache of the 7800X3D beats a chip with far less cache?
I guess the real news is that Factorio is such a program.

For me, what's a little surprising is that its sweet spot lands so squarely within the additional capacity provided by the 3D VCache. Had its working set been just a bit larger or smaller, maybe the X3D models' advantage would've evaporated.

I'd bet the game's developers weren't aware of this. Given a bit of time, they could probably optimize it to work a lot better on non-X3D models.

Here's another spoiler: A program which takes advantage of multiple cores effectively will perform better on an EPYC 96 core than a Xeon 56 core.
I don't know about you, but I'd be pretty surprised to see a game which scaled well to so many cores. That would also be newsworthy!
 
  • Like
Reactions: JamesJones44
Oct 14, 2023
2
8
15
A program which loves cache and takes advantage of the 3D cache of the 7800X3D beats a chip with far less cache?

Here's another spoiler: A program which takes advantage of multiple cores effectively will perform better on an EPYC 96 core than a Xeon 56 core.
It's not the fact that it's faster that is noteworthy, it's the margin by which. 64% is extreme.
 

waltc3

Honorable
Aug 4, 2019
454
252
11,060
CPUs running programs that can't feed the onboard processing units fast enough to max out CPU performance are simply slower, as this bench illustrates so well. And I don't even know whether Factorio actually maxes out these AMD X3D CPUs...;) Cache has been king in CPU design for as long as I can remember.
 

bit_user

Titan
Ambassador
Cache has been king in CPU design for as long as I can remember.
Not all software is equally sensitive to the X3D models' cache size increase. It actually varies quite a lot. Some are hurt more by the loss of clock speed than they're helped by the additional L3 cache. I think that makes it more of an unreliable Prince?

We saw further examples of this, when Bergamo launched. That provided some nice 3-way comparisons and you could see that certain things liked having 128 cores @ half-L3 more than having 96 cores @ triple-L3. It really helps that the underlying microarchitecture is virtually the same, across all 3.
 
  • Like
Reactions: waltc3

waltc3

Honorable
Aug 4, 2019
454
252
11,060
Not all software is equally sensitive to the X3D models' cache size increase. It actually varies quite a lot. Some are hurt more by the loss of clock speed than they're helped by the additional L3 cache.

We saw further examples of this, when Bergamo launched. That provided some nice 3-way comparisons and you could see that certain things liked having 128 cores @ half-L3 more than having 96 cores @ triple-L3. It really helps that the underlying microarchitecture is virtually the same, across all 3.
Yes, absolutely. Not all software is optimized to run fastest on the user's hardware. But that doesn't change the fact that, with software optimized to make the most of a given system's hardware, cache is king in the CPU, provided your processing units can run as fast as the cache can feed them data...;)

So both are needed, better/faster/larger cache coupled with better/faster processing units, lest anyone think it's "just" the cache differences. This is why these results are so starkly in favor of the X3D chip designs. If the processing units on the CPU could not keep up with the cache, it would scarcely matter: slower processing cores could make little use of a much faster/larger cache. And it's not just the amount of cache but its speed as well. It shows how powerful the AMD X3D processing cores are. You would get the same result running any other CPU-optimized software like Factorio.

Also, I vaguely remember reading some blurb the other day in which Intel stated it would have to emulate AMD's X3D cache approach, and had plans to do so. Was it fake news? I doubt it, but these days, who knows?...;) It makes perfect sense to me, however.
 
  • Like
Reactions: NinoPino

bit_user

Titan
Ambassador
Yes, absolutely. Not all software is optimized to run fastest on the user's hardware. But that doesn't change the fact that, with software optimized to make the most of a given system's hardware, cache is king in the CPU, provided your processing units can run as fast as the cache can feed them data...;)
You're still being too simplistic. Not all programs have the same size working set. In some cases (certain cryptocurrency algorithms come to mind), the working set is so large that no feasible amount of cache will make a difference. In other cases, the working set is so small that you can get away with a tiny amount or even no L3. It's fundamentally dependent on what you're doing.

The other thing you might have missed is that CPUs now do speculative prefetching. That relieves some of the burden from cache and helps in certain places (e.g. stream processing) where cache can't.
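If you want to see the working-set effect concretely, here's a toy simulation (purely illustrative: the sizes are made up, and it models a fully-associative LRU cache, which real L3s aren't). A bigger cache only pays off when the working set lands between the two capacities; when the working set dwarfs both, neither helps much.

```python
from collections import OrderedDict
import random

def hit_rate(cache_lines: int, working_set: int,
             accesses: int = 100_000, seed: int = 0) -> float:
    """Random accesses over `working_set` addresses through a toy LRU cache."""
    rng = random.Random(seed)
    cache = OrderedDict()  # insertion order doubles as LRU order
    hits = 0
    for _ in range(accesses):
        addr = rng.randrange(working_set)
        if addr in cache:
            hits += 1
            cache.move_to_end(addr)        # mark as most recently used
        else:
            cache[addr] = None
            if len(cache) > cache_lines:
                cache.popitem(last=False)  # evict least recently used
    return hits / accesses

# Working set fits the small cache: both cache sizes hit almost every time.
# Working set fits only the large cache: the big cache wins dramatically.
# Working set dwarfs both: neither cache helps much.
for ws in (1_000, 50_000, 10_000_000):
    small, large = hit_rate(8_000, ws), hit_rate(64_000, ws)
    print(f"working set {ws:>10,}: small cache {small:.0%}, large cache {large:.0%}")
```

The middle case is the Factorio situation: the workload happens to fit the larger cache but not the smaller one, so the gap looks enormous.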
 

waltc3

Honorable
Aug 4, 2019
454
252
11,060
I like to keep things simple...;) As I said, lots of software is not optimized to run with maximum performance on the user's hardware. Maybe that is simple, but it is also true. I just see little advantage in buying slower hardware simply because some software, limited in its hardware support or overall design, won't run very fast on it.

Your point seems to me to be that I should buy a slower/less capable CPU because my software might not be able to take advantage of a much faster one. Now, if I were considering a business situation where such limited software would be all I might run, then maybe that would make sense. But only if said slower CPU were cheaper and a better buy, I think. And then, suppose the software is revamped to take advantage of hardware the slower CPU simply doesn't have?

For me, it's best bang for the buck, always. So we'll have to agree to disagree, I suppose.
 

JamesJones44

Reputable
Jan 22, 2021
867
809
5,760
that cache is king in the CPU
Cache isn't always king, which is the point bit_user is trying to make. Depending on the program, a cache miss on a large cache can actually pay a larger penalty than on a smaller cache. It just depends on what you are doing, what the program does, and how it's being used.

For games, sure, "cache is king" is a fair argument. For something like sparse model rendering, where there are very few repeated parts, cache won't be king; clock speed will be.
 
It's not the fact that it's faster that is noteworthy, it's the margin by which. 64% is extreme.
Not really. The 14900K has 36MB of L3 cache and the 7800X3D has 96MB, so the Intel chip has 62.5% less L3 cache, and for a program that's very cache-dependent it performs basically that much worse. Likewise, programs that aren't as cache-dependent will do better on a CPU that can compute faster thanks to higher IPC and/or clock speeds, as every reputable review site and even AMD will tell you. It's also why Intel makes some Xeon processors with HBM, for programs that can take advantage of it.
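Since the gap between those two cache figures gets quoted both ways, here's the arithmetic spelled out (a trivial sketch, not from the thread itself):

```python
# The two L3 figures cited in this thread.
intel_l3 = 36  # MB, Core i9-14900K
amd_l3 = 96    # MB, Ryzen 7 7800X3D

# The 14900K has 62.5% *less* L3 than the 7800X3D...
deficit = (amd_l3 - intel_l3) / amd_l3
print(f"Intel deficit: {deficit:.1%}")     # 62.5%

# ...which is the same gap as the 7800X3D having ~167% *more* (2.67x as much).
surplus = (amd_l3 - intel_l3) / intel_l3
print(f"AMD surplus: {surplus:.1%}")       # 166.7%
print(f"ratio: {amd_l3 / intel_l3:.2f}x")  # 2.67x
```

Either way, the cache ratio and the 64% benchmark margin aren't the same kind of number, so their closeness is just numerology.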
 

bit_user

Titan
Ambassador
Your point seems to me to be that I should buy a slower/less capable CPU because my software might not be able to take advantage of a much faster one.
I have no idea where you got that idea. The only point I tried to make is that the benefit of having more cache is workload-dependent. Again, I'd direct you to those Phoronix benchmarks I previously linked, or better yet these:

If you take the time to look at them one-by-one, you'll see a lot of variation in how different workloads respond to the increase in L3 cache.

I did make another key point, which is that you can often do things to optimize data access patterns to be more cache-friendly. Often, but not always. Sometimes, as in the case of the old Ethereum proof-of-work algorithm, it's designed to be cache-unfriendly. Or, you just have a naturally-occurring situation involving random access with little spatial or temporal coherence.
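As a toy example of the kind of access-pattern optimization I mean (entirely hypothetical, nothing from Factorio): summing a matrix row-by-row walks memory sequentially, while summing it column-by-column strides across it. Both give the same answer, but in a compiled language the sequential version is typically much faster on large matrices, because it's friendly to caches and hardware prefetchers.

```python
def sum_row_major(matrix):
    """Traverse consecutive elements: sequential, cache-friendly access."""
    total = 0.0
    for row in matrix:
        for x in row:
            total += x
    return total

def sum_col_major(matrix):
    """Jump a full row-length between accesses: strided, cache-unfriendly."""
    total = 0.0
    rows, cols = len(matrix), len(matrix[0])
    for j in range(cols):
        for i in range(rows):
            total += matrix[i][j]
    return total

m = [[float(i * 100 + j) for j in range(100)] for i in range(100)]
assert sum_row_major(m) == sum_col_major(m)  # same result, different pattern
```

Python's interpreter overhead blurs the timing difference; the point is only the shape of the transformation, swapping the loop order so that memory is touched in the order it's laid out.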
 
Last edited:
IIRC, wasn't there a leak about Intel doing their own cache chips in a future gen, 15th or so?

AMD's shown that chiplets & cache are great, and Intel is already adopting chiplets.


But this type of result isn't anything new or unheard of... remember the 5800X3D? It beat both Intel's & AMD's flagships in stuff that used the cache... almost like that's the point of these specialized chips.
 

bit_user

Titan
Ambassador
IIRC, wasn't there a leak about Intel doing their own cache chips in a future gen, 15th or so?
There was some speculation about system-level cache (L4), in Meteor Lake. I still have yet to go through all the launch coverage, but I guess it didn't happen?

But this type of result isn't anything new or unheard of... remember the 5800X3D? It beat both Intel's & AMD's flagships in stuff that used the cache... almost like that's the point of these specialized chips.
Indeed. A close look at the data in this article will reveal that the 5800X3D even beat AMD's own 7700X (both 8-core CPUs!).
 
Last edited:

bit_user

Titan
Ambassador
Depending on the program, a cache miss on a large cache can actually pay a larger penalty than on a smaller cache.
There's some truth to that. However, the latency penalty from increasing cache size applies even to cache hits.

"Zen 1 to Zen 2 doubled the size of each slice and only increased the latency cost by about 5 cycles. Similarly, Zen 3 to Zen 3 with V-Cache tripled the slice size but only added 3 to 4 cycles to the latency where as the move from Zen 2 to Zen 3 which unified the 2 16MB L3s on a CCD into a single 32MB L3 added 7 to 8 cycles to the L3 latency; now how much of that extra latency was due to AMD preplanning for V-Cache, we simply can not know."

Source: https://chipsandcheese.com/2022/01/21/deep-diving-zen-3-v-cache/
 

bit_user

Titan
Ambassador
The 14900K has 36MB of L3 cache and the 7800X3D has 96MB, so the Intel chip has 62.5% less L3 cache, and for a program that's very cache-dependent it performs basically that much worse.
That's a coincidence, nothing more. We could construct a scenario where the benefit of the V-Cache would be even greater.

It's also why Intel makes some Xeon processors with HBM for programs which take advantage of it.
HBM is sort of the opposite. You get higher best-case latency, but much more bandwidth.

HBM and large caches aren't an either-or scenario, although if you're starting with an HBM-equipped CPU, enlarging the L3 cache won't provide as much benefit, on average, as it would on a CPU with less memory bandwidth. However, because they tend to benefit different scenarios, they could be somewhat complementary.

I'm sure AMD (and probably Intel) will have CPUs with both, in the near future.
 
Allyn Malventano did a video (I think it was in June) with Wendell for Level 1 Techs and savaged the use of Factorio benchmarks. The benchmark isn't representative of actual gameplay (let alone anything else), and everything can basically be stored in cache (I think he said it required around 80MB).
 

ilukey77

Reputable
Jan 30, 2021
833
339
5,290
Not all software is equally sensitive to the X3D models' cache size increase. It actually varies quite a lot. Some are hurt more by the loss of clock speed than they're helped by the additional L3 cache. I think that makes it more of an unreliable Prince?

We saw further examples of this, when Bergamo launched. That provided some nice 3-way comparisons and you could see that certain things liked having 128 cores @ half-L3 more than having 96 cores @ triple-L3. It really helps that the underlying microarchitecture is virtually the same, across all 3.
That's why, while I've got both the 5800X3D and 7800X3D (and they are both great CPUs), they are in essence a bit overhyped!!

Sure, they are great in cache-sensitive games, but outside of those they seem to lose quickly to the raw-powered, overheated 13900K.

Fingers crossed, IF the 8950X3D can bring its A game, with more cores and the 3D V-Cache sorted properly, I think Intel will take a few gens to catch up.

That's been the fail with the 7950X3D (and a reason I decided against buying it): it doesn't do much particularly great. The 7800X3D is the better, cheaper CPU for games, and the 13900K is the better overall CPU for both production and games!!
 
  • Like
Reactions: bit_user

watzupken

Reputable
Mar 16, 2020
1,181
663
6,070
A program which loves cache and takes advantage of the 3D cache of the 7800X3D beats a chip with far less cache?

Here's another spoiler: A program which takes advantage of multiple cores effectively will perform better on an EPYC 96 core than a Xeon 56 core.
But isn't the whole point to have software and hardware that augment each other? What is the point of spamming cores when some software doesn't fully utilize them? So it's time someone thought outside the box, rather than just focusing on increasing cores and/or clock speed.
 
  • Like
Reactions: bit_user