"According to leaked specs and other leaked benchmarks, the i9-14900K seemed similar to the i9-13900KS. The benchmarks quoted in this article show a margin of victory over the latter that looks suspiciously large, even for a KF variant."

In other news, water has been found to be wet.
"Instruction footprint simply means how well it fits into caches. It's not rocket science."

Source? What are you referring to by "instruction footprint"?
"It's like buying a ton of manure and saying "it doesn't smell that bad actually!"According to leaked specs and other leaked benchmarks, the i9-14900K seemed similar to the i9-13900KS. The benchmarks quoted in this article show a margin of victory over the latter that looks suspiciously large, even for a KF variant.
So, it's more like putting your hand in a bucket of water and finding that it's hot, when you expected it merely to be warm.
"I would like to know which one you meant."

You do understand, though, that Intel wouldn't be using actual X3D, right?
They have their own technologies, like CPU/GPU Max with GBs of cache... I didn't hear any mention of those components losing any clocks.
Also, Intel has had clock offsets for AVX before, so it's not unprecedented for them to reduce clocks.
Finally, Intel's claim is that the accelerators and things like the thread scheduler make working on the CPU smarter, instead of having to brute-force everything.
The reason they won't do any of that for desktop anytime soon is that they don't need to.
The only thing the competition has is a server CPU that does reasonably well at desktop work, so they can only sell it on its efficiency at brute force.
"You claim that there are cases where it won't even fit in the 3D V-Cache, though? 96 MiB per CCD? Obviously, not all of it can be used for code, but that's a pretty stunning claim, IMO. Do you have any evidence of this?"

Instruction footprint simply means how well the code fits into caches. It's not rocket science.
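For what it's worth, one concrete way to check an "instruction footprint" claim on Linux is to count L1 instruction-cache misses around the workload. Below is a minimal sketch of my own (not from either poster) using perf_event_open(2); run_workload() is just a placeholder for whatever code you actually care about.

/* Sketch: count L1 instruction-cache read misses around a region of code.
 * run_workload() is a stand-in for the real program under test. */
#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/syscall.h>
#include <linux/perf_event.h>

static volatile long sink;

static void run_workload(void)          /* placeholder workload */
{
    for (long i = 0; i < 50000000; i++)
        sink += i;
}

int main(void)
{
    struct perf_event_attr attr;
    memset(&attr, 0, sizeof(attr));
    attr.type = PERF_TYPE_HW_CACHE;
    attr.size = sizeof(attr);
    attr.config = PERF_COUNT_HW_CACHE_L1I
                | (PERF_COUNT_HW_CACHE_OP_READ << 8)
                | (PERF_COUNT_HW_CACHE_RESULT_MISS << 16);
    attr.disabled = 1;
    attr.exclude_kernel = 1;

    int fd = syscall(SYS_perf_event_open, &attr, 0, -1, -1, 0);
    if (fd < 0) { perror("perf_event_open"); return 1; }

    ioctl(fd, PERF_EVENT_IOC_RESET, 0);
    ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);
    run_workload();
    ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);

    long long misses = 0;
    read(fd, &misses, sizeof(misses));
    printf("L1i read misses: %lld\n", misses);
    close(fd);
    return 0;
}

A low, flat miss count while the workload runs is what "fits in cache" looks like in practice; a count that scales with the work done means the code (or at least its hot paths) doesn't.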
"I'm saying it might not. Not that it won't!"

You claim that there are cases where it won't even fit in the 3D V-Cache, though? 96 MiB per CCD? Obviously, not all of it can be used for code, but that's a pretty stunning claim, IMO. Do you have any evidence of this?
The only way that seems particularly likely is if you're doing heavy runtime code-generation - maybe compiling a large graph or something like that. Back in the '90s, people would even do things like compiling a bitmapped image into a series of string instructions, just to blit it to the screen a little faster. Unless you're doing crazy stuff like that, CPU cores typically spend most of their time executing just a few hotspots.
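For anyone who hasn't seen that trick, here's a rough sketch of the idea with made-up names. The generic routine branches on every pixel; the "compiled" one is straight-line code specialized to a single sprite, so the per-pixel tests disappear but the code size grows with the image (real versions emitted unrolled MOV/string instructions at runtime rather than C).

/* Illustration of a "compiled sprite": generic loop vs. code specialized
 * to one particular image. Purely a toy, not anyone's real blitter. */
#include <stdint.h>

#define W 4
#define H 2

/* Generic blit: tiny code footprint, a transparency test on every pixel. */
void blit_generic(uint8_t *dst, int pitch, const uint8_t *src)
{
    for (int y = 0; y < H; y++)
        for (int x = 0; x < W; x++)
            if (src[y * W + x] != 0)          /* 0 = transparent */
                dst[y * pitch + x] = src[y * W + x];
}

/* "Compiled" blit for ONE sprite: the opaque runs were baked in when the
 * code was generated, so there are no branches - just stores. */
void blit_compiled_sprite(uint8_t *dst, int pitch)
{
    dst[0 * pitch + 1] = 7;  dst[0 * pitch + 2] = 7;   /* row 0 */
    dst[1 * pitch + 0] = 9;  dst[1 * pitch + 3] = 9;   /* row 1 */
}

int main(void)
{
    uint8_t src[H * W] = { 0, 7, 7, 0,
                           9, 0, 0, 9 };
    uint8_t dst[H * 16] = { 0 };
    blit_generic(dst, 16, src);
    blit_compiled_sprite(dst, 16);
    return 0;
}

Multiply that by hundreds of sprites and you can see how this kind of code generation inflates the instruction footprint in a way normal programs don't.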
Anyway, with modern CPUs doing so much speculative execution, it's not nearly as big of a win as it used to be, even to unroll your loops. That's a common way code would tend to bloat.
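To make the trade-off concrete (a toy of my own, nothing to do with Geekbench): the unrolled version below amortizes the loop-counter and branch overhead across four elements, but it takes several times the instructions, which is exactly the kind of bloat being referred to.

/* Rolled vs. manually unrolled loop - same result, different code size. */
#include <stddef.h>

void scale_rolled(float *a, size_t n, float k)
{
    for (size_t i = 0; i < n; i++)
        a[i] *= k;
}

void scale_unrolled4(float *a, size_t n, float k)
{
    size_t i = 0;
    for (; i + 4 <= n; i += 4) {      /* four elements per iteration */
        a[i]     *= k;
        a[i + 1] *= k;
        a[i + 2] *= k;
        a[i + 3] *= k;
    }
    for (; i < n; i++)                /* leftover elements */
        a[i] *= k;
}

int main(void)
{
    float a[10] = { 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 };
    scale_rolled(a, 10, 2.0f);
    scale_unrolled4(a, 10, 0.5f);
    return (int)a[0];                 /* keep the result live */
}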
Finally, your response to the question about Geekbench benefiting from 3D V-Cache focused only on code, but what about data? FWIW, my impression is that the cache hierarchy is generally used much more heavily for caching data than code.
My guess is that the working set for Geekbench is simply too small to gain much benefit from the additional cache, so the reduction in clock speed ends up having a greater impact on its performance.
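One crude way to sanity-check that guess is to sweep buffer sizes and watch the time per element jump as the working set spills out of each cache level (L2, L3, and the stacked cache). Here's a rough sketch of my own; it uses a sequential stride, so the hardware prefetcher will hide part of the effect - a random pointer chase would be more rigorous.

/* Sweep working-set sizes and report time per cache line touched. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void)
{
    for (size_t kb = 64; kb <= 256 * 1024; kb *= 2) {   /* 64 KiB .. 256 MiB */
        size_t n = kb * 1024 / sizeof(long);
        long *buf = malloc(n * sizeof(long));
        if (!buf) break;
        for (size_t i = 0; i < n; i++) buf[i] = i;

        struct timespec t0, t1;
        volatile long sum = 0;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (int pass = 0; pass < 16; pass++)
            for (size_t i = 0; i < n; i += 8)            /* ~one cache line per step */
                sum += buf[i];
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
        printf("%8zu KiB: %.2f ns per line touched\n", kb, ns / (16.0 * (n / 8)));
        free(buf);
    }
    return 0;
}

If a workload's curve is already flat well below the size of the extra cache, more cache buys it nothing, and the lower clocks of the V-Cache part are all that shows up in the score.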