News 7-Zip Benchmark: Intel Core i9-13900K 60% Faster Compared To 12900K

AgentBirdnest

Respectable
Jun 8, 2022
Honest question: What real-world usage does 7-zip translate to?
Mostly just using 7-zip to compress/decompress files. :p
If I remember right, 7-zip was the 12900K's biggest weak spot among all benchmarks I ever saw. Like really surprisingly super-weak, ranking below other Alder Lake SKUs. So, the fact that they fixed/improved whatever problem the 12900K had with 7-zip doesn't say much to me.

(Someone correct me if I'm wrong, my memory is spotty.)
 
  • Like
Reactions: artk2219
Honest question: What real-world usage does 7-zip translate to?
Mostly just using 7-zip to compress/decompress files. :p
If I remember right, 7-zip was the 12900K's biggest weak spot among all benchmarks I ever saw. Like really surprisingly super-weak, ranking below other Alder Lake SKUs. So, the fact that they fixed/improved whatever problem the 12900K had with 7-zip doesn't say much to me.

(Someone correct me if I'm wrong, my memory is spotty.)
None at all. 7-Zip runs an internal benchmark workload, so no real files, no I/O, no lanes, no nothing; just number-crunching, which can maybe only be compared to distributed computing.
It also has a default dictionary (working set) size of 32 MB, so anything with less cache than that is at a huge disadvantage in that bench, which is why older Intel CPUs score lower than Ryzen; now that Intel has increased the cache, it gets higher numbers.
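If you want to poke at this yourself, 7-Zip's internal benchmark can be run from the command line with `7z b`. Below is a rough Python illustration of the dictionary-footprint point, using the standard library's lzma module (the same LZMA family 7-Zip uses): compressing one buffer with a 1 MB vs. a 32 MB dictionary. It's a sketch of the working-set effect, not the actual 7-Zip benchmark kernel, and the exact timings will vary by machine.

```python
import lzma
import os
import time

# Semi-compressible test buffer: half random, half repetitive.
data = os.urandom(8 << 20) + b"A" * (8 << 20)

for dict_size in (1 << 20, 32 << 20):  # 1 MB vs. the benchmark's default 32 MB
    filters = [{"id": lzma.FILTER_LZMA2, "preset": 6, "dict_size": dict_size}]
    t0 = time.perf_counter()
    out = lzma.compress(data, format=lzma.FORMAT_XZ, filters=filters)
    print(f"dict {dict_size >> 20:>2} MB: {time.perf_counter() - t0:.2f} s, "
          f"{len(out):,} bytes out")
```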
 
  • Like
Reactions: RodroX
Honest question: What real-world usage does 7-zip translate to?
Most games package files with some form of compression and encryption/hashing. Also, Windows and Linux implement compression mechanisms (not necessarily 7-Zip's) that depend on INT performance and memory bandwidth for files in the file system. And several other, albeit smaller, uses here and there, mostly related to moving files to and from RAM/VRAM.

That being said, I don't know how representative 7-Zip is of those, but I'd imagine it's a tad more valid than testing WinRAR or WinZip, haha.

Regards.
 
  • Like
Reactions: RodroX

AgentBirdnest

Respectable
Jun 8, 2022
None at all. 7-Zip runs an internal benchmark workload, so no real files, no I/O, no lanes, no nothing; just number-crunching, which can maybe only be compared to distributed computing.
It also has a default dictionary (working set) size of 32 MB, so anything with less cache than that is at a huge disadvantage in that bench, which is why older Intel CPUs score lower than Ryzen; now that Intel has increased the cache, it gets higher numbers.
Oh, I didn't know that about 7-zip benchmarks... well... that makes it even more useless to me, which I thought was impossible.
 

salgado18

Distinguished
Feb 12, 2007
Honest question: What real-world usage does 7-zip translate to?
The system I work on saves the received data, compresses it, and stores it in the cloud. When some data is needed, the files have to be decompressed to be useful. While the files are not really large, this is a use case where decompression speed can affect a system's responsiveness and, with many users, can reduce the overall usage of hardware resources.

Also, I believe there's an effect on compressed communication on the web (the gzip header in HTTP requests). It's also very small, but it can be significant with many requests and heavy content, and it may also affect battery life.

Funny thing is, until you asked that question and I decided to answer, I never thought it was useful :p
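A minimal sketch of that compress-on-store / decompress-on-read pattern, assuming Python's standard lzma module (the same LZMA family as 7-Zip); store() and fetch() here are hypothetical stand-ins for a real cloud-storage client, not any actual API:

```python
import lzma
import time

_blobs = {}  # in-memory stand-in for the cloud bucket


def store(key: str, payload: bytes) -> None:
    """Compress on ingest, then upload (here: keep in a dict)."""
    _blobs[key] = lzma.compress(payload, preset=6)


def fetch(key: str) -> bytes:
    """Download (here: dict lookup), then decompress on demand."""
    t0 = time.perf_counter()
    payload = lzma.decompress(_blobs[key])
    # Decompression is pure CPU work; with many concurrent users this
    # is exactly where per-core throughput shows up as responsiveness.
    print(f"decompressed {len(payload):,} bytes in "
          f"{time.perf_counter() - t0:.3f} s")
    return payload


store("report-2022-06", b"received data " * 100_000)
assert fetch("report-2022-06").startswith(b"received data")
```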
 
The system I work on saves the received data, compresses it, and stores it in the cloud. When some data is needed, the files have to be decompressed to be useful. While the files are not really large, this is a use case where decompression speed can affect a system's responsiveness and, with many users, can reduce the overall usage of hardware resources.

Also, I believe there's an effect on compressed communication on the web (the gzip header in HTTP requests). It's also very small, but it can be significant with many requests and heavy content, and it may also affect battery life.

Funny thing is, until you asked that question and I decided to answer, I never thought it was useful :p
Systems that need it that much will have a dedicated hardware solution just for that usage; storage cards usually do come with on-the-fly de/compression in hardware.
Nobody uses the OS plus a 3rd-party tool for this; if they don't use specialized hardware, they'll use an OS built-in feature, like Windows' compact.exe or, these days, the Win32 compression API:
https://github.com/IridiumIO/CompactGUI
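For reference, a small Python sketch of that built-in Windows route, driving compact.exe (the same mechanism CompactGUI wraps); the file path below is a made-up example, and this is Windows-only:

```python
import subprocess

# Hypothetical target file; compact.exe ships with Windows.
target = r"C:\Games\Example\huge_asset.pak"

# /C = compress, /EXE:LZX = the strongest of the executable-compression
# algorithms (XPRESS4K/8K/16K and LZX are the supported choices).
subprocess.run(["compact.exe", "/C", "/EXE:LZX", target], check=True)

# Running compact.exe with just the path reports the compression ratio.
subprocess.run(["compact.exe", target], check=True)
```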
 

PiranhaTech

Commendable
Mar 20, 2021
132
85
1,660
Honest question: What real-world usage does 7-zip translate to?
It's an indication of multi-core performance and perhaps CPU cache, but you should wait until more independent benchmarks come out. It's a bad idea to go full Apple or Cell processor based on a single benchmark.
 
I have corporate code that incorporates 7-Zip compress/decompress to save network bandwidth and storage for very large (multi-TB) databases. This is very relevant to my interests.
Similar to Salgado (and TerryLaze's response), though: are you using consumer-tier CPUs to do this task? Obviously, with the "pro-sumer" tier of CPUs being swallowed by the "consumer" tier, it blurs the line a bit. To be fair to TH, I like that they [Aaron] pointed out that the improvement is largely a by-product of the thread/core count difference between the two CPUs being compared. So obviously that's even less useful for concluding how much better the Raptor Lake architecture actually is. Sure, if your system is constantly compressing/decompressing data the differences are important, but how much beyond "moar coarz = better"?
 

spongiemaster

Admirable
Dec 12, 2019
To be fair to TH, I like that they [Aaron] pointed out that the improvement is largely a by-product of the thread/core count difference between the two CPUs being compared. So obviously that's even less useful for concluding how much better the Raptor Lake architecture actually is. Sure, if your system is constantly compressing/decompressing data the differences are important, but how much beyond "moar coarz = better"?
There's no way that doubling the E-cores from 8 to 16, or a 33% increase in thread count from 24 to 32, is the primary reason for a 60-70% increase in performance in this benchmark.
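A back-of-the-envelope sanity check of that claim; the Hyper-Threading uplift and the E-core-to-P-core throughput ratio below are assumed round numbers, not measurements:

```python
# How much multithreaded throughput should 8P+16E gain over 8P+8E from
# core count alone? Vary the assumed weights to taste.
def throughput(p_cores: int, e_cores: int,
               ht_gain: float = 0.25, e_weight: float = 0.55) -> float:
    # Each P-core with Hyper-Threading counts as (1 + ht_gain) units;
    # each E-core counts as e_weight of a P-core.
    return p_cores * (1 + ht_gain) + e_cores * e_weight

i12900k = throughput(8, 8)    # 8P + 8E, 24 threads
i13900k = throughput(8, 16)   # 8P + 16E, 32 threads
print(f"core-count-only gain: {i13900k / i12900k - 1:.0%}")  # ~31%
```

With those assumptions, the extra E-cores alone buy roughly +31%, so clocks, cache, and IPC would have to supply the rest of a 60-70% gain.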
 
  • Like
Reactions: bit_user

DougMcC

Commendable
Sep 16, 2021
Similar to Salgado (and TerryLaze's response), though: are you using consumer-tier CPUs to do this task? Obviously, with the "pro-sumer" tier of CPUs being swallowed by the "consumer" tier, it blurs the line a bit. To be fair to TH, I like that they [Aaron] pointed out that the improvement is largely a by-product of the thread/core count difference between the two CPUs being compared. So obviously that's even less useful for concluding how much better the Raptor Lake architecture actually is. Sure, if your system is constantly compressing/decompressing data the differences are important, but how much beyond "moar coarz = better"?

An end user often takes a subset of the resulting files onto a local laptop to work with. So if performance on end-user hardware like a Lenovo laptop is n% faster, that has a direct impact on developer productivity. Presumably this hardware will also make it into AWS at some point, so we may want to think about prioritizing a shift to new instances when they become available.
 

neonred

Honorable
Nov 12, 2016
8 performance cores... pffff, weak.
Until Intel offers 16 performance cores, AMD is the leader.
Intel's thinking is half-az. Those efficiency cores would mean something if they could match AMD's performance-core count. Until they do, they will be recognized as crap cores. Of all the Intel benchmarks you see, none are sustained performance over hours, which means they hide the crap cores' true performance and overall give Intel a higher score than it deserves. This is not so much a problem for most games, but it does matter if you are a power user constantly multi-tasking. They should call those performance cores "levellers", as in they prevent huge drops in performance, but they don't actually perform that well. Something tells me core-count increases will be stagnant as they bump other things like clock speeds and FSB to match memory speeds, so expect consumer core counts to remain at 16 until 2024-2026. Which for me means I'll be on a CPU buying strike until 24 performance cores are reached. Also, a 60% faster decompression rate is practically irrelevant, as it already decompresses at gigabyte rates.
 

escksu

Reputable
BANNED
Aug 8, 2019
As the author mentioned, the 13900K has 8P and 16E cores vs. 8P and 8E for the 12900K. So quite a lot of that increase comes from the extra cores.
 

escksu

Reputable
BANNED
Aug 8, 2019
8 performance cores... pffff, weak.
Until Intel offers 16 performance cores, AMD is the leader.
Intel's thinking is half-az. Those efficiency cores would mean something if they could match AMD's performance-core count. Until they do, they will be recognized as crap cores. Of all the Intel benchmarks you see, none are sustained performance over hours, which means they hide the crap cores' true performance and overall give Intel a higher score than it deserves. This is not so much a problem for most games, but it does matter if you are a power user constantly multi-tasking. They should call those performance cores "levellers", as in they prevent huge drops in performance, but they don't actually perform that well. Something tells me core-count increases will be stagnant as they bump other things like clock speeds and FSB to match memory speeds, so expect consumer core counts to remain at 16 until 2024-2026. Which for me means I'll be on a CPU buying strike until 24 performance cores are reached. Also, a 60% faster decompression rate is practically irrelevant, as it already decompresses at gigabyte rates.

99.99% of users do not need anything more than 8 cores.

If you belong to the remaining 0.01%, then you should purchase a workstation CPU instead.
 
  • Like
Reactions: KyaraM
8 performance cores... pffff, weak.
Until Intel offers 16 performance cores, AMD is the leader.
Intel's thinking is half-az. Those efficiency cores would mean something if they could match AMD's performance-core count. Until they do, they will be recognized as crap cores. Of all the Intel benchmarks you see, none are sustained performance over hours, which means they hide the crap cores' true performance and overall give Intel a higher score than it deserves. This is not so much a problem for most games, but it does matter if you are a power user constantly multi-tasking. They should call those performance cores "levellers", as in they prevent huge drops in performance, but they don't actually perform that well. Something tells me core-count increases will be stagnant as they bump other things like clock speeds and FSB to match memory speeds, so expect consumer core counts to remain at 16 until 2024-2026. Which for me means I'll be on a CPU buying strike until 24 performance cores are reached. Also, a 60% faster decompression rate is practically irrelevant, as it already decompresses at gigabyte rates.
If you are actually multi-tasking, some of your tasks depend on a single thread, and then 16 full cores are horrible: if they are all in use, they run 25% slower than at single-thread speed because you just can't supply enough power to the CPU, so those tasks will run 25% slower than they should.
With the 12900K, these thread-sensitive tasks have up to 8 full cores to keep running at full single-thread speed while the E-cores take care of the less demanding threads.
https://www.anandtech.com/show/1621...e-review-5950x-5900x-5800x-and-5700x-tested/8
[Image: AnandTech per-core scaling chart (PerCore-1-5950X_575px.png)]
12900K: 1511 ÷ 8 cores = 188 per core, about 6% less performance than single-thread
(which can be compensated for with a bit more power, as we can see from the O/C numbers).
New Zen 4 (might not be accurate): 1329 ÷ 8 = 166 per core, about 30% less performance.
(And that's just 8 cores, not the full 16-core difference.)
https://cpu.userbenchmark.com/Compa...ed-Marketing-Devices-7600X/m1685583vsm1898605
[Image: UserBenchmark comparison screenshot (1epa5P2.jpg)]
Also, here is a benchmark with the 12900K locked to 125W as well as "stock" at 240W, so it doesn't matter how many hours they run: they will always use that amount of power and get that amount of performance. As you can see, the performance difference is 0% in single-threaded and about 10-20% in multithreaded, while the power increase is 100%.
So if you think that "all the Intel benchmarks you see, none of them are sustained performance over hours", then what they are obfuscating isn't the performance but the energy efficiency, since the stock numbers make it look much worse than it really is.
https://www.hardwareluxx.de/index.p...-desktop-cpus-alder-lake-im-test.html?start=5
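To make the per-core arithmetic above easy to verify, here is a small sketch; the single-thread baselines of 200 and 237 are backed out from the quoted percentages rather than taken from a source, so treat every input as the poster's unverified UserBenchmark reading:

```python
# Re-running the per-core arithmetic from the post, so the 6% / 30%
# figures can be checked directly.
def per_core_drop(all_core_score: float, cores: int,
                  single_thread_score: float) -> float:
    per_core = all_core_score / cores
    return 1 - per_core / single_thread_score

print(f"12900K: {per_core_drop(1511, 8, 200):.0%} below single-thread")  # ~6%
print(f"Zen 4:  {per_core_drop(1329, 8, 237):.0%} below single-thread")  # ~30%
```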
 
  • Like
Reactions: KyaraM
If you are actually multi-tasking, some of your tasks depend on a single thread, and then 16 full cores are horrible: if they are all in use, they run 25% slower than at single-thread speed because you just can't supply enough power to the CPU, so those tasks will run 25% slower than they should.
With the 12900K, these thread-sensitive tasks have up to 8 full cores to keep running at full single-thread speed while the E-cores take care of the less demanding threads.
https://www.anandtech.com/show/1621...e-review-5950x-5900x-5800x-and-5700x-tested/8
[Image: AnandTech per-core scaling chart (PerCore-1-5950X_575px.png)]
12900K: 1511 ÷ 8 cores = 188 per core, about 6% less performance than single-thread
(which can be compensated for with a bit more power, as we can see from the O/C numbers).
New Zen 4 (might not be accurate): 1329 ÷ 8 = 166 per core, about 30% less performance.
(And that's just 8 cores, not the full 16-core difference.)
https://cpu.userbenchmark.com/Compa...ed-Marketing-Devices-7600X/m1685583vsm1898605
[Image: UserBenchmark comparison screenshot (1epa5P2.jpg)]
Also, here is a benchmark with the 12900K locked to 125W as well as "stock" at 240W, so it doesn't matter how many hours they run: they will always use that amount of power and get that amount of performance. As you can see, the performance difference is 0% in single-threaded and about 10-20% in multithreaded, while the power increase is 100%.
So if you think that "all the Intel benchmarks you see, none of them are sustained performance over hours", then what they are obfuscating isn't the performance but the energy efficiency, since the stock numbers make it look much worse than it really is.
https://www.hardwareluxx.de/index.p...-desktop-cpus-alder-lake-im-test.html?start=5
Sorry, but please do not use UserBenchmark. They're openly biased against AMD, so using them as a reference will take credibility away from you. I mean, how can they call AMD "Advanced Marketing Devices" and hope to be taken seriously?

Regards.
 

KyaraM

Admirable
8 performance cores... pffff, weak.
Until Intel offers 16 performance cores, AMD is the leader.
Intel's thinking is half-az. Those efficiency cores would mean something if they could match AMD's performance-core count. Until they do, they will be recognized as crap cores. Of all the Intel benchmarks you see, none are sustained performance over hours, which means they hide the crap cores' true performance and overall give Intel a higher score than it deserves. This is not so much a problem for most games, but it does matter if you are a power user constantly multi-tasking. They should call those performance cores "levellers", as in they prevent huge drops in performance, but they don't actually perform that well. Something tells me core-count increases will be stagnant as they bump other things like clock speeds and FSB to match memory speeds, so expect consumer core counts to remain at 16 until 2024-2026. Which for me means I'll be on a CPU buying strike until 24 performance cores are reached. Also, a 60% faster decompression rate is practically irrelevant, as it already decompresses at gigabyte rates.
Citation needed.

Except it's not, really, since this is a pretty clear case of ignorant people being ignorant, and actual data, such as that provided by TerryLaze above, suggests otherwise. I really wish people would stop making clearly nonsensical claims like this and instead actually try things out for themselves... JFC
 

Deleted member 14196

Guest
I really don't care about any articles relating to how fast processors are. For my purposes they've been fast enough for decades. All this crap is just for bragging rights. YAWN, boring, as Homer says!

I buy a computer once every 8 to 14 years, and I find these articles useless and extremely boring.
 

KyaraM

Admirable
I really don't care about any articles relating to how fast processors are. For my purposes they've been fast enough for decades. All this crap is just for bragging rights. YAWN, boring, as Homer says!

I buy a computer once every 8 to 14 years, and I find these articles useless and extremely boring.
That's cool for you, but it is very much interesting to other people... Besides, if you get a new computer every 8-14 years, at some point it will be of interest to you again, just to see which would be the best option for your individual needs. Unless you want to buy garbage, I guess. So just ignore these articles if you are not interested, and be glad they exist to help you form an opinion when needed. And be happy for others that they have something to discuss, I guess.
 

PCWarrior

Distinguished
May 20, 2013
So now that Intel takes the lead in yet another AMD stronghold, it of course gets downplayed and becomes unimportant and not quite “real world” for the AMD fanboys. Something they themselves ridiculed Intel for doing. But now it's “what tiny percentage uses Cinema 4D?”, “these all-core workloads can be GPU-accelerated anyway”, “I suppose 7-zip matters if you want to archive your entire p*rn library”. Ah, and do you remember when AMD was quite behind in 1080p gaming in the Zen, Zen+, Zen 2 days? “Yeah, but who the heck buys a 700-1000 dollar card and plays in 1080p? As you can see, in 1440p and 4K the GPU is the bottleneck and the CPU choice doesn’t matter. Look at that sweet multithreaded productivity performance. Look at those Cinebench scores.” But now Cinebench has become irrelevant and it's all “ah, look how the 3D cache on the 5800X3D improves [… 1080p] gaming and manages to beat the 12900K [… by 3%]”. I'm waiting to see what they will say about AVX-512 and its relevance now that the roles are reversed. I am willing to bet that they will all be cheering for the few benchmarks that use it and AMD wins, and start blaming the “lazy developers”, Adobe, etc. for stagnation and for not taking more advantage of it. All while just a year ago they were all citing Linus Torvalds’ disparaging comments on AVX-512 in consumer-grade CPUs. There are no worse hypocrites on the planet than AMD fanboys.
 
  • Like
Reactions: KyaraM

salgado18

Distinguished
Feb 12, 2007
If you are actually multi-tasking, some of your tasks depend on a single thread, and then 16 full cores are horrible: if they are all in use, they run 25% slower than at single-thread speed because you just can't supply enough power to the CPU, so those tasks will run 25% slower than they should.
With the 12900K, these thread-sensitive tasks have up to 8 full cores to keep running at full single-thread speed while the E-cores take care of the less demanding threads.
https://www.anandtech.com/show/1621...e-review-5950x-5900x-5800x-and-5700x-tested/8
[Image: AnandTech per-core scaling chart (PerCore-1-5950X_575px.png)]
Yes, but actually no. Ryzen 7000 will have a greatly increased TDP limit, which means that up to Ryzen 5000 there was a tighter cap on the maximum power the CPU could use. So, up to the 5950X, it throttles itself down to keep from overheating, which is a problem Intel doesn't have, and next-gen AMD won't either. Your point is valid up until the current gen, but next gen may show very different behavior in this test. And that's important, because the 7-zip test is with new-gen Intel, so let's talk next-gen AMD too.

Also, if an E-core uses half the energy of a P-core (wild assumption), then 8P plus 16E uses the same energy as 16P, right? So it all comes down to the overall power limit of the CPU, as said above, and not to its being a hybrid architecture.

https://www.tomshardware.com/news/a...ications-pricing-benchmarks-all-we-know-specs
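Spelling out the E-core/P-core energy arithmetic above (the half-power figure is, as the poster says, a wild assumption, not a datum):

```python
# The power-budget arithmetic from the post, made explicit.
P_CORE_POWER = 1.0   # normalized power draw of one loaded P-core
E_CORE_POWER = 0.5   # assumed: an E-core draws half a P-core

hybrid = 8 * P_CORE_POWER + 16 * E_CORE_POWER  # 8P + 16E -> 16.0
all_p = 16 * P_CORE_POWER                      # 16P      -> 16.0
assert hybrid == all_p  # same budget, so the package power limit binds,
                        # not hybrid vs. homogeneous layout
```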
 
Sorry, but please do not use UserBenchmark. They're openly biased against AMD, so using them as a reference will take credibility away from you. I mean, how can they call AMD "Advanced Marketing Devices" and hope to be taken seriously?

Regards.
No matter how biased against AMD they are, their numbers have never been called into question; only the opinions they draw from them and how they choose to talk about it.
If being openly against a company invalidated their numbers, we should also disregard anything GamersNexus or Hardware Unboxed or (insert name here) puts out.
Yes, but actually no. Ryzen 7000 will have a greatly increased TDP limit, which means that up to Ryzen 5000 there was a tighter cap on the maximum power the CPU could use. So, up to the 5950X, it throttles itself down to keep from overheating, which is a problem Intel doesn't have, and next-gen AMD won't either. Your point is valid up until the current gen, but next gen may show very different behavior in this test. And that's important, because the 7-zip test is with new-gen Intel, so let's talk next-gen AMD too.

Also, if an E-core uses half the energy of a P-core (wild assumption), then 8P plus 16E uses the same energy as 16P, right? So it all comes down to the overall power limit of the CPU, as said above, and not to its being a hybrid architecture.

https://www.tomshardware.com/news/a...ications-pricing-benchmarks-all-we-know-specs
That's why I also included the Zen 4 UserBenchmark; it might not be final, it might not even be close to reality, but for now it's the only data on it we have.
 
  • Like
Reactions: KyaraM