Intel & AMD Processor Hierarchy

Testing processors in a private, undisclosed 'test suite' means absolutely nothing. Your sample size is extremely small and does not account for real-world differences from both a hardware and software standpoint. Additionally, by allowing the public to benchmark their own processors, you increase the number of samples by a substantial margin, which further validates the tests.

There are plenty of tests out there to benchmark processors. It seems a bit strange that you opted for a private, undisclosed benchmark over a public one everyone has access to.
 

M42

Reputable
Nov 5, 2020
99
48
4,560
There are other reviews, and here's a nice summary of 12th Gen Intel CPUs (sorry, AMD Zen 3!):

 

Awev

Reputable
Jun 4, 2020
89
19
4,535
Congratulations, Until, you are starting to earn your name of Intel again. Just think what you could do if you actually kept up with AMD: you would be releasing this chip on a 7nm lithography process instead of the 10nm SuperFin-something-or-other, and the power profile would look a whole lot better, around a peak of 150 watts instead of 240. Well, if people don't mind paying extra for those new motherboards, then extra for the proper cooling, extra for a beefier power supply, and extra for a large enough case to hold it all, then you have managed to do something. I still think you need to shrink the chip so it uses less power and, in turn, doesn't force the customer to spend extra on everything else to make your chip work. Well, at least you are competing on instructions per cycle, even if you lose out on a bunch of other things.
 

M42

Reputable
Nov 5, 2020
99
48
4,560
Well, at least you are competing on instructions per cycle, even if you lose out on a bunch of other things.

"Competing" is probably not the right word. "Dominating" might be better. AMD probably did not expect this fast and strong of a response from Intel, so it will be interesting to see what happens.

I think this is going to be a win-win for everyone as performance has gone up significantly in both brands in the last few years, and should continue as the performance battle heats up.

Well, if people don't mind paying extra for those new motherboards, then extra for the proper cooling,

In the case where someone is building a fast gaming computer, the price of some graphics cards will be more of an issue.

Just think what you could do if you actually kept up with AMD

With new technology like DDR5 and PCIe 5, isn't it AMD that has to catch up in some areas?
 

Awev

Reputable
Jun 4, 2020
89
19
4,535
Awev said:
Just think what you could do if you actually kept up with AMD
With new technology like DDR5 and PCIe 5, isn't it AMD that has to catch up in some areas?

I was referring back to my earlier statement of Intel still using 10nm tech for its chips vs. 7nm for AMD. As to DDR5 and PCIe 5, well, if you look at the tests Tom's Hardware published just a few days ago (maybe a week or so), DDR5 is a percent or two slower than DDR4 at the same speeds. Yes, DDR5 does have a higher potential upper end, yet at what cost? How long has it taken for more than a handful of PCIe 4 devices to hit the market? And has 4th-gen PCIe really made a difference for the consumer? While not a desktop CPU, AMD has announced that EPYC Genoa, a 96-core part with DDR5 and PCIe 5 support on a 5nm die, will be released next year. Data centers can better justify the higher cost of early adoption than someone with more dollars and cents than common sense.

So, yes, AMD still needs to catch up in the memory and bus departments for consumers, while Until (can I still call Intel Until?) needs to adopt newer methods of producing chips. Just think how much more power efficient Alder Lake would be if it were manufactured on a 7nm process, with less need to purchase an air conditioner to keep it cool, along with another boost to instructions per cycle just because the atoms don't have as far to travel.
 

M42

Reputable
Nov 5, 2020
99
48
4,560
So, yes, AMD still needs to catch up in the memory and bus departments for consumers, while Until (can I still call Intel Until?) needs to adopt newer methods of producing chips. Just think how much more power efficient Alder Lake would be if it were manufactured on a 7nm process, with less need to purchase an air conditioner to keep it cool, along with another boost to instructions per cycle just because the atoms don't have as far to travel.

Concerning new technology, of course new technology is usually going to cost more at first and the parts scarcer. That is the price of advancing technology (pun intended! :)).

Regarding the 7nm process, AMD outsources the manufacturing of its CPUs, so AMD didn't leapfrog Intel with 7nm tech of its own, but with a third-party vendor's technology (TSMC). If you want an overview of the 7nm process, take a look here:

https://en.wikipedia.org/wiki/7_nm_process

And I have both high-end Intel and AMD systems and neither seems noticeably warmer than the other, even when gaming. We as consumers win when there is competition between vendors like Intel and AMD. Let's cheer them both on so we get bigger boosts in performance and efficiency in each chip generation!
 

Awev

Reputable
Jun 4, 2020
89
19
4,535
Edited by Awev, just to save space, and highlight what interests me.

. . . We as consumers win when there is competition between vendors like Intel and AMD. Let's cheer them both on so we get bigger boosts in performance and efficiency in each chip generation!
AGREED! Yes, it is nice to have a choice. Do you want just a foot warmer (AMD's 142-watt max with consumer CPUs) or a room heater (Until's 242-watt max with Alder Lake)?

The Wikipedia article helps explain why Until wants to go with "our chips are equivalent to this or that process," as AMD and VIA did when manufacturing 80x86 chips years and years ago. So, how many more "+"s can be tacked onto Intel's whateverFINwhatever 10nm fabrication process?
 

M42

Reputable
Nov 5, 2020
99
48
4,560
AGREED! Yes, it is nice to have a choice. Do you want just a foot warmer (AMD's 142-watt max with consumer CPUs) or a room heater (Until's 242-watt max with Alder Lake)?
That's not even close to a room heater unless your room is REALLY small. I have a portable electric heater for my desk which I measured at 8-10 amps at 120V, depending on the setting. That's 960-1200 watts. :)
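For anyone checking the heater math, it's just power = volts × amps; a quick sketch (the 120 V and 8-10 A figures are my own measurements, as above):

```python
def watts(volts, amps):
    """Electrical power in watts: P = V * I."""
    return volts * amps

low_setting = watts(120, 8)    # 960 W on the low setting
high_setting = watts(120, 10)  # 1200 W on the high setting
```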

And even 242 watts is quite a bit less than my RTX3090 consumes when fully loaded (around 400W).

Regarding the 7nm process, maybe you missed the point I was trying to make? That is, AMD has no process of its own, so AMD didn't actually leapfrog Intel; it had to contract with another vendor to do that.

And, BTW, Intel is building a new fab in Arizona for future CPUs and other products. You can read about that here:

https://www.cnbc.com/2021/03/23/int...-to-build-two-new-chip-plants-in-arizona.html
 
  • Like
Reactions: Why_Me

greenmrt

Distinguished
May 19, 2015
71
18
18,565
I would love to see Tom's write a matching chart. I have a 5800X with a 3070 and game in QHD with the effects turned up. As I don't have a 3090TI hanging inside my system, I can't tell from these types of charts if a given CPU will give me more actual performance. Would be great to see some thresholds to help out folks who don't understand how to make such determinations.
 
I would love to see Tom's write a matching chart. I have a 5800X with a 3070 and game in QHD with the effects turned up. As I don't have a 3090TI hanging inside my system, I can't tell from these types of charts if a given CPU will give me more actual performance. Would be great to see some thresholds to help out folks who don't understand how to make such determinations.
It's going to depend on the game, as some are more demanding on the CPU than others. Realistically though, I doubt you would see any perceptible performance difference from a CPU upgrade in just about any game. Even with something like a 3090 running at an unrealistic 720p resolution to remove almost all graphics card limitations, a 12700K isn't going to be more than 10% faster than a 5800X on average, and a 10% difference would be extremely hard to distinguish. At a slightly more realistic 1080p, that drops to around a 5% difference on average in modern games, and at 1440p it drops further to a few percent, as the performance of even today's fastest graphics hardware will be the limiting factor more than anything.

And since a 3070 is typically only around 25-30% behind a 3090 in terms of gaming performance, 3070 performance at 1440p will tend to be roughly comparable to 3090 performance at 4K. Which is to say there will typically only be around a 1-2% difference in performance between a 5800X and a 12700K. Completely imperceptible and not worth considering. In fact, even something like a lower-end i5-10400F should typically perform within around 5% of a 12900K at those kinds of resolutions in most modern games with the settings turned up.

The charts here seem to exaggerate the differences somewhat, likely because they only base their results on 5 games, some of which are atypically demanding on CPU performance. In my opinion, it's not a great resource for an accurate comparison of the performance of these CPUs in games. At the least, don't read too much into the numbers. Really, all mid-range or better desktop CPUs released over the last few years should perform quite similarly in games, and outside perhaps a few examples, one would struggle to perceive any difference between them.
 

greenmrt

Distinguished
May 19, 2015
71
18
18,565
Exactly! I just "upgraded" my CPU from a 3700X to a 5800X (they are 'on sale') and ran some quick benchmarks. Except for some physics tests, it really wasn't an upgrade at all in terms of gaming performance in QHD on an RTX 3070. I've been doing this long enough that I realize this, but I think many new builders come to Tom's for this resource and can be misled into thinking that the CPUs higher up the chart give meaningful gaming jumps with their mid-range GPUs in QHD or 4K.

Really, all mid-range or better desktop CPUs released over the last few years should perform quite similarly in games, and outside perhaps a few examples, one would struggle to perceive any difference between them.
 
  • Like
Reactions: phenomiix6

A2D3RS0N

Prominent
May 3, 2021
55
2
535
The chart is very biased for not including Linux in the benchmarking. Some people believe Microsoft makes bloatware that targets AMD, because Microsoft does a lot of business with Intel. I maintain that Intel CPUs are always programmed for higher clock rates than AMD CPUs, and it's sad how Intel keeps pushing its CPUs with cheap heat sinks and drivers programmed for unsafe clock speeds just to beat AMD. I am sure that if anyone kept track of Intel-based computers vs. AMD, AMD would win on lifespan and reliability. I have owned two Intel computers and they did not last long; I have owned an AMD computer for almost 20 years and it still works. People should take care of their systems, be happy with what they can do with them, and upgrade just to stay secure, even if Linux is the only option for older systems. Benchmarking charts ultimately prove little, because people running hardware under different conditions and with different software will not get the same results. The benchmarks for GPUs are even worse when comparing AMD vs. NVIDIA on Linux. Who do you think would win on Linux, AMD or NVIDIA? And if Intel lowered its clock speeds to a safe level to match AMD, who do you think would win on price vs. performance?
 
Last edited:

guru7of9

Reputable
Jun 1, 2018
58
7
4,545
It may be based on the fact that the i7-2600K is old and may be missing instruction sets for modern games and programs, which will slow performance when it has to brute-force them. Kind of like older games from the nineties that would run on the CPU alone, but gave you a higher frame and polygon count if you added a graphics accelerator.
The Core i7-2600K also runs on a super slow hard drive bus compared to current low-end motherboards.
The video card is also slooooow!
The P67 chipset is limited to PCIe 2.0 and USB 2.0.
Sooo old and slooooow!
 

guru7of9

Reputable
Jun 1, 2018
58
7
4,545
So, with all things being equal and Tom's Hardware being a good, honest, unbiased website, it has confirmed through its own testing that the new AMD Ryzen 7 5800X3D (with 3D V-Cache) is currently the fastest gaming CPU money can buy.
So obviously their very own gaming CPU hierarchy chart will be updated, like in the next few days, to show the new Ryzen 7 5800X3D at the top of the chart, as it is currently the fastest gaming CPU money can buy.
And that is even against the overclocked Intel 12900KS, so they have tested.
I would take that as a fantastic achievement by AMD Ryzen. Great job, AMD!
 
Last edited:

M42

Reputable
Nov 5, 2020
99
48
4,560
So, with all things being equal and Tom's Hardware being a good, honest, unbiased website, it has confirmed through its own testing that the new AMD Ryzen 7 5800X3D (with 3D V-Cache) is currently the fastest gaming CPU money can buy.
So obviously their very own gaming CPU hierarchy chart will be updated, like in the next few days, to show the new Ryzen 7 5800X3D at the top of the chart, as it is currently the fastest gaming CPU money can buy.
And that is even against the overclocked Intel 12900KS, so they have tested.
I would take that as a fantastic achievement by AMD Ryzen. Great job, AMD!
Actually, it's about a wash if you use the fastest RAM that both CPUs can use. AMD is faster in some games and Intel in others.

But AMD's trick is all about the extra cache in the 5800X3D. Without the cache, the 5800X3D has pretty slow single-threaded performance. Passmark shows (today) 2509 vs. 4291 for the 12900KS. That makes the 12900KS about 71% faster at single-threaded performance. It would be really nice if Intel added the extra cache in its next-generation CPUs. Can you imagine another 25-30% improvement in frame rates just for that, not including other improvements? It's a great time for us consumers!! :)
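The 71% figure is just the ratio of the two Passmark scores; a quick sketch of the arithmetic (scores as quoted above, not re-verified):

```python
def percent_faster(score_a, score_b):
    """How much faster score_a is than score_b, as a percentage."""
    return (score_a / score_b - 1) * 100

single = percent_faster(4291, 2509)  # ~71% faster single-threaded
```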
 

guru7of9

Reputable
Jun 1, 2018
58
7
4,545
Actually, it's about a wash if you use the fastest RAM that both CPUs can use. AMD is faster in some games and Intel in others.

But AMD's trick is all about the extra cache in the 5800X3D. Without the cache, the 5800x3d has pretty slow single-threaded performance . Passmark shows (today) 2509 vs 4291 for the 12900ks. That makes the 12900ks about 71% faster at single-threaded performance. It would be really nice if Intel added the extra cache in their next generation CPUs. Can you imagine another 25-30% improvement in frame rates just for that, not including other improvements? It's a great time for us consumers!! :)
Sure you are not talking about multicore/multithreaded performance?
It depends which benchmark you use as to how much faster or slower the 12900KS or whatever is. For actual performance, as in gaming, this is really misleading. E.g., for a lot of games single-thread is supposedly king, but I would dispute that. I don't believe the 12900KS is 71% faster in single-core, not even against the non-3D V-Cache Ryzen 7 5800X. Most things I have read suggest it's more around 20-25% ish.
In multicore, the 12900K/KS has 8 more cores (E-cores), so of course it will crush the 5800X and the 3D V-Cache version, but it doesn't make your games play any faster!

Early reports suggest that AMD Zen 5 will have big/little cores too!
It's starting to seem like both Intel and AMD have settled on 8 cores being all that's required for gaming!
The next 6 months will be awesome for new PC tech: new CPUs and video cards galore!
Roll on, can't wait!
 

guru7of9

Reputable
Jun 1, 2018
58
7
4,545
I'm still waiting for the Ryzen 7 5800X3D to be put at the top of this gaming list!
Proven by Tom's Hardware's own testing!
Last time, they took forever to put Ryzen at the top with Zen 3, while Intel was up well within a week.
Seems pretty typical, but you have to wonder about the continuous inconsistencies, nay, bias!
Why is this so?
 

M42

Reputable
Nov 5, 2020
99
48
4,560
Sure you are not talking about multicore threaded performance?
It depends which benchmark you use as to how much faster or slower the 12900KS or whatever is. For actual performance, as in gaming, this is really misleading. E.g., for a lot of games single-thread is supposedly king, but I would dispute that. I don't believe the 12900KS is 71% faster in single
Yes, I meant single-thread performance. Passmark single-thread performance can be found here:
https://www.cpubenchmark.net/singleThread.html

Reading from cache instead of going out to main memory increases performance incredibly. Think about the memory clock timings: it takes the equivalent of many tens of CPU cycles to access memory, and the CPU often has to wait for the memory access to complete before continuing, which effectively slows the CPU down. If you roughly level the playing field so both CPUs mostly have to access memory instead of cache, then you get the 12900KS being 71% faster in single-thread performance.
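What I'm describing is essentially the textbook average memory access time (AMAT) relationship; here is a toy sketch, where the cycle counts and hit rates are illustrative assumptions, not measurements of either CPU:

```python
def amat(hit_rate, cache_cycles, memory_cycles):
    """Average memory access time in CPU cycles:
    hits are served from cache; misses pay the full memory latency."""
    return hit_rate * cache_cycles + (1 - hit_rate) * memory_cycles

# Hypothetical numbers: ~40-cycle L3 hit, ~200-cycle trip to DRAM.
modest_cache = amat(0.80, 40, 200)  # ~72 cycles per access on average
big_cache    = amat(0.95, 40, 200)  # ~48 cycles per access on average
```

A bigger cache raises the hit rate, and even a modest bump in hit rate cuts the average access time sharply, which is why the extra L3 helps so much in cache-sensitive games.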

In multicore there is an even greater disparity, as the 12900KS is about 94.6% faster:
12900ks: 44624
5800X3D: 22932
 
Last edited:

guru7of9

Reputable
Jun 1, 2018
58
7
4,545
Yes, I meant single-thread performance. Passmark single-thread performance can be found here:
https://www.cpubenchmark.net/singleThread.html

Reading from cache instead of going out to main memory increases performance incredibly. Think about the memory clock timings: it takes the equivalent of many tens of CPU cycles to access memory, and the CPU often has to wait for the memory access to complete before continuing, which effectively slows the CPU down. If you roughly level the playing field so both CPUs mostly have to access memory instead of cache, then you get the 12900KS being 71% faster in single-thread performance.

In multicore there is an even greater disparity, as the 12900KS is about 94.6% faster:
12900ks: 44624
5800X3D: 22932
I see what you are saying, but I think their single-threaded score is total rubbish and defies logic.
Mr. Spock would be up in arms at the ridiculousness of it. He would say logic dictates it to be slightly slower in single- and multi-threaded apps in some circumstances and faster in gaming apps, though how much faster will vary from game to game.
The 5800X scores 3486, so this 5800X3D should be slightly lower, not 2509. Anything else is total rubbish. I think it's probably an error, to be honest! It's the same CPU as the 5800X with slightly lower clock speeds and a much bigger L3 cache!
OK, I looked at the Passmark scores of both the 5800X and the 3D V-Cache version. The sample size for the 5800X is 4290, and for the 5800X3D it is just '1'. They even say the probability for margin of error is HIGH!
As for the multicore score, I would expect the 12900K/KS to be approximately double, seeing as it is running double the cores. This is just logical. Then, if you take into account that the P-cores are a newer generation and faster and the E-cores are a little slower, it still balances out to about double! It's logical!
But multicore vs. the 12900K/KS is a completely different price and product segment.
I don't know how you have come to compare the 12900KS to the 5800X3D for multicore? It's just not logical.
It's all about gaming for the 5800X3D.
 
Last edited:

guru7of9

Reputable
Jun 1, 2018
58
7
4,545
:sleep:

Shouldn't the 5800X3D be listed at the top of the 'Intel and AMD Gaming CPU Benchmarks Hierarchy' list?
For some reason, it's not included in that list at all...?
Yes, we shall wait and see when and where it ends up on the hierarchy chart.
My bet is, going by past efforts, they will take forever to put it on (if at all), but they will not put it at the top because it is not as fast for other programs, i.e. non-gaming apps, even though it is supposed to be a CPU GAMING HIERARCHY LIST.
Watch this space! 😆
I hope they prove me wrong... but I wouldn't hold my breath!
 
Last edited:
  • Like
Reactions: alceryes

M42

Reputable
Nov 5, 2020
99
48
4,560
I see what you are saying, but I think their single-threaded score is total rubbish and defies logic.
Mr. Spock would be up in arms at the ridiculousness of it. He would say logic dictates it to be slightly slower in single- and multi-threaded apps in some circumstances and faster in gaming apps, though how much faster will vary from game to game.
The 5800X scores 3486, so this 5800X3D should be slightly lower, not 2509. Anything else is total rubbish. I think it's probably an error, to be honest! It's the same CPU as the 5800X with slightly lower clock speeds and a much bigger L3 cache!
OK, I looked at the Passmark scores of both the 5800X and the 3D V-Cache version. The sample size for the 5800X is 4290, and for the 5800X3D it is just '1'. They even say the probability for margin of error is HIGH!
As for the multicore score, I would expect the 12900K/KS to be approximately double, seeing as it is running double the cores. This is just logical. Then, if you take into account that the P-cores are a newer generation and faster and the E-cores are a little slower, it still balances out to about double! It's logical!
But multicore vs. the 12900K/KS is a completely different price and product segment.
I don't know how you have come to compare the 12900KS to the 5800X3D for multicore? It's just not logical.
It's all about gaming for the 5800X3D.
If you fully read any of the recent in-depth reviews of the 5800X3D, you will have seen that not only does it clock lower than the 5800X, but it also cannot be overclocked, so for regular applications the 5800X is faster than the 5800X3D.

Again, the only reason the 5800X3D is faster in some games is that those particular games have tighter loops where the 5800X3D can access its internal cache instead of being slowed down by accessing RAM. In other games the 12900KS is much faster because the 5800X3D has fewer cache hits.

If you don't understand why this is so, then I suggest you read a little more on the topic of CPU caches.