News AMD vs Intel 2020: Who Makes the Best CPUs?

We wade into the endless debate: Who makes the best CPU, AMD or Intel?

"Intel still barely holds the edge in per-core performance"

The difference is insignificant at this point though. What Intel does have is an architecture that's almost a decade old. Remember that every mainstream CPU Intel has released post-Nehalem is essentially a version of Sandy Bridge, right down to the telltale ring-bus interconnect. As an architecture matures, it also becomes more stable. Building architecture works the same way: nothing is more stable than the Roman arches in the Colosseum and the aqueducts. That's Sandy Bridge.

When a CPU architecture is this mature and this stable, it can clock to the moon, and that's what Intel is banking on. Do we remember the last Intel architecture that could easily clock well past 5GHz and remain stable? I do (because I'm THAT old.. lol) and it was the Pentium 4. The Pentium 4 could do it partly because it was still running the same 32-bit x86 instruction set that Intel introduced with the i386 in 1985. That's right, by the time of the Pentium 4, Intel's architecture was fifteen years old!

That's how Intel managed to keep the Pentium 4 somewhat relevant when faced with the Athlon 64 FX. Intel is trying the same tactic again with clock speeds. We now recognise that choosing Intel for its clock speed was the wrong choice seventeen years ago.

Are we stupid enough to make that same wrong choice a second time? I sure as hell hope not.
 
The following is from Tom's Hardware: How to Buy the Right CPU: A Guide for 2020
Should you overclock?

Overclocking, the practice of pushing a CPU to its limits by getting it to run at higher-than-specced clock speeds, is an artform that many enthusiasts enjoy practicing. But, if you're not in it for the challenge of seeing just how fast you can get your chip to go without crashing, overclocking may not be worth the time or money for the average user.

In order to make your CPU achieve significantly higher clock speeds than it is rated for out of the box, you'll likely spend extra on an enhanced cooling system and an overclocking-friendly motherboard. While nearly all recent AMD chips are overclockable to some extent, if you want to dial up an Intel chip, you'll have to pay extra for one of its K-series processors (which don't come with coolers at all). By the time you factor in all these extra costs, if you're not shopping at the top of the CPU stack already, you'd be better off budgeting another $50-$100 (£30-£70) for a CPU that comes with higher clock speeds out of the box. And remember, even if you do get all the right equipment, you could still get a chip that doesn't overclock well. Or worse if you don't know what you're doing, you could damage your CPU or shorten its lifespan by pushing too much voltage through it.
Note: Emphasis added by me, except for the section heading

Let's not forget that Tom's Hardware also released a video two years ago, Buy the Right CPU, subtitled Five tips for buying the right CPU. Care to guess what tip #5 is? You guessed it: the same advice, except that in the video they suggest spending $20 to $60 more. Either way, the message is to step up to a faster chip rather than overclock.

So let's skip overclocking unless you are using, or reviewing, a top-of-the-line CPU and are willing to spend extra. Yes, it is nice to know that you can overclock a Ryzen 7 1700 to get the performance of a Ryzen 7 1800X, but don't bother; just purchase a Ryzen 7 3800X, for example.

Bottom line? Figure out what you need, and purchase accordingly. If you need a Threadripper/Xeon CPU then you should be looking at a workstation, not a personal computer, so what are you doing here? As both the article and video state, if you are purchasing a CPU today then purchase current generation and don't worry about who makes the chip, unless there is something very special going on (e.g., using AVX to encode a video). Don't forget to pair the CPU with a proper GPU for the task at hand (no need for an RTX 2080 Ti if you are only gaming at 1080p @ 72Hz; for a proper pairing, check out PC Builds' Bottleneck Calculator).
 
"Intel still barely holds the edge in per-core performance"

The difference is insignificant at this point though.
Excluding games, the 10900K is 3% faster than the 3900X even though the latter has 20% more cores; the per-core difference when all cores are under load works out to around 23%, and that's a huge, very significant gap.

It is not significant if you just want a gaming rig and confine yourself to current GPUs only, further confine yourself to 1440p and above, and further confine yourself to at least high if not ultra settings.
https://www.phoronix.com/scan.php?page=article&item=intel-10500k-10900k&num=10
 
Can you show me where you got that Intel is 23% better in single core?
Last time I checked, Ryzen's IPC is higher.
 
3% faster across a large selection of software compared to a CPU with 20% more cores, so on average each core of the Intel CPU provided about 23% more work. What would you call that?!
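For what it's worth, the arithmetic behind that ~23% figure is easy to check. Here is a minimal sketch in Python, assuming the ~3% overall lead from the Phoronix geometric mean above and the stock core counts (10 for the 10900K, 12 for the 3900X):

```python
# Per-core throughput comparison, assuming the ~3% overall lead cited above
# and the stock core counts of both CPUs (illustrative, not measured data).
intel_overall = 1.03          # 10900K throughput relative to the 3900X (= 1.00)
amd_overall = 1.00
intel_cores, amd_cores = 10, 12

intel_per_core = intel_overall / intel_cores
amd_per_core = amd_overall / amd_cores

advantage = intel_per_core / amd_per_core - 1
print(f"Per-core throughput advantage: {advantage:.1%}")  # ~23.6%
```

Note that this is throughput per core under an all-core load, not IPC; clock speed, power draw and software optimisation are all baked into that number, which is exactly what the replies below argue about.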
 
3% faster across a large selection of software compared to a CPU with 20% more cores, so on average each core of the Intel CPU provided about 23% more work. What would you call that?!
You just forgot the higher frequency, the much higher power consumption, and software optimization.
Of course, I don't want to make your point look completely stupid.
And please don't start with average and 1% watts that you should sync with your monitor.
 
You just forgot the higher frequency, the much higher power consumption, and software optimization.
I didn't forget it at all; all of this stuff is what makes the cores ~23% faster, yes.
And please don't start with average and 1% watts that you should sync with your monitor.
I don't even understand what this is about; software will run the same way whether you sync your monitor or not, whatever that is supposed to mean.
 
It is not making it 23% faster, because in multi-threaded benchmarks that use all the cores AMD is better, and with much lower power consumption.
If you pull an extra 80 watts you can't say it is better.
In multi-threaded benchmarks that use all the cores, AMD is better in some cases while Intel is better in others; you can look at each page of the benchmark I posted to see the individual programs.
It doesn't change the fact that, across all kinds of software, Intel is still about 3% faster than an AMD CPU with 20% more cores.
 
I see this article has been updated, yet it left a few things out. The three things I noticed that did not make it into the article are:

1. Under the CPU Lithography heading, it still states that Intel hopes to get back into the game, size-wise, in 2021. The article recently published here on Tom's Hardware, Intel's 7nm is Broken, Company Announces Delay Until 2022, 2023, shows that the 7nm chips will not be arriving for quite some time, and even the 10nm desktop chips will not arrive until the second half of 2021 - again, another Tom's Hardware article - Intel Says First 10nm Desktop CPUs Land in Second Half of 2021. So the section about when Until will "achieve parity" needs to be updated.

2. Software drivers. In your article about integrated graphics (iGPUs), in the comments section, the author noted that Until has actually backslid on at least one graphics driver, going so far as to make it unusable instead of better. You can look at the post in these forums. While performance increased maybe 25% on the tests where it did work, that is not saying a lot, because it would not work on seven - yes, 7 - of the nine tests.

3. As I have previously pointed out, overclocking should not be a factor unless you are talking about top-of-the-line products - the Core i9 in Until's case, or the Ryzen 9 in AMD's. And that is because of an article, and related video, that was written and posted on this site. Just look at post #102 of this thread for the details (just look up, or if this appears on page six go to the second post on page 5).

It is nice to see you have updated the story with some new information about AMD products that DIY builders cannot purchase through normal channels. It is also great that you have left AMD as the CPU Wars winner.

Just a quick FYI - it is not a typo on my part, as Intel keeps saying it will not be UNTIL 2H21, UNTIL the end of 2022/start of 2023, UNTIL they get to the 5nm node, UNTIL . . . Micro$oft is not much better with winDOZE.
 
In multi-threaded benchmarks that use all the cores, AMD is better in some cases while Intel is better in others; you can look at each page of the benchmark I posted to see the individual programs.
It doesn't change the fact that, across all kinds of software, Intel is still about 3% faster than an AMD CPU with 20% more cores.
According to the graph you posted, the 3950X is about 10% stronger than the 10900K, and not even two times stronger than the 3600 with only 6 cores, even though the 3950X has 16 cores.
Are you working for Intel?
I remember you claiming that the 3950X is worse than the 9900K in multi-threaded work, that no one uses AVX, and your weird claims about the power consumption of Intel.
I am not going to waste any more time when it is very obvious you have an agenda here.
It is not the first time you have carefully chosen graphs that prove your points and misled users on this site.
 
According to the graph you posted, the 3950X is about 10% stronger than the 10900K, and not even two times stronger than the 3600 with only 6 cores, even though the 3950X has 16 cores.
Are you working for Intel?
Yes, the more cores you have, the slower you have to run them if you want to keep the power draw low; that's why you almost never see server CPUs compared against desktop CPUs (there's a rough power-scaling sketch below this post).
Threadrippers, as well as Intel's HEDT parts, are not for regular desktops and are worse in things that need high clocks.
Are you working for AMD?
I remember you claiming that the 3950X is worse than the 9900K in multi-threaded work, that no one uses AVX, and your weird claims about the power consumption of Intel.
I said it is worse in some things; your brain understood "it is worse".. and then it shut off or something and didn't read the rest.
What was weird about the power consumption claims I made for Intel CPUs? At idle and low usage Intel has lower power draw, and every website that doesn't hide idle and low-usage power figures will show that.
I am not going to waste any more time when it is very obvious you have an agenda here.
Yes please stop wasting everybody's time.
It is not the first time you have carefully chosen graphs that prove your points and misled users on this site.
Yes, I carefully chose a benchmark that actually tests a lot of software, and I said so several times, instead of showing a single graph of one or two things that happen to run well on one of the two, or, even better, just claiming things without any link or graph, as so many others here do.
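As a rough aside on the "more cores means lower clocks" point a few lines up: dynamic CPU power scales roughly with cores × frequency × voltage², and voltage has to rise with frequency, so a few cores clocked very high can burn as much power as many cores clocked modestly. A toy sketch with invented numbers, purely for illustration:

```python
# Toy model of package power: cores * k * frequency * voltage^2.
# The constant k and the frequency/voltage pairs are invented for illustration,
# not measured values for any real CPU.
def package_power(cores: int, freq_ghz: float, volts: float, k: float = 2.5) -> float:
    return cores * k * freq_ghz * volts ** 2

# Few cores clocked high vs. many cores clocked lower:
print(package_power(cores=10, freq_ghz=5.0, volts=1.30))  # ~211 W
print(package_power(cores=16, freq_ghz=3.9, volts=1.10))  # ~189 W
```

It also hints at why a 16-core 3950X is nowhere near 16/6 the speed of a six-core 3600 in all-core loads: the extra cores have to run at lower clocks to stay inside the same power budget.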
 
Another post full of lies and manipulative information.
It is up to the moderators, not me, to stop you and Deicidium from spreading your lies about AMD.
I wanted to explain to you why you were wrong, but it is enough to compare the 3600X to the 3950X in these benchmarks to see that you are a liar.
By the way, the 3950X boosts to almost the same all-core clocks as the 3600X, so once again you are lying.
 
It is not significant if you just want a gaming rig and confine yourself to current GPUs only, further confine yourself to 1440p and above, and further confine yourself to at least high if not ultra settings.
It is INSIGNIFICANT regardless of whether or not you WANT it to be. Here's why it's true:

#1 -> I don't know anyone who prefers 1080p gaming over 1440p, because that would be insane. I also don't know anyone who doesn't use high or ultra settings, which puts the GPU in the bottleneck position. This makes it insignificant.

#2 -> Every CPU at the R5-3600X level and higher puts out AT LEAST 122fps at 1080p across Guru3D's ENTIRE TEST SUITE, so even if you have a 122Hz display there is literally no difference, because your display is the bottleneck (a quick sketch of this bottleneck logic follows after this post). So much for your "confine yourself to 1440p" attempt at an argument.
https://www.guru3d.com/articles_pages/amd_ryzen_5_3600x_review,23.html
This makes it insignificant.

#3 -> The overwhelming majority of PC displays today are 60Hz. The number of gaming displays at 122Hz or higher is a tiny minority. The number of competitive gamers who use a 144Hz screen is so small as to be insignificant. That last group is the ONLY one that would seriously benefit from the current Intel CPUs and, again, they are so small as to be INSIGNIFICANT.

It's clear that you love Intel even more than Linus does; the question is, don't you realise that making weak and ill-informed arguments only makes it spectacularly evident? LOL
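To put a rough number on the refresh-rate point in #2: the frame rate you actually see is capped by the slowest link in the chain, whether that is the CPU, the GPU or the display. A minimal sketch, using the 122 fps CPU floor cited above and hypothetical GPU figures purely for illustration:

```python
# The effective frame rate is limited by the slowest link in the chain.
# The GPU numbers below are hypothetical, chosen only to show the idea.
def effective_fps(cpu_fps: float, gpu_fps: float, refresh_hz: float) -> float:
    return min(cpu_fps, gpu_fps, refresh_hz)

# 60 Hz display: the panel is the bottleneck whichever CPU you pick.
print(effective_fps(cpu_fps=122, gpu_fps=140, refresh_hz=60))   # 60
print(effective_fps(cpu_fps=160, gpu_fps=140, refresh_hz=60))   # 60

# 1440p ultra on a 144 Hz display: the GPU is usually the limit.
print(effective_fps(cpu_fps=122, gpu_fps=95, refresh_hz=144))   # 95
print(effective_fps(cpu_fps=160, gpu_fps=95, refresh_hz=144))   # 95
```

Only when both the GPU and the display can outrun the slower CPU does the CPU choice show up on screen, which is the small competitive 144Hz-and-up niche described in #3.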
 
The MS Flight Sim review emphatically demonstrated Intel's two-year-old 9900K ($370) and 9600K ($170) outperforming AMD's current and more costly CPUs. TH didn't even bother to use the current-gen 10900K and 10600K.
 
@Gurg - M$ Flight Simulator 2020 is DirectX 11-bound, only uses four cores, and those four CPU cores are only running at 15 to 20 percent, and that is on an R9 3950X. You can see the details here. You need an RTX 2080 Ti to play at 4K @ 60Hz on Ultra, and you will not hit that often. Right now DirectX 11 and the GPUs are the bottlenecks, not the CPUs. System and GPU RAM are going to play a big part. It looks like 16GB of system RAM and 6GB of VRAM are the minimums you should consider, with 32GB and 8GB+ being more realistic, respectively. So your comments about the CPUs are not helpful with the Micro$oft Flight Sim.
 

You beat around the bush a lot, but maybe if you try real hard you can simply say: the higher-frequency Intel CPUs deliver better performance than comparable AMD CPUs in MS Flight Sim, as well as in most other computer games.

The CPU performance charts don't lie.
 
@Gurg - While I agree with your statement about performance, the value is not there, and it still does not apply here, as the game/sim only uses 15% to 20% of those four cores. And I was disagreeing with your blanket statement, which did not have any supporting facts.

M$ Flight Sim has not been optimized for current hardware, no matter who the vendor is. You can play this on an old CPU with only 4 cores and 4 threads (no Hyper-Threading or whatever the marketing name is), and as long as you have enough RAM and VRAM you will suffer just as badly as the person with the latest and greatest hardware.
 
1080p and 2160p with an RTX 2080 Ti give the same 50 fps average and 40 fps 1% low results.
Really bad optimization.
 
Somehow, through its marketing, Intel has managed to make this important to people, rather than outright delivering a superior product.
+1

You realize this is an enthusiast site, correct?
A site to which more and more "normal" people are coming.

Intel pushed it because that's what people wanted, not because they made it important.
You joined in 2007. You've been here a while. Do you remember when overclocking meant getting more performance out of the budget/midrange CPU you bought so that you got higher-end performance? When you pushed your non-flagship PII past even the mid-level PIIIs in games?

Yeah, one can't do that with Intel anymore. It's "buy the most expensive CPUs and overclock," not "buy anything and overclock, figure out how it works, lives, and breathes, and then make it better."
 
You can't do that with AMD either; almost any Ryzen you get can only be overclocked all-core to below its single-core boost clock, which makes it pretty useless.
CPUs have come a long way since the old days, and running at their limits by default is just one aspect of that.
 