Question I always had Intel CPUs. Convince me about AMD.


Wikingking

Commendable
Hi there!

So I'm looking to build a new rig for gaming. The focus will be playing games at 1080p (21:9, 144 Hz monitor), and possibly 1440p later on. I'm planning to use this setup for the next 4-5 years and am thinking of spending quite a lot of money, so high-end but not NASA-killer setups are in the running (i5-9600K, i7-9700K, RTX 2080-ish GPU).

I have always had Intel processors, but I am aware that AMD has once again become a heavy hitter in the business. I am glad about that, but it is hard to shake my initial discomfort at changing manufacturer and platform. However, I am willing to do it provided there are more than enough valid reasons behind it. Emotions play a part, and I surely can't really go wrong with an i7 or i5, nor with a Ryzen 5 (3600 and X) or Ryzen 7 (3700X).

So please, try to convince me to buy a new rig built on AMD! But please keep in mind: framerate and longevity are the two key factors. I am not really interested in things like faster video export or faster times in WinRAR :D

Thank you :)
 

Mahisse

Distinguished
Ok, so I'm from Hungary. The country is on the expensive end for computer parts due to ridiculous VAT and other stuff. But it is what it is. My budget is limited, although I'm quite lucky in that I'm still willing to pay a lot (compared to the average wages here, at least). The reason behind the stronger GPU is that most tests and people say it is better to invest in the GPU rather than the processor. For example, in my currency the i5 and Ryzen 5 are all roughly 75-80k, while the i7 and Ryzen 7 are 110-120k. That 40k makes the difference between a 2070 Super and (almost) a 2080 Super.

Another thing is that most people believe 6 cores without HT will be insufficient later on - and I'm planning to hold my ground for at least 3-4 years before another major investment (maybe another 16GB of RAM in the meantime, if I feel the urge). Therefore Intel's i5-9600K is almost out of the picture. Ryzen 5, however, is not.

I am fairly sure that there's no really bad decision in this case, but I still want to be as objective as I can, and if there are more arguments in favor of AMD then I am maybe willing to change. It's going to be a completely new rig after all (minus the storage).

Lastly, I can't really make up my mind on OCing. It just might not be my way of handling things. If all AMD chips come pre-OC'd, then so much the better, since I might not even invest in a K processor and a pricey Z370 motherboard... I just think that if I need to OC to get the framerates, then the need for a new setup is around the corner anyway, so it would likely only affect the last year of that rig.

But this answer now looks as if I were convincing myself to buy AMD :D

About the OC: I'm not too sure on the scenario today, but it depends on what is available on the market at the time. For example, I also have an i5 (3570K), and you have to remember, when that processor was new it was quite THE processor to have for a gaming setup. Now, 5-6 years later, I am still able to play most games but do encounter a bottleneck on the CPU. Overclocking the CPU evens out the bottleneck a fair bit with my latest GPU. The OC capability of this particular CPU probably bought me an extra year.
 
This is one of the things I don't get about the hype with AMD. AMD has been trying to come to pair with Intel's CPUs for quite some years. Now that Intel sees competition from AMD, I wouldn't be surprised if Intel releases a new series that beats AMD's recent processors pretty easily.
It happened with Sandy Bridge, whose 4-core/4-thread variant decimated the AMD Phenom hex-cores. I'm so stoked to see Intel's response so I can build a high-end Intel system, as well as 4th-gen Ryzen so I can slot a nice efficient 12-core/24-thread Ryzen chip into my B350M micro-ATX motherboard.
 
It would be entirely unrealistic and unreasonable for anyone to believe that stuttering is only caused by "messed up game engines" and that the solution is to artificially lower fps.

That weird quirk noticed in RDR2 is definitely not typical, but consider that it was built for consoles, which would never reach 120 fps. There was no reason to ever fix this issue in that one game engine, because the game was only available on console until just now.

Lack of smooth gameplay can be caused by a multitude of issues; not having a free thread available for offloading texture streaming, object loading, etc., is one of them (a toy illustration of this is sketched below).
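
To make that concrete, here's a toy sketch (Python; the loader, timings, and names are all hypothetical, not from any real engine): loading an asset on the render thread blows the frame budget, while handing it off to a spare background worker keeps frame times flat.

```python
import queue
import threading
import time

def load_texture(name):
    """Hypothetical loader: pretend disk I/O + decode takes 50 ms."""
    time.sleep(0.050)
    return f"texture:{name}"

requests = queue.Queue()

def streaming_worker():
    # The 50 ms hit happens here, off the render thread.
    while True:
        load_texture(requests.get())

threading.Thread(target=streaming_worker, daemon=True).start()

for frame in range(3):
    start = time.perf_counter()
    if frame == 1:
        requests.put("rock_diffuse")    # async: render thread never blocks
        # load_texture("rock_diffuse")  # sync: would add ~50 ms to this frame
    # ... simulate/render the frame here ...
    print(f"frame {frame}: {(time.perf_counter() - start) * 1000:.2f} ms")
```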

We were talking about "best for gaming" here --- putting up with microstutter from any cause doesn't lend itself to "best for gaming" in any way.

That's why the 9700K is really the best Intel CPU for gaming: the 9600K runs into the lack-of-threads issue in some game engines (the RDR2 issue is not this but something else entirely), and the 9900K is a lot more money for almost no gain at all over the 9700K, especially when both are OC'd to 5.0 GHz.

I would like to see the free memory pool in RDR2 when that stuttering happens.

If they are using managed memory like the CLR or .NET in the engine, it's possible garbage collection is taking effect. This is an automatic process outside the game engine's control unless they manually invoke it. With traditional languages, released memory and resources are freed immediately. With managed engines, the memory and resources are still hanging around. This has benefits: most programs don't need to clean up memory immediately because there is usually so much of it these days, and deferring the cleanup leads to less fragmenting of memory, like defragging an HDD - when the cleanup begins, it is more likely that there are adjoining dirty blocks which can be joined together, which is what reduces fragmentation.
But if things get really bad, the active memory gets reshifted to compact it. That takes considerable time.
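
That pause is easy to demonstrate in any managed runtime. A minimal sketch, using Python's cycle collector as a stand-in for the CLR (all numbers illustrative): fill the heap with garbage, then time a forced full collection against a frame budget.

```python
import gc
import time

# Build lots of objects with reference cycles so the collector has real
# work to do - a stand-in for the heap scan a managed engine performs.
junk = []
for _ in range(500_000):
    a, b = [], []
    a.append(b)
    b.append(a)            # a <-> b reference cycle
    junk.append(a)
junk = None                # drop our references; the cycles keep objects alive

start = time.perf_counter()
collected = gc.collect()   # force a full collection pass
pause_ms = (time.perf_counter() - start) * 1000

# At 144 fps the frame budget is ~6.9 ms; a collection pause several
# times that, landing mid-game, shows up as a visible stutter.
print(f"collected {collected} objects in {pause_ms:.1f} ms")
```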
 
.. I wouldn't be surprised if Intel releases a new series that beats AMD's recent processors pretty easily.

For a certainty they're working on something. But most pundits have speculated it will be a couple of years at least, and by then they'll be staring down the barrel of Zen 5.

But most importantly: the real threat AMD poses is not on the desktop but in the server farm. Intel has to respond there quickly or AMD's inroads will be impossible to address. And they're simply incapable of doing so, again, for two years or so.
 

Mahisse

Distinguished
Another potential pro-AMD point is perhaps the rate at which CPU sockets are changed with new releases. With its new architecture AMD has picked up the pace here as well, but Intel was at a point where it was just becoming ridiculous - meaning you had to buy a new motherboard every time you wanted to upgrade the CPU.
 

Mahisse

Distinguished
For a certainty they're working on something. But most pundits have speculated it will be a couple of years at least, and by then they'll be staring down the barrel of Zen 5.

But most importantly: the real threat AMD poses is not on the desktop but in the server farm. Intel has to respond there quickly or AMD's inroads will be impossible to address. And they're simply incapable of doing so, again, for two years or so.

Yep... Gaming is not such a serious battlefield for the two contenders. It's more about the server processors, where AMD has shown some serious teeth.
 
This is one of the things I don't get about the hype with AMD. AMD has been trying to come to pair with Intel's CPUs for quite some years. Now that Intel sees competition from AMD, I wouldn't be surprised if Intel releases a new series that beats AMD's recent processors pretty easily.
I'm afraid we are too close to the physical limits of process technology and frequency for Intel to find a way to "pretty easily" beat itself, or AMD, or anybody else. All that's left now is to throw in as many cores as possible while using less power to do it. Apart from core count, the things to improve are the power required to run the CPU, more and better cache use, and the coming DDR5. I doubt Intel will throw much money at mainstream processors; there's bigger fish to fry, AI among others.
 

joeblowsmynose

Distinguished
It's not 100% always true that you would be GPU-bound with an RTX 2060 at 1080p. While it is true with some newer titles and even a few older titles, those are outliers for now. After a quick Google search I was able to find some games that are CPU-bottlenecked with an RTX 2060 at 1080p, including some very popular titles: Fortnite, GTA V, Battlefield V, COD: WWII, Assassin's Creed Unity, CS:GO, Fallout 76, Assassin's Creed Origins/Odyssey, Far Cry 5/New Dawn, Shadow of the Tomb Raider, Rainbow Six Siege, World of Warcraft, and there are many more. Sure, some others are more GPU-bound, but most games will be CPU-bound at 1080p with an RTX 2060.

As for future titles, yeah, those will probably become more GPU-bound. But if you have $700 or more for an RTX 2080, why not save money by getting, say, an RTX 2060 Super? But anyway... GPU-bound is what we want, right? The RTX 2060 can even play a good amount of games in 4K at greater than 40 fps with some settings tweaked. At 1440p the card is good for 60 fps or somewhere just north of that at high/ultra. And at 1080p it's pretty good for somewhere over 100 fps most of the time. I know about where the card should perform.

No. Not with an R5 3600 or 9700K ... I specifically said "modern gaming CPU".

If you play a really old game, turn game and driver settings to super low, ensure AA is off, and have a Pentium processor or a quad-core Bulldozer or old APU, then yes. But that's not a reasonable gaming rig - and if you don't have a 240 Hz+ monitor it makes no difference, because your visible FPS is capped by the monitor's refresh rate.

Show me sources for your finds that a 2060 will bottleneck a modern gaming CPU.

When the game is not CPU bound, only the strength of the GPU matters, so a 2080 will far outperform a 2060 ... I don't think you are quite understanding how this all works.
 

joeblowsmynose

Distinguished
I would like to see the free memory pool on rdr2 when that stuttering happens.

If they are using managed memory like the CLR or .NET in the engine, it's possible garbage collection is taking effect. This is an automatic process outside the game engine's control unless they manually invoke it. With traditional languages, released memory and resources are freed immediately. With managed engines, the memory and resources are still hanging around. This has benefits: most programs don't need to clean up memory immediately because there is usually so much of it these days, and deferring the cleanup leads to less fragmenting of memory, like defragging an HDD - when the cleanup begins, it is more likely that there are adjoining dirty blocks which can be joined together, which is what reduces fragmentation.
But if things get really bad, the active memory gets reshifted to compact it. That takes considerable time.

Interesting theory. Sounds like it should be easily testable.

It's not the only game engine I have seen with this issue. The old Homeworld 2 game engine pretty much locked up at anything over 200 fps - you might get a frame update every 20 seconds or so if you hit that 200 fps mark. The only way to get it back to normal was to zoom out all the way so all level elements needed drawing, which slowed the FPS back under 200, and things went back to normal.

I had to turn on vsync to play it after the game had aged considerably. When they re-released it, I noticed they had fixed this issue.
 
Interesting theory. Sounds like it should be easily testable.

GCs are marked in performance profilers. Unfortunately, running in debug mode to catch this might throw the entire thing off. As managed-memory garbage collection happens on a separate thread, and all the threads are already busy rendering frames, I can see how GC can cause an issue.

For more details: https://docs.microsoft.com/en-us/dotnet/standard/garbage-collection/fundamentals

From Microsoft:

[Diagram from the linked docs: the conditions that trigger a garbage collection]


A similar process also occurs with virtual memory allocation.
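
As a cheap way to see those GC marks without attaching a debugger, here's a sketch using Python's gc.callbacks hook as a stand-in for a real profiler (the CLR exposes its own events; this is only illustrative): it timestamps every collection so the pauses show up in a log.

```python
import gc
import time

# Log every collection with its pause time, without attaching a
# debugger that could distort the timings.
_starts = {}

def gc_logger(phase, info):
    gen = info["generation"]
    if phase == "start":
        _starts[gen] = time.perf_counter()
    else:  # phase == "stop"
        pause_ms = (time.perf_counter() - _starts.pop(gen)) * 1000
        print(f"gen {gen} GC: {info['collected']} objects collected, "
              f"{pause_ms:.2f} ms pause")

gc.callbacks.append(gc_logger)

# Any allocation-heavy workload now has its GC pauses logged.
data = [[i] * 50 for i in range(200_000)]
del data
gc.collect()
```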
 

joeblowsmynose

Distinguished
This is one of the things I don't get about the hype with AMD. AMD has been trying to come to pair with Intel's CPUs for quite some years. Now that Intel sees competition from AMD, I wouldn't be surprised if Intel releases a new series that beats AMD's recent processors pretty easily.

A couple things with this ...

"... Trying to pair with Intel's CPUs ..." (BTW best English term would be " ... trying to find parity with Intel's cpus ...")

There's pretty much only one place where Zen 2 isn't at least somewhat superior to Intel's latest offerings, and that is gaming when the CPU is bottlenecked. Usually when a CPU is bottlenecked, FPS is in the 150-400 range, because the GPU doesn't have much work to do beyond churning out easy frames (low res, low settings, etc.).

Let's say the Intel CPU, when bottlenecked, produces 300 fps and the AMD one produces 270 fps (note that this would likely only be possible on a very high-end graphics card - one which almost no one owns). Now, what refresh rate is your monitor? 75 Hz? 90 Hz? 120 Hz? 240 Hz? In this case, both CPUs are capped at whatever the monitor is capable of. Add to that, you will never see a 10% delta in FPS at anything over 100 fps - the difference is just too small.
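
A quick back-of-the-envelope script (just the arithmetic from the paragraph above) shows why: the gap is a fraction of a millisecond per frame, and a fixed-refresh monitor erases it entirely.

```python
# Frame-time math for the 300 vs 270 fps example above.
def frame_time_ms(fps):
    return 1000.0 / fps

intel_fps, amd_fps = 300, 270
gap_ms = frame_time_ms(amd_fps) - frame_time_ms(intel_fps)
print(f"300 fps = {frame_time_ms(intel_fps):.2f} ms/frame")  # 3.33 ms
print(f"270 fps = {frame_time_ms(amd_fps):.2f} ms/frame")    # 3.70 ms
print(f"per-frame gap: {gap_ms:.2f} ms")                     # ~0.37 ms

# And on a fixed-refresh monitor, both are capped to the same output:
monitor_hz = 144
for fps in (intel_fps, amd_fps):
    print(f"{fps} fps on a {monitor_hz} Hz panel -> "
          f"{min(fps, monitor_hz)} fps shown")
```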

This scenario is really strict and only applies to about 0.001% of all gamers out there. It is generally an artificially induced situation, set up strictly for making graphs in reviews ... it's not "real world", except to that 0.001%.

At pretty much any other point AMD is (at least a little) ahead of Intel ... (Intel is still better at SuperPi, though, if you like to play that ;), and Quick Sync encoding in Premiere is wonderful)

AMD has better thermals per performance, better value per performance, better performance per watt, better outright multithreading across the board (not even a competition in this area), more efficient SMT, more consumer-friendly platforms, etc. (at this current point in time)


It happened with Sandy Bridge, whose 4-core/4-thread variant decimated the AMD Phenom hex-cores. I'm so stoked to see Intel's response so I can build a high-end Intel system, as well as 4th-gen Ryzen so I can slot a nice efficient 12-core/24-thread Ryzen chip into my B350M micro-ATX motherboard.

If Intel could just whip up something better really fast, they would have done it. Are they feverishly working on stuff to fight off AMD? Yes - they even admitted that AMD would overtake them for about two years, particularly in the desktop and server spaces, which has already happened with Zen 2. They did figure they might be able to hold onto their bottlenecked-CPU gaming numbers, but we'll see with Zen 3 next year.

Zen 3 should launch in ~7-9 months if AMD keeps to their roadmap, which they have been executing quite well. So whatever new thing Intel might come up with right away will likely be competing with Zen 3 (which has significant cache improvements that may help in gaming) ... not even Zen 2.

Intel won't have a major, meaningful response until 2021, IMO ... In 2020 they plan on adding two more cores to the 9900K, still at 14nm. That's their current roadmap, and it doesn't sound too exciting to me.

But they didn't hire Jim Keller (the head architect for Zen) for nothing ... and I think by 2022 Intel will have an entirely new architecture that will be very exciting. Right now and for the near future AMD is looking a lot more exciting.
 
...
But they didn't hire Jim Keller (the head architect for Zen) for nothing ... and I think by 2022 Intel will have an entirely new architecture that will be very exciting
....
I seriously wonder just how much near-term impact an AMD guy can have with a deeply entrenched Intel design and development team with their own personal stakes to defend. And two years in microprocessor design is VERY near-term... if he has any impact, it will be on whatever follows what comes in two years.

I personally think the primary value derived from hiring him away is to deny AMD his services.

Not to say Intel's not working on something. I'm just not sure he'll influence it all that much.

AMD's real advantage will come from whatever IP they've accumulated in their march to Zen on 7nm. It could be Intel will have to license quite a bit to get back in the game.
 

joeblowsmynose

Distinguished
I seriously wonder just how much near-term impact an AMD guy can have with a deeply entrenched Intel design and development team with their own personal stakes to defend. And two years in microprocessor design is VERY near-term... if he has any impact, it will be on whatever follows what comes in two years.

I personally think the primary value derived from hiring him away is to deny AMD his services.

Not to say Intel's not working on something. I'm just not sure he'll influence it all that much.

AMD's real advantage will come from whatever IP they've accumulated in their march to Zen on 7nm. It could be Intel will have to license quite a bit to get back in the game.

I don't quite agree with all of that.

Jim Keller is not really an "AMD guy". He is one of the best CPU architects in the world and bounces around from company to company. He's already been working at Intel for over a year, I believe. I wouldn't underestimate what he could do with Intel's resources behind him. AMD should be worried about what Intel can come up with in two-plus years, and I think AMD's "foot to the floor" approach lately is testament to this.
 
I don't quite agree with all of that.

Jim Keller is not really an "AMD guy". ...
But coming straight from AMD, he will be perceived as a threat by some; it's bound to happen. It's simple organizational dynamics at play, where personalities are everything.

And even so, it will take at least 5 years to bring a clean-sheet, all-new architecture to market, if that's what he's supposed to do. It would be interesting to know, because he might very well be building his own design team to do that very thing in order to bypass all the drama. Intel has sufficient cash to do it.

Otherwise, all he'll be doing is working with what they have because they have to have something in 2 years. That will inevitably constrain him.

I see AMD's "foot to the floor" approach as driven by a number of reasons. To my mind, probably the most important is that once they're established in the data center they'll be hard to displace. The cloud services are tired of paying Intel's exorbitant prices, so even if AMD is merely "competitive" they'll get 49% market share just so the cloud providers can maintain a viable second source, until they pop their "next big thing". But they have to get there while they have the clearly and uncompromisingly superior product. They were at this same point once before, but now that Intel's being watched so closely on three continents (for illegal and monopolistic business practices) they might actually get there.

They just can't let up on it... it helps a bunch that they have a strong and visionary CEO this time.
 

joeblowsmynose

Distinguished
And even so, it will take at least 5 years to bring a clean-sheet, all-new architecture to market, if that's what he's supposed to do. It would be interesting to know, because he might very well be building his own design team to do that very thing in order to bypass all the drama. Intel has sufficient cash to do it.

Otherwise, all he'll be doing is working with what they have because they have to have something in 2 years. That will inevitably constrain him.

He's been at Intel for over 1.5 years already, and with Intel's resources I'd say they could do an architecture in four years - leaving 2.5 years from now, so sometime in 2022, perhaps? I'm really speculating here, though ... Jim's work might actually be nearing a finish at Intel - while it might take up to five years for a brand-new architecture to come to fruition, the head architect is only really needed for the first half of that.

I doubt Jim is working much, if at all, on improving existing designs. He's an architect, and they usually work only from the ground up.
 
....
I doubt Jim is working much, if at all, on improving existing designs. He's an architect, and they usually work only from the ground up.
Hah! My mistake! I just assumed that, in taking on the title of Senior VP and General Manager of the Silicon Engineering Group, his responsibilities would go a little further now... to include bringing to market whatever's next. If time allowed an S-VP/GM to dabble in detail design, though, I can see how he'd be mostly involved in laying out the architecture.
 

joeblowsmynose

Distinguished
Hah! My mistake! I just assumed that, in taking on the title of Senior VP and General Manager of the Silicon Engineering Group, his responsibilities would go a little further now...

Well, you might be partly right, but titles are generally just for fleshing out an impressive resume ;) ... if you look at Jim's history, he doesn't stay in any one camp for very long. I'm pretty sure Intel really needed a new architecture and that was their impetus for hiring him ... Core is getting a bit long in the tooth.

He's worked for seven different companies since 1999.

 

joeblowsmynose

Distinguished
Not really... S-VP and GM makes you an officer of the Intel Corporation. That kind of puts you on the hook when things go wrong.

More importantly, it allows you to make the decisions you need to get things done the way you see fit - something he'd require to do his job to the very best of his ability.

No sense in buying a top of the line sports car then limiting its top speed to that of what everyone else is driving.
 

Mahisse

Distinguished
A couple things with this ...

"... Trying to pair with Intel's CPUs ..." (BTW best English term would be " ... trying to find parity with Intel's cpus ...")

There's pretty much only one place where Zen 2 isn't at least somewhat superior to Intel's latest offerings, and that is gaming when the CPU is bottlenecked. Usually when a CPU is bottlenecked, FPS is in the 150-400 range, because the GPU doesn't have much work to do beyond churning out easy frames (low res, low settings, etc.).

Let's say the Intel CPU, when bottlenecked, produces 300 fps and the AMD one produces 270 fps (note that this would likely only be possible on a very high-end graphics card - one which almost no one owns). Now, what refresh rate is your monitor? 75 Hz? 90 Hz? 120 Hz? 240 Hz? In this case, both CPUs are capped at whatever the monitor is capable of. Add to that, you will never see a 10% delta in FPS at anything over 100 fps - the difference is just too small.

This scenario is really strict and only applies to about 0.001% of all gamers out there. It is generally an artificially induced situation, set up strictly for making graphs in reviews ... it's not "real world", except to that 0.001%.

At pretty much any other point AMD is (at least a little) ahead of Intel ... (Intel is still better at SuperPi, though, if you like to play that ;), and Quick Sync encoding in Premiere is wonderful)

AMD has better thermals per performance, better value per performance, better performance per watt, better outright multithreading across the board (not even a competition in this area), more efficient SMT, more consumer-friendly platforms, etc. (at this current point in time)




If Intel could just whip up something better really fast, they would have done it. Are they feverishly working on stuff to fight off AMD? Yes - they even admitted that AMD would overtake them for about two years, particularly in the desktop and server spaces, which has already happened with Zen 2. They did figure they might be able to hold onto their bottlenecked-CPU gaming numbers, but we'll see with Zen 3 next year.

Zen 3 should launch in ~7-9 months if AMD keeps to their roadmap, which they have been executing quite well. So whatever new thing Intel might come up with right away will likely be competing with Zen 3 (which has significant cache improvements that may help in gaming) ... not even Zen 2.

Intel won't have a major, meaningful response until 2021, IMO ... In 2020 they plan on adding two more cores to the 9900K, still at 14nm. That's their current roadmap, and it doesn't sound too exciting to me.

But they didn't hire Jim Keller (the head architect for Zen) for nothing ... and I think by 2022 Intel will have an entirely new architecture that will be very exciting. Right now and for the near future AMD is looking a lot more exciting.

Honestly, this reply seems to be trying to convince yourself rather than me. Firstly, I think most people on this forum realize there is a cap from the monitor's refresh rate. Secondly, an overhead of 30 fps (300 vs 270) would still be 10% better - 10% that could help improve a 4K gaming experience, for example.
 

joeblowsmynose

Distinguished
Honestly, this reply seems to be trying to convince yourself rather than me. Firstly, I think most people on this forum realize there is a cap from the monitor's refresh rate. Secondly, an overhead of 30 fps (300 vs 270) would still be 10% better - 10% that could help improve a 4K gaming experience, for example.

100% wrong ... at 4K the GPU is easily the bottleneck and the CPU won't make any difference. Why is this so hard for people to understand? At anything higher than 1080p the GPU becomes the bottleneck again ... You may have missed my earlier post showing that when you combine the 1080p and 1440p differences (excluding 4K), there is only 4% between AMD and Intel ... not 10%. 4% is way less than you could get just from tuning your memory on an AMD CPU - Gamers Nexus claims that up to a 20% increase in gaming can be had on Ryzen just from OCing and tweaking memory.

Look here below ... 9900K vs the much cheaper AMD 2700X at 4K across a bunch of high-end cards ... what happened to the 10%? It looks like 0% to me ... What you generally see in the graphs in reviews is not representative of real-life gaming (with the exception of some Techspot and AnandTech reviews; tip of the hat to you both).

Why is it so hard to convince people of this? I know why: because lazy reviewers won't disclaim that the setups they use to bottleneck the CPU aren't typical of any real gaming scenario except for the 0.001% of people mentioned earlier ... or won't include any real-world gaming graphs. I won't mention any names, ahem, Paul Alcorn, ahem ...

Let's ask this question ... if all you did was game, at 4K, even with a 2080 Ti, then between the last-gen 2700X and the 9900K, which purchase would be completely wasting $350 of your hard-earned cash? The graph below gives you the answer ... Is the Intel brand name alone worth $350?


[Chart: 2700X vs 9900K, average 4K fps across several high-end GPUs]
 
...
Why is it so hard to convince people of this? I know why: because lazy reviewers won't disclaim that the setups they use to bottleneck the CPU...
..
Just to add a point... Steve Burke at GamersNexus has stated in his CPU reviews that he does gaming comparisons at 1080p in order to remove the GPU bottleneck, since at 1440p and higher, e.g. 4K, it becomes pretty much pointless.

I'm pretty sure that Steve at Hardware Unboxed has said as much too.

People just choose to ignore that.
 

joeblowsmynose

Distinguished
Just to add a point... Steve Burke at GamersNexus has stated in his CPU reviews that he does gaming comparisons at 1080p in order to remove the GPU bottleneck, since at 1440p and higher, e.g. 4K, it becomes pretty much pointless.

I'm pretty sure that Steve at Hardware Unboxed has said as much too.

People just choose to ignore that.

Also (to add another point), note that what is labelled "pretty much pointless" because the difference is not worth mentioning IS where almost all real-life gaming happens - at the point where the CPU makes no difference.
 
