Ryzen Versus Core i7 In 11 Popular Games



No, I was just referring to people asking "how much did Intel pay for this review?"
 
I don't get why so many reviewers refuse to run stable, standard overclocks on BOTH Ryzen and Kaby Lake.

Would it make Ryzen look *very* bad to go up against a 4.9-5.0 GHz i7-7700K? I think it would look disastrous for Ryzen in games. Add 15-20% more horsepower and you aren't talking "close enough if you like to game" but rather "substantially behind".
 


The simple answer is that you are never guaranteed a high OC on a CPU, so using an OC'ed CPU as a *baseline* is misguided at best and a horrible idea at worst. Plus, investigating OC is a different topic entirely. I am sure Toms will get to it eventually.

Cheers!
 


You are observant :)
This was the first time I tested CIV VI, so the i5-7600K's lead in the AI test certainly stood out to me as well. As a first measure, I did a bit of research and found that other outlets (that I generally trust and which also used DX12) also recorded a leading position for the i5-7600K over the -6900K in the AI test. Of course, the question is why, so I logged CPU usage during the AI test on the -6900K. The workload spreads well among the -6900K's cores, which suggests it should easily beat the i5. However, with Hyper-Threading enabled, utilization hovers in the 35-45% range for most of the -6900K's 16 logical cores during the test, and some were as low as 25%. In contrast, the i5-7600K's four cores average 85%, so you are getting far better per-core utilization.
Hyper-Threading is one of the major differentiators (outside of clock, core count, and cache) between the two processors, so I disabled Hyper-Threading on the -6900K and ran the AI test again – it yielded an 8% increase in performance. This time around, the -6900K's CPU usage across the eight cores was in the 55-75% range. The workload scales well across cores, but the engine spawns a lot more extra threads on the -6900K, which likely penalizes it through increased context switching and perhaps inter-thread communication. The AI test is generally very stable and doesn't suffer much run-to-run variability, but we present an average of 5 test runs for the AI score to smooth out any margin of error or variability.
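For anyone who wants to reproduce that kind of per-core logging at home, here's a minimal sketch using Python's psutil (our in-house tooling is different; the duration, interval, and CSV output here are arbitrary illustrative choices):

```python
# Minimal per-core utilization logger (Python + psutil).
# Run it alongside the benchmark, then inspect the CSV.
import csv
import time
import psutil

DURATION_S = 60      # how long to sample (illustrative)
INTERVAL_S = 1.0     # sampling period (illustrative)

with open("cpu_log.csv", "w", newline="") as f:
    writer = csv.writer(f)
    n_cpus = psutil.cpu_count(logical=True)
    writer.writerow(["t"] + [f"cpu{i}" for i in range(n_cpus)])
    start = time.time()
    while time.time() - start < DURATION_S:
        # cpu_percent blocks for INTERVAL_S and returns one
        # utilization figure per logical CPU.
        per_cpu = psutil.cpu_percent(interval=INTERVAL_S, percpu=True)
        writer.writerow([round(time.time() - start, 1)] + per_cpu)
```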
The -6900K's sudden jump in the graphics test also stood out as wonky, and again, as a first measure, I verified that other outlets had experienced similar results – and they have. Again, back to the question of why. I logged the -6900K's CPU utilization during the graphics test; core utilization averaged in the 30-40% range, and once again, the workload spreads well across the cores. The i5, in contrast, averages 55-70% on its cores, so its per-core utilization edge is smaller than in the AI test. I think in this case the lower per-thread load allows the -6900K to leverage its thread count advantage and take the win. The graphics test and AI test are also two different animals, designed to stress different factors (the graphics test involves a lot more manipulation of the scene and is much longer). I also recorded higher GPU utilization during the graphics test with the -6900K, but that's kind of a no-brainer. I think the -6900K's massive draw call advantage (as measured in our 3DMark tests) comes into play.
As a general rule, most games favor either clock rate or core count, but that isn't always the case. Because the workloads are so different, I also theorize that the -6900K's beastly L3 cache advantage (20MB vs. 6MB) comes into play during the graphics test.
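On the extra-threads point above: a quick, hypothetical way to see oversubscription overhead for yourself is to split a fixed amount of CPU-bound work across more workers than you have cores and compare wall times (the workload and counts below are arbitrary; the oversubscribed run typically comes out a little slower):

```python
# Same total CPU-bound work, split across N workers vs. 4*N.
import multiprocessing as mp
import os
import time

def spin(n):
    # Busy loop so each worker is purely CPU-bound.
    x = 0
    for _ in range(n):
        x += 1
    return x

def run(workers, total_iters=200_000_000):
    start = time.perf_counter()
    with mp.Pool(workers) as pool:
        pool.map(spin, [total_iters // workers] * workers)
    return time.perf_counter() - start

if __name__ == "__main__":
    cores = os.cpu_count()
    for w in (cores, cores * 4):
        print(f"{w:3d} workers: {run(w):.2f}s")
```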
 


Afterthought: from a smoothness perspective, my subjective opinion is that the Intels and the Ryzens are somewhat similar, though I did notice that the Ryzens tend to be a bit choppier the first time you load a benchie. Also, they seem to take longer to load games, and the same can DEFINITELY be said about overall boot time. Ryzens take much longer to boot, but this is probably the early firmware at play.

 
"Out of interest, what do you consider "great" framerates and what games is your rig struggling with? Do you have a variable refresh rate monitor?"

I'm not asking for anything other than a fairly solid 60fps; I can't stand screen tearing, and my monitors are 60Hz.
I don't have G-Sync or FreeSync, just plain old triple-buffered Vsync.

Offhand, Deus Ex: MD still struggles to hit a consistent 60fps at 1080p using 4x MSAA (DX11 gives higher framerates on Nvidia, but DX12 seems more stable in terms of max/min, so I use that one most of the time). GTA 5, with all options maxed at 1080p except 4x MSAA and no AA on reflections, has bad frame drops at times in the city, especially on the road past the theater district. Both games run off a dedicated SSD (not used for OS or swap), so it's not a slow, fragmented physical drive issue. Neither game is known for being single-thread heavy (they both perform very well on the old FX 8-cores), which is a bit of an outlier in the gaming world, so it's most likely a GPU-taxed issue.
 
"Someone says games are not optimized to use 8 cores...well that applies to 6900k as well sherlock. "

Not sure why you're getting downvoted for this statement; it's pretty much true. Must be residual, lol.
 


I'd guess it's 50% residual only because of previous statements, and 50% because of the 'tone'. That last word was completely unnecessary and doesn't help keep a discussion civilized.
(I'll clarify that I didn't downvote that specific post)
 
Ryzen has gotten hammered for not being a good gaming CPU. This myth must be dispelled. First of all, the reviewers claim that low-res gaming is a predictor of future gaming performance. It is not. See "the tech press loses the plot": https://www.youtube.com/watch?v=ylvdSnEbL50

Secondly, anyone buying a new computer, or upgrading, will not be gaming below 1440p, and quite possibly will even be at 4K! Who still games below that? Also, at obsolete resolutions like 1080p, you are creating so many frames with a modern video card that it doesn't matter how fast your processor is anyway!

Thirdly, you need processors that aren't being hammered at 100% constantly to prevent dropped frames and jerky motion. AMD, not Intel, delivers the smoothest gameplay without dropped frames, since not all cores are maxed out constantly!

Fourthly, Ryzen performance will only get better over time as more software is written to take advantage of it! If you want a future-proof computer, you want more cores that work together efficiently (and are more efficient at multi-threading), not just single-threaded performance! My five-year-old 32-core AMD workstation is still a beast today!

Lastly, what about performance per watt and per dollar? The Ryzen 1700 blows Intel out of the water here. Why waste electricity and heat your home with your PC?

For these reasons and more, you should be computing/gaming on a Ryzen! (Drop Microphone.)
 


If you want an objective viewpoint, listen to the guy called "Zen Gamer", not some obviously-biased "Tom's Hardware". /S
😀

 


BTW, I went by "Zen Gamer" before AMD chose it. I hesitated before even using it for that very reason 🙂

 


Mhmmm me too...

So then why are you here arguing against the results of benchmarks and calling benchmarks a "myth"?

Ryzen has gotten hammered for not being a good gaming CPU. This myth must be dispelled.

You can't dispel a "myth" of Ryzen not being good for gaming until the benchmarks actually show something different. So while you might be right that this could improve in the future, it's not a myth; it's literally fact and has been proven.

 


It depends what you're looking for in a computer. Ryzen is excellent at everything except single-threaded performance, which, over time, will be less and less important as software is written to use all CPU cores. At 1080p, you already get more frames than you need. At 1440p+ you hit the GPU bottleneck, and it doesn't matter much what your CPU does, except for dropped frames caused by the CPU getting maxed out.

I've seen the benchmarks, and for an RX 470 and above, I have seen maybe one game where anyone could say it actually looked any better on an i7 or i5 CPU.

The difference I've seen is in dropped frames. If your CPU is maxed out, you have trouble when moving rapidly, and you drop frames. Watch any side-by-side CPU usage meters while playing games and you'll see what I mean.

So, yes, Intel is slightly better at getting frames out today, in ways you won't notice; however, if you are buying for the future, the Ryzen 1700 is the way to go.
 

Yes, which is why these are specifically gaming benchmarks. To which you specifically said "Ryzen has gotten hammered for not being a good gaming CPU. This myth must be dispelled."

When/if things improve, I'm sure they'll revisit how games perform. However, right now it is what it is.
 
"It depends what you're looking for in a computer. Ryzen is excellent at everything except single-threaded performance, which, over time, will be less and less important as software is written to use all CPU cores. At 1080p, you already get more frames than you need."

You are forgetting that when Bulldozer came out, we heard the same thing: 'games will soon be able to use 8 cores, and then FX will shine!' Here we are 5 years later, and while *some* games like GTA5 or DX:MD are multi-core friendly, Intel's single-thread performance still gives them the victory. The saving grace this time is that Ryzen's single-core performance is MUCH, MUCH closer to Intel's.

As for me, I'm gonna wait a couple months until the initial expected bugs are ironed out (I've seen some power management issues) before replacing the 9590 in my AMD box, but I will be buying an 8-core Ryzen at some point (probably the 1800X).
 


Vulkan and DX12 do allow for better multi-core use by not forcing the game engine to do nearly everything from a single shared context. They do put responsibility for many more things on the engine instead of the API. The barrier to entry goes up, but so does the potential performance.

Certain things still can't be done from more than one context, but many things now can. Just look at async shaders. Even if they are currently not that great, they open a potentially big door. I use async shaders as a general example of the kind of concurrency that has been added; there are many other examples.
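To make the shared-context point concrete, here's a loose, hypothetical analogy sketched in Python (real command-buffer recording happens in C/C++ against the graphics API; this only mimics the concurrency shape, and the names are made up):

```python
# DX12/Vulkan-era pattern, loosely: many threads record work in
# parallel, and one thread submits the finished buffers in order.
from concurrent.futures import ThreadPoolExecutor

def record_commands(chunk_id):
    # Stand-in for filling a command buffer for one chunk of the scene.
    return [f"draw(chunk={chunk_id}, call={i})" for i in range(3)]

with ThreadPoolExecutor(max_workers=4) as pool:
    # Recording scales across cores...
    buffers = list(pool.map(record_commands, range(8)))

# ...but submission still flows through a single queue, in order.
for buf in buffers:
    for cmd in buf:
        pass  # a real engine would do queue.submit(...) here
```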
 


You're not listening. As I said, Intel is better at imperceptible differences. Ryzen is better at preventing herky-jerky frames, especially if streaming. Tested and confirmed. See https://www.youtube.com/watch?v=O0fgy4rKWhk
 


Don't know if that's possible with an R7, but the R3 and some of the R5 models will have 4 cores, presumably all on the same CCX, so that should get rid of any such issues.

Once the R5 and R3 are released, it might help us understand how much of an impact the latency between CCXs has.
 
According to AMD's latest information, Ryzen's performance should be compared with that of a multi-socket system: Windows 10 treats it pretty much as such, and does recognize the SMT virtual threads.
However, it's been a long time since game makers had to deal with multi-socket enthusiast processors, not since the end of the K8 line and a few AMD FX processors that were actual dual-socket parts.
 


While in general I agree that Ryzen's 6- and 8-core processors may have a bit more staying power over the long run than the comparably-priced 4-core processors from Intel, claiming that 1080p is obsolete and no one with a new computer uses it is just nonsense. Going by Steam's latest hardware survey, at least 95% of Steam users have a display resolution of 1920x1200 or below. Only about 2% are at 2560x1440 or 3440x1440, and 0.69% are at 3840x2160. The install base of higher-resolution screens is growing, but they are still a relatively niche product. Considering one's graphics card has to push nearly twice as many pixels for 1440p, and four times as many for 4K, just to make the image look a bit sharper, it's not going to be for everyone, and 1080p will remain the most popular screen resolution for a while still.
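For concreteness, the pixel math (a trivial sketch):

```python
# Pixel counts of common resolutions, relative to 1080p.
base = 1920 * 1080
for name, (w, h) in {"1080p": (1920, 1080),
                     "1440p": (2560, 1440),
                     "4K":    (3840, 2160)}.items():
    px = w * h
    print(f"{name}: {px:,} pixels ({px / base:.2f}x 1080p)")
# 1440p works out to ~1.78x the pixels of 1080p; 4K is exactly 4x.
```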

Of course, as you mentioned, if someone is using a 60Hz screen, it doesn't actually matter whether that screen is 720p or 4K as far as processor usage is concerned. Sure, the CPU might be the limiting factor more often at 1080p or below, but that's only because framerates in general will be higher, and if your framerate remains above what your screen can display, it doesn't really matter. By the time CPU requirements rise enough to make a difference, it's likely that more games will be optimized for more than four threads and developed with Ryzen's hardware in mind.

I will say that the one place where this can make a notable difference is with high-refresh-rate monitors, though. If you're on a 144Hz screen, you'll actually be able to see those framerates in excess of 100fps with a high-end graphics card, and while the difference in framerates gets less noticeable the higher you go, someone might not want their CPU to be the limiting factor there. Just like higher-resolution screens, though, the install base of high-refresh-rate screens is still relatively small.
 
*WINDOWS SCHEDULER IS NOT BROKEN, PEOPLE*

PCPer has proven that. One of the issues does seem to be cross-core thread jumping, which adds latency; that perhaps SOUNDS like something the Windows scheduler should fix, but it's not so.

The Windows scheduler can identify which cores are LOGICAL and which are PHYSICAL, and assign only one thread per physical core where that makes sense.

However, the game software itself needs to be written in such a way that it can TELL Windows, in more detail, how to manage its threads.

In some games, running four PHYSICAL cores with no HT/SMT on the new Ryzen architecture gives the best results. As said, aside from SMT, some threads were jumping from one core to another with added latency; dropping to only FOUR cores reduced the jumping in some games so much that they gained over 10% FPS.

(How thread jumping, CCX load sharing, etc. work together is a complicated issue.)
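The logical/physical distinction the scheduler works with is visible from user space, too; for example, a quick check with Python's psutil (purely illustrative):

```python
import psutil

# On an 8C/16T Ryzen 7 this would print 16 and 8: the OS and the
# APIs beneath it can tell SMT siblings apart from physical cores.
print("logical CPUs:  ", psutil.cpu_count(logical=True))
print("physical cores:", psutil.cpu_count(logical=False))
```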
 
Windows GAME MODE:
I'm wondering if a simple PROFILE for each game could be added later, such as "only use physical cores 0,1,2,3" with no SMT, which for an 8C/16T Ryzen could fix some of the issues.

Probably, because I've seen this work before with "CPUCores", a program that attempts to optimize how cores are assigned.
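Something along those lines can already be approximated by hand. A hypothetical sketch with Python's psutil (this is not what CPUCores actually does internally; the sibling-numbering assumption must be verified on your own system, and the process name is made up):

```python
# Hypothetical "game profile": pin a process to its first four
# physical cores, skipping SMT siblings. Assumes logical CPUs are
# enumerated in sibling pairs (0,1 = core 0; 2,3 = core 1; ...),
# which is common on Windows but should be checked per system.
import psutil

def pin_to_physical_cores(pid, n_cores=4):
    logical = psutil.cpu_count(logical=True)
    physical = psutil.cpu_count(logical=False)
    if logical == 2 * physical:
        # One logical CPU per sibling pair: e.g. [0, 2, 4, 6].
        cpus = list(range(0, 2 * n_cores, 2))
    else:
        # No SMT: just the first n_cores CPUs.
        cpus = list(range(n_cores))
    psutil.Process(pid).cpu_affinity(cpus)
    return cpus

# Illustrative usage: pin every process named "game.exe".
for p in psutil.process_iter(["name"]):
    if p.info["name"] == "game.exe":
        print("pinned to", pin_to_physical_cores(p.pid))
```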
 


Pardon my 'paranoia', but since you joined the forum today and your arguments (and even your wording) are quite similar to another user's that has been repeatedly voted down, I'll personally assume you are either that same user, or that you joined just to support his views.
 