Ryzen Versus Core i7 In 11 Popular Games



People could also try reading multiple reviews instead of expecting one reviewer to waste days, weeks, or even months trying to cover every possible scenario in a single product review. Some reviewers actually make a point of avoiding the same games/scenarios everyone else is using. Some might even go that extra mile to test different system settings to show the effects.

As is usual with reviews, the fanboys always want to see a review that's heavily biased towards their preferred company. If that doesn't happen, they accuse the reviewer of being paid by someone.
 


Do you work in AMD's Marketing Department? The vast majority of the gaming community is still gaming at resolutions below 1440p..... That's not going to change any time soon.

1080p is not an "obsolete resolution"....lol. It's only recently become a mainstream resolution.

This has got to be the funniest post I've read lately.

The only part of your post I agree with (and coincidentally, the only part that's really accurate) is that it's a myth that Ryzen 7 isn't a good gaming CPU. Considering it's essentially a workstation CPU, it handles gaming just fine. Considering the majority of the gaming community is still using 60Hz, 1080p or lower screens, having a minimum framerate of 60 FPS is far more important than having a maximum framerate in excess of 100 FPS.

1440p will not replace 1080p until those displays reach similar pricing levels. Currently, you can find 1080p displays for $90+. I have yet to see a 1440p display anywhere near $100 that isn't refurbished or used...
 

That's true only if you're ignoring lower FPS and only looking at the average. You really want the framerate to stay at or above 60 FPS if you are using vsync; otherwise you will get stutters as it hovers below and up to 60 FPS. Unfortunately, dropping in-game details and resolution will not help the FPS if you have a slow CPU. The Intel i5 is still the go-to chip for budget gamers. Unless the AMD quad-core models can best my Ivy Bridge, I won't be upgrading any time soon.
 
OK, first off we see tests like Deus Ex: Mankind Divided, where there is no significant difference in the top three benches.

Then we have things like the Civilization VI graphics test.

Then we have things like this Battlefield 1 result, where the difference between the 6900K and 1800X is 3.8%... hmmm, what's Tom's margin of error in testing? 3% is pretty damned good. I'll be damned if I can get 4% at my bench.

What I am seeing here is either noise (error margins) or a SEVERELY software compromised CHIPSET. When was the last BIG chipset change for AMD? These tests are like AMD vs. Intel in a football game where the rules were written by team Intel.
Then there's MITCH074's point, which dovetails nicely here.

And I don't want any fanboy flames, chuckleheads. I'm way too old for it, and I run a 6950X at 4.5GHz (CPU-Z confirmed) with DDR4-3200 at 15-16-16-33 1T, two 1080s in SLI, and all games at 4K. I have not purchased Ryzen since the ecosystem for it doesn't exist.

To make a long story short, if things don't change in 6 months I'll buy this analysis, but all I'm seeing here ain't worth the time it took to do. Why not focus on some of the deeper software issues causing difficulties, so that those of us who come here for GOOD data can have a clue as to tuning around these difficulties until patches are released... Oh yeah, this is Tom's (Intel Inside) Hardware. I don't think it was until Tom's P4s caused a core breach and a small nuclear meltdown that they finally kinda sorta said AMD Athlons were decent for gaming. Oh well, live and learn, they say.
 
Oh, and one more thang. I ALWAYS set high performance in Winblows when I game. ALWAYS. I also crank up my GPU OCs when I game. That's why I have profiles. The author's disparagement of workarounds for software glitches is truly funny. New platform, no ecosystem, the chip manufacturer suggests that you don't use the glitchy stuff until things are patched, and the author has the gall to say we shouldn't have to do this? Really? I have specific tunings for every F'ing game I play... and PROFILES, including BIOS profiles... wow, imagine that. I guess the motherboard manufacturers ought to get rid of that whole BIOS profile saving thing, because this author says one size should fit all. Bleh.
 
Sorry for the last post, Paul; I actually found your USEFUL article on this CPU, etc. Was it screaming fanboys wanting plethoras of benches that produced this article? I see no real sense in it otherwise.
 
What would the chances be of adding an old Phenom II 965 or 1090T to the lineup? I have a feeling plenty of people skipped Bulldozer and couldn't afford Intel.
 


As of 2016, Intel's compiler and, more importantly, its various libraries still cripple functions on any CPU without a GenuineIntel ID.

http://www.agner.org/optimize/blog/read.php?i=49&v=f

Agner's done a great job of documenting the nuances and analyzing the various code involved; he also produces some software optimization guides for people to use. This is just something the industry has come to accept: any code that goes near an Intel-produced product will only function in its entirety on an Intel-branded CPU.
 
I've only read about five random entries on that list and *IF* what Agner Fog is saying there is true,

Wow... just wow...
 


I don't know who you're talking about. I can guarantee you that I never posted before yesterday on this forum under any name. I've been reviewing products on Amazon as "Zen Gamer" for years because that's how I feel when I game, not anything to do with AMD, which hadn't even chosen the name back then (as far as I'm aware).

Regarding my other points, others have seen what I have. Intel CPUs drop more frames. Ryzen gives you smoother gameplay because the CPU isn't at 100% all the time, unable to keep up. See: https://www.youtube.com/watch?v=O0fgy4rKWhk
 
Simply being an i5 alternative isn't the worst first step forward. A lot of gamers use those and get by just fine.

In fact, most would be better off getting an i5 if it meant putting more $$ into their GPU.
 
One of the reasons Ryzen's game performance is still lagging in this Tom's Hardware review is that you test all games in DX12 mode. It's especially the DX12 mode of games that is not running well on Ryzen; being a low-level API, it will need optimization to run well on Ryzen.
I already posted about this on the TR forum. ComputerBase tested all games in DX11 and DX12. The difference is noticeable.

https://techreport.com/forums/viewtopic.php?f=2&t=119280

Ryzen easily loses 20-30% performance in DX12 in many games.
 
Since I tend to buy a new CPU/board every 4-5 years nowadays (am currently on a Haswell i7, as Intel has done little more than up the clock a notch and slap a new Core name on it since... forever), I am more interested in the long-term outlook than in current AAA games (aka tomorrow's old news).

In that respect I am positive that Ryzen, with its 8 cores, is going to trash anything Intel has in the consumer market right now in a year or more on new titles, and that is what counts for everyone who does not shell out a few thousand euros every year on the newest shiny toy.

Also, when I hear people comparing prices and stating the i5 is supposedly cheaper than the Ryzen 1700... you would do well to include the motherboards.

You get a high-end gaming board for AMD for the price of a lower-midrange board on Intel. Take any quality gaming board with equal features for both platforms as a base and you will be hard pressed to find a Core i3 for what you would have left in your budget if the watermark is a combo of board + Ryzen 1700, let alone an i5 or i7.
 
"Ryzen easily looses 20-30% performance in DX12 in many games"

If that's true, that's quite an odd situation for AMD, since their own GPUs are the ones that benefit the most from DX12 vs DX11.

So once again, it's better to have an Nvidia GPU on an AMD platform, lol. With Bulldozer, its weak single-thread performance could be partially rectified by running an Nvidia GPU, because Nvidia's drivers have always been better at multithreading DX11 and below, as well as using less CPU overall, leaving shoddy/unoptimized game engines more of the CPU to use.
 


It's teething pains for a new uarch, that's all. Everyone just expects computers to "magically work," but there is a shit ton of work that goes into tuning and optimizing timings and I/O. Windows itself has a schizophrenic scheduler that needs to be coerced into not making performance-impairing decisions; furthermore, games and various engines need careful adjustments to align their activities with the processor being used. Intel's CPU uarch hasn't changed in a long time, so we're seeing the results of that fine tuning, while AMD's has undergone two major revisions in recent times. The Bulldozer uarch was radically different from previous-generation uarchs and thus needed different scheduling and timing; Ryzen is dramatically different from Bulldozer and again needs different scheduling and timing. What's likely happening is that code sees Ryzen as an unknown newer AMD CPU and defaults to Bulldozer-specific timing optimizations, which don't work so well with Ryzen.

Personally I think AMD kinda screwed up making Ryzen an "eight-core" chip originally. It's not a true eight-core; it's two separate four-core CPUs on the same die communicating with each other over a specialized protocol. It acts far more like a dual-socket NUMA architecture than what we're used to seeing. AMD should have released a four-core Ryzen CPU that was just a single CCX, which would have avoided all these performance optimization requirements.

Another thing to note is that the interconnect fabric between the CCXs runs on the same clock as, and shares bandwidth with, the memory bus. Thus raising the memory bus bandwidth also raises the bandwidth between the two processor nodes, which reduces the penalty paid when a thread migrates from one CCX to another. So running higher-speed memory has a significant impact on performance; of course, motherboards are still working out the kinks on the memory bus, so it'll be a while before we see the true performance capability of the CPU uarch. This is VERY NUMA-like behavior and a reason platforms like Unix have thread homing and other options to prevent threads from crossing processor boundaries.
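
For anyone wanting to experiment with "thread homing" on Windows rather than wait for scheduler updates, here is a minimal sketch. The affinity mask is an assumption that the first CCX maps to logical processors 0-7 with SMT enabled; the real mapping should be queried with GetLogicalProcessorInformationEx rather than hard-coded.

```cpp
// Minimal sketch: pin the calling thread to one CCX on Windows.
// ASSUMPTION: logical processors 0-7 belong to the first CCX (SMT on).
// Query GetLogicalProcessorInformationEx before hard-coding this for real.
#include <windows.h>
#include <cstdio>

int main() {
    const DWORD_PTR ccx0Mask = 0xFF;  // logical CPUs 0-7 (assumed CCX 0)

    DWORD_PTR previous = SetThreadAffinityMask(GetCurrentThread(), ccx0Mask);
    if (previous == 0) {
        std::printf("SetThreadAffinityMask failed: %lu\n", GetLastError());
        return 1;
    }

    // From here on the scheduler will not migrate this thread to the other
    // CCX, so its working set stays in the local L3 slice and no cross-CCX
    // fabric hop is paid on cache hits.
    std::printf("Thread pinned, previous mask was 0x%llx\n",
                (unsigned long long)previous);
    return 0;
}
```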

So everyone hold your horses; it'll be at least three months before we see realistic performance from this new design. Motherboard manufacturers need to get their BIOSes fixed, and OS/platform designers need to update their code to recognize Ryzen's uarch and try not to move work across its boundaries.
 
I see a solid win for Intel here. After a bit of soul searching, AMD was unable to hit Intel's IPC figures, so it got close, widened the core count, and brought it out against the aging X99 platform. There is nothing wrong with that, other than the fact that X99's only advantage is PCIe lane count. Everyone knows that it is old and slow per core. The only reason I would build X99 is for multiple GPUs stacked up with NVMe drives and multicore-optimized workloads (or bragging rights). If all I am going to do is a single GPU and no NVMe, Kaby Lake is still, dollar for dollar, the more powerful and cost-effective choice in non-optimized workloads (which, by the way, are the norm, not the exception). That includes both Broadwell-E and Ryzen. If I am going after multi-GPU, and really if I am going to throw an NVMe drive in my rig, then Broadwell-E still makes performance sense even though its price is higher.

X99 has gotten way too long in the tooth, and AMD is exploiting that fact. But then again, all Intel has to do here is bring forward a mainstream 6-core at the Ryzen 1800 price (actually increasing their high-end mainstream price by almost $150) and Ryzen is pretty well contained due to IPC. Oh, that sounds a lot like Coffee Lake, due out this year. Prior to launch I thought Intel would have to open up some sweet R&D early. Now it just looks like PCs picked up two to four cores that the software still doesn't know what to do with. Too bad massively parallel thread scheduling quickly becomes more process-intensive than the workload itself, or all we would have to do is widen core count forever. Maybe they can tack a quantum task scheduler in there somewhere... (Hey, I may have actually found a mainstream purpose for quantum computers. Well, that and encryption...) Bragging rights, anyone? Break out the liquid helium!

But I digress. Why am I paying $150 more for four additional cores that still underperform against a four-core model in a gaming rig, and that lack the PCIe lanes to build a powerhouse workstation rig for intensive business workloads at $500 less?
 
My bet is on the 1600X: same clock speeds as the 1800X, so single-threaded performance should be equivalent, but enough cores and threads to still outgun the 7600K in multi-threaded apps. And for half the price of the 1800X.
 

The 1800X is their flagship. It would be pretty bold to assume that IPC is going to improve by removing cores, especially considering that the core design in the 1800X is more like two four-core processors on a single die. Even at $250, the 7600K is priced right now at your target price. If AMD had brought this out two years ago, they would be in the performance game. But I just don't see the value unless you think that core count is more important than performance. The i5 gives the i7s a run for their money. And it really doesn't make sense if you are willing to spend a hundred bucks more and go with the i7.

 
Hmmm... https://software.intel.com/en-us/articles/optimization-notice#opt-en

"Certain optimisations not specific to Intel microarchitectures are reserved for Intel microprocessors."
So right in their "optimisation notice" ("optimisation disclaimer?") they state that optimisations that can work on other uarches and not just Intel's are actually reserved for Intel uarches.

Judging by the revision number, it appears that this notice was issued on 2011/08/04. Perhaps they have a more recent optimisation notice that says otherwise, but this is what I got when I was on the page "Intel C++ Compilers" ( https://software.intel.com/en-us/c-compilers ): "For more complete information about compiler optimizations, see our Optimization Notice."

However, I think that any further discussion on this topic is pointless without someone compiling some tests, running them through Intel's C++ compiler (and hopefully a few others) and then comparing the various EXEs on Intel and non-Intel platforms. Perhaps the game devs "just" need to switch compilers... (although, to be honest, I highly doubt that a performance improvement would be that simple.) Would anyone here be open to running such tests on their Intel and/or non-Intel machines? I would be open to running the tests on a Windows box running a 6700 (non-K), provided that the source is available so that I could check it out.
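
To make that proposal concrete, here is a hypothetical micro-benchmark of the sort that could be built from identical source with ICC, MSVC, and GCC and then run on Intel and non-Intel boxes; the kernel and sizes are arbitrary, chosen only because a simple multiply-add loop is an obvious auto-vectorization (SSE/AVX/FMA) candidate.

```cpp
// Hypothetical compiler-comparison micro-benchmark: build this same source
// with several compilers, run each binary on an Intel and a non-Intel CPU,
// and compare the timings. The workload itself is arbitrary.
#include <chrono>
#include <cstdio>
#include <vector>

int main() {
    const size_t n = 1 << 24;
    std::vector<float> a(n, 1.0f), b(n, 2.0f), c(n, 0.0f);

    auto start = std::chrono::steady_clock::now();
    for (int pass = 0; pass < 50; ++pass) {
        for (size_t i = 0; i < n; ++i)
            c[i] = a[i] * b[i] + c[i];   // FMA / SSE candidate loop
    }
    auto stop = std::chrono::steady_clock::now();

    double ms = std::chrono::duration<double, std::milli>(stop - start).count();
    std::printf("%.1f ms, checksum %.1f\n", ms, (double)c[0]);
    return 0;
}
```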
 


That disclaimer was something Intel was forced to put up after the last lawsuit with AMD, the one they had to pay $1bn for. Prior to that, they were marketing their compiler as the greatest thing ever because it would automatically profile the CPU's extension flags and then choose the correct code path. ICC actually compiles several code paths into its executable, and the dispatcher determines which code path to use at run time. This way you can compile your code to run on the maximum number of platforms without losing out on performance gains from newer instruction features. What Intel didn't tell people was that its compiler completely ignored processor flags and just checked the vendor ID: if the ID was GenuineIntel, it would then check flags and use the optimized code paths; if it was anything other than GenuineIntel, it would run on the slowest path possible. Because they weren't telling anyone this while simultaneously marketing their compiler's ability to read instruction flags and automatically determine appropriate code paths, everyone just assumed it was processor agnostic.
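
For illustration, here is a minimal sketch (assuming GCC or Clang on x86) of the vendor-ID read being described. CPUID leaf 0 returns the 12-byte vendor string ("GenuineIntel" or "AuthenticAMD"); the point of contention is dispatching on that string instead of on the actual feature flags (leaf 1 ECX/EDX, leaf 7 EBX).

```cpp
// Sketch of the vendor check being discussed (GCC/Clang on x86 assumed).
// This is not Intel's dispatcher code, just the CPUID query it relies on.
#include <cpuid.h>
#include <cstring>
#include <cstdio>

int main() {
    unsigned int eax = 0, ebx = 0, ecx = 0, edx = 0;
    if (!__get_cpuid(0, &eax, &ebx, &ecx, &edx)) return 1;

    char vendor[13];
    std::memcpy(vendor + 0, &ebx, 4);   // register order is EBX, EDX, ECX
    std::memcpy(vendor + 4, &edx, 4);
    std::memcpy(vendor + 8, &ecx, 4);
    vendor[12] = '\0';

    std::printf("Vendor ID: %s\n", vendor);
    // Branching on this string (what the post describes) versus branching
    // on the reported feature bits is exactly the difference at issue.
    return 0;
}
```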

The real-world impact of this isn't so much in program executables, since most of those are built with MSVC on Windows, but in the various libraries included in the application. Much of the code ships in the DLLs bundled with various products, and ICC is really popular for those because it does produce the fastest code, and you want to optimize linked functions that you're constantly using. The "optimizations" are really just choosing things like SSE4 over SSE2, or FMA over SSE4, for specific functions. Hell, the older ICC would generate code that would only use SSE on Intel CPUs while everyone else had to use really old FPU calls, and that older code was then used inside several very popular benchmark programs that were then used to show how amazing the latest generation of Intel CPU was.
 
Dumb review, for games written with the Intel developer kit. None of the games used the Ryzen developer kit, because it was not ready at the time of programming. This is clearly a biased review leaning towards Intel. Curious to know how much Intel is paying review sites for bad Ryzen reviews?
 


Yeah, I should have been clearer in my answer: if you're still using a single rendering queue, as most current engines do, Vulkan will not help you make better use of your CPU (because the rendering queue is the main point where Vulkan or DX12 can impact CPU use, both in draw call numbers and in scene assembly complexity); if you DO create several rendering queues, CPU use will be spread across several cores. However, things like physics computation and AI can bog down a CPU and should, therefore, be multithreaded before we look at rendering queues; performance won't improve if your game has multiple rendering queues but your physics engine is bogging down a single core...
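
As a rough illustration of that last point, here is a minimal sketch (plain C++ threads, no real Vulkan or DX12 calls, a trivial stand-in for "physics") of fanning simulation work out to worker threads each frame while a single thread keeps ownership of the rendering queue.

```cpp
// Illustrative only: spread per-frame simulation across worker threads while
// command recording/submission stays on one thread (the common
// single-rendering-queue pattern). No graphics API is actually used here.
#include <algorithm>
#include <cstdio>
#include <functional>
#include <thread>
#include <vector>

struct Body { float pos = 0.0f, vel = 1.0f; };

void integrate(std::vector<Body>& bodies, size_t begin, size_t end, float dt) {
    for (size_t i = begin; i < end; ++i)
        bodies[i].pos += bodies[i].vel * dt;   // trivial stand-in for physics
}

int main() {
    std::vector<Body> bodies(100000);
    const unsigned workers = std::max(1u, std::thread::hardware_concurrency());
    const float dt = 1.0f / 60.0f;

    for (int frame = 0; frame < 3; ++frame) {
        // Physics fanned out across all available cores...
        std::vector<std::thread> pool;
        const size_t chunk = bodies.size() / workers;
        for (unsigned w = 0; w < workers; ++w) {
            size_t begin = w * chunk;
            size_t end = (w + 1 == workers) ? bodies.size() : begin + chunk;
            pool.emplace_back(integrate, std::ref(bodies), begin, end, dt);
        }
        for (auto& t : pool) t.join();

        // ...while "rendering" (placeholder) stays on this one thread.
        std::printf("frame %d: body[0].pos = %.3f\n", frame, bodies[0].pos);
    }
    return 0;
}
```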
 
People who are saying frames over 60 do not matter: please go get a 144Hz monitor and see the difference. When I'm playing PvP, 144 frames gives me a noticeable advantage over someone with a 60Hz monitor. If the games you are playing are mostly single-player/PvE, then I guess it does not matter.
 


Placebo effect is real.

60Hz is one frame every 16.666ms, which is faster than the human mind can react. Going further, 120Hz is one frame every 8.333ms; 8ms is far below the threshold where a human mind can actually react. A higher refresh rate does absolutely nothing for you or any other human being on the planet. The most you'll get out of it is slightly smoother video while processing black -> white -> black transitions. Your reactions depend more on anticipating the action and starting the mental processing before it's even happened.

A note on how the human eye works: it doesn't "see" frames or have a refresh rate. The cones and rods simply sense photon impacts and send electrical pulses back to the brain, where they are interpreted and an image is synthesized out of them. The upper limit on the brain's synthesis ability is right around 20 individually distinguishable images per second. Of course, since the eyeball and brain are analogue, they can distinguish sharp changes in light patterns even if it's not a full image or it's blurred, and this is why we want higher image rates in scenes with dramatic lighting effects. So a film that's a bunch of long-range nature scenery with natural lighting would be fine at 20fps, while more complex movies would need closer to 30fps for a natural feel, and even faster scenes (action/sports) would be better with 40+ fps.

Anyhow, as someone who owns a 1440p 144Hz monitor, I can tell you that the extra refresh rate is undetectable outside of some extreme situations. It's all placebo.
 