AMD Ryzen 5 1600X CPU Review

True - Wikipedia is hardly the only go-to for info, but I honestly found very few results anywhere when Googling DX12. One would think that any developers working on DX12 games - and dropping teasers - would mention it... after all, it seems some gamers actually base purchasing decisions on the API (I'm guessing 470/460 owners).

Either way, it certainly appears the DX12/Vulkan hype train has quieted down, especially since Nvidia's cards aren't always being beaten in DX12 titles. It's a good advantage for AMD, but hardly a slam dunk.

 


A beta for Mad Max running on Vulkan just came out. Just sayin'.
 

Well, that's something, I guess. However, I'm not sure that a beta Vulkan renderer for a 2015 game is anything to get excited about.

 


Considering it uses the very same engine as theHunter: Call of the Wild, which came out a couple of months ago, it's not bad.
 
@RCARON Regarding 'A great many people will be buying AMD for this reason.'

I'm one of those who needs more on the enterprise side (workstations) and rather less on the consumer side (home PCs). To me, system stability (overclocking isn't a priority) and power consumption are very important, considering I rely heavily on renewable energy sources to power my business. For my enterprise requirements (STEMM R&D machines), I have invested in hundreds of machines running Linux as the primary OS for design, modeling, simulation and development tasks. I also use many Mac OS X and Windows machines, but nothing on the scale of Linux.

Bottom line is that Ryzen CPUs appeal to me very much. I'm pondering axing the existing Intel machines in favor of AMD's new products in a phased manner (with a thorough plan). I'm fairly convinced I'm going with the Ryzen and Vega architectures in my upcoming workstation builds over the 2017-18 period, if my evaluations work out smoothly. I reckon I've had enough of the annual gouging by Intel and Nvidia.

PS: Now memory manufacturers are gouging us in the name of BS supply shortages.
 
Why on Earth does nobody ever consider the i7-7700?! Why keep pitting the Ryzen CPUs only against the i5-7600K and/or the i7-7700K?

The i7-7700 has the same clocks as the i5-7600K, but double the threads and 2 MB more L3. It consumes a lot less power than the i7-7700K and no more than the i5-7600K. You can see it as a more powerful i5-7600K or as a slightly less powerful (but far more efficient) i7-7700K.

If anyone is torn between the R5 1600 and the i5-7600K, then the i7-7700 is, quite ironically, the best choice.
 

Sure, we'll take it. Anything AMD can do to up their game benefits all of us - whether you're an AMD fan who wants a good product, or an Intel/Nvidia fan hoping that the competition will drive prices down.


 


Same boost clock, but the 7600K has a higher base clock. It's also unlocked, so if whatever you happen to be doing favors more MHz over more threads, it could easily be faster. The i5 plus a decent cooler is ~$275, which will likely get you within a stone's throw of 5 GHz. Admittedly, one would also need a Z-series motherboard, which we can count as another $25+ in platform cost. That brings the two solutions darn near price parity, and the question back to clocks versus threads. For gamers, at least at the moment, that choice is easy: the i5. For content creators, you're correct that the 7700 is probably the better bet, though I can't imagine why one wouldn't look long and hard at the 1600 in that case.
 
Are the Ryzen 5 1400 and 1500X (4-core CPUs) failed 1600/1600X parts (6-core CPUs), and can you unlock the disabled cores with extra voltage? I'm thinking of AMD's Sempron CPUs and the fact that you could unlock disabled cores. OK, it was hit or miss and stability was a bit of an issue, but if you were lucky... happy days.
 


Because the i7-7700 is multiplier-locked, so despite the price being closer it's vastly inferior to both Ryzen and the i5-7600K in real terms. Even clocking the i5's turbo to 4.6 GHz across two cores will destroy the 7700 in everything. Without an OC, the 7600K is still better due to its higher base clock, with multithreading not counting for much in games.
 


All current Ryzen CPUs are pretty much an 1800X with 2 or 4 cores disabled, and on the lowest R5 parts even some cache disabled (it's not yet known whether a whole CCX is disabled or 4 MB of cache per CCX). As far as re-enabling them goes, it might be possible, but no BIOS maker has released anything hinting that it is. Also, as the architecture is very different from K10/Phenom, the way the cores are disabled might simply not be something that can be worked around.
 


Yes, but without data you don't get to see the interesting trends - like the fact that low-resolution gaming tests do not predict future performance. That's something I have personally noticed, but this video did a pretty good job of showing the data.

https://www.youtube.com/watch?v=ylvdSnEbL50
 


Actually, it does. It shows where a CPU will stand once the CPU becomes the bottleneck. That is the whole purpose of benching at 1080p with lower settings.

If you are testing a GPU, do you want the GPU or the CPU to be the bottleneck in the system? You want the GPU to be the bottleneck, to see the maximum performance the GPU can push before it runs out of headroom. That's why you max the settings out. Even though the majority of gamers are still running 1080p, most high-end GPUs get benched at up to 4K maxed out to see where the GPU stands; if you bench at lower settings, the CPU can become the bottleneck and warp the results.
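To put rough, made-up numbers on it (mine, purely for illustration): if a card can render 160 fps at 1080p/medium, but CPU A can only feed it 120 fps while CPU B can feed 150 fps, the 1080p test exposes that gap; run the same card at 4K/ultra, where it tops out at around 60 fps, and both CPUs post identical numbers.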

It is not an end-all, be-all, but I always like to see where a CPU will be in a few years, and these tests have not been that far off. A 2600K was beating an FX-8150 pretty easily, and plenty of people still game on a 2600K today.
 

It really doesn't though. The biggest problem here is that we're testing a new architecture that none of these games were optimized for during their development. Games from recent years have been optimized to run well on Intel's Core series CPUs, and perhaps to a lesser extent on AMD's last generation of CPUs. So, these existing games will naturally more often see a performance advantage on that existing hardware. That doesn't mean that will be the case for future games though. Most big games coming out a year from now are likely to be tested on both Ryzen and Intel hardware, and many will be optimized for both. So Ryzen's performance relative to Intel's in games is likely to improve. That's not to say that they'll necessarily be outperforming Intel's chips anytime soon, but they're at least likely to close the performance gap to an extent. That's not something that will be reflected by simply running existing games at high frame rates.

There's also the core-count advantage, which doesn't really affect many games yet. The vast majority of today's games won't see any notable advantage from having access to more than four physical cores. That's not something that is likely to change overnight, but for someone who plans to keep their CPU for a few years, which applies to most people, it's something that's likely to come into play later on. I doubt we'll see many games making full use of the eight cores and sixteen threads of a Ryzen 7 anytime soon, but it's reasonable to assume that the additional cores and/or threads of the Ryzen 5 series may eventually make a difference compared to the four-core, non-hyperthreaded chips they are up against.

Now obviously, these are not things that can be accurately tested for in any meaningful way, so I'm not against reviews posting 1080p benchmarks with high-end graphics cards to avoid bottlenecks. Most people still game at 1080p or below, and some even have high refresh rate monitors that can display those additional frames. Having a bunch of graphs showing similar performance at high resolutions that most people don't use might not be particularly useful. It is probably worth pointing out that most people's monitors can't display more than 60fps though, and that high frame rates won't necessarily reflect performance in games that have had a chance to optimize for the new hardware.
 
I was beaten up by a few readers in Germany because we didn't use 720p benchmarks to show the CPU bottleneck even more clearly. But that resolution isn't real-world, 1080p is. It is more or less a real-world compromise, nothing else. :)
 
It seems that some game developers simply put in a check saying "if AMD CPU, then all cores are real"; those who went and detected Ryzen so they could send it down the same code path as Haswell got an instant 20% performance boost - there's your SMT bug.
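To make that concrete, here is a minimal sketch (my own illustration, not code from any actual game) of asking the OS for the real topology instead of branching on the CPU vendor string; the function name is made up, and it assumes Windows and its GetLogicalProcessorInformationEx API:

```cpp
// Minimal sketch: count the logical processors that share each physical core.
// If any core carries more than one, SMT/Hyper-Threading is active, whether
// the chip is a Ryzen or an Intel part.
#include <windows.h>
#include <vector>
#include <cstdio>

static bool smt_present()
{
    DWORD len = 0;
    GetLogicalProcessorInformationEx(RelationProcessorCore, nullptr, &len);
    std::vector<char> buf(len);
    auto* info = reinterpret_cast<SYSTEM_LOGICAL_PROCESSOR_INFORMATION_EX*>(buf.data());
    if (!GetLogicalProcessorInformationEx(RelationProcessorCore, info, &len))
        return false;

    for (DWORD off = 0; off < len; ) {
        // Each RelationProcessorCore record describes one physical core.
        auto* rec = reinterpret_cast<SYSTEM_LOGICAL_PROCESSOR_INFORMATION_EX*>(buf.data() + off);
        KAFFINITY mask = rec->Processor.GroupMask[0].Mask;
        int logical = 0;
        for (; mask; mask &= mask - 1) ++logical;   // count set bits
        if (logical > 1) return true;               // >1 thread per core = SMT
        off += rec->Size;
    }
    return false;
}

int main()
{
    std::printf("SMT detected: %s\n", smt_present() ? "yes" : "no");
}
```

A check like that costs a few dozen lines once, instead of a vendor-string guess that silently mis-schedules half the chip.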
 


What optimizations? Through the extensions? SSE? AVX? For the most part there are very few optimizations for games to take advantage of on the CPU, unless they plan to use the CPU for, say, physics and push it through AVX.
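Just to illustrate what that would look like (a toy sketch of my own, not from any shipping game), "pushing physics through AVX" is roughly this - one 256-bit instruction updating eight values at a time; compile with -mavx or /arch:AVX:

```cpp
// Toy position integrator: pos += vel * dt, eight floats per iteration.
#include <immintrin.h>
#include <cstdio>

void integrate(float* pos, const float* vel, float dt, int n)
{
    const __m256 step = _mm256_set1_ps(dt);
    int i = 0;
    for (; i + 8 <= n; i += 8) {
        __m256 p = _mm256_loadu_ps(pos + i);
        __m256 v = _mm256_loadu_ps(vel + i);
        p = _mm256_add_ps(p, _mm256_mul_ps(v, step));  // 8 lanes at once
        _mm256_storeu_ps(pos + i, p);
    }
    for (; i < n; ++i)          // scalar tail for the leftovers
        pos[i] += vel[i] * dt;
}

int main()
{
    float pos[10] = {0}, vel[10];
    for (int i = 0; i < 10; ++i) vel[i] = 1.0f + i;
    integrate(pos, vel, 0.016f, 10);
    std::printf("pos[9] = %.3f\n", pos[9]);
}
```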

My point was that the process is valid. It is a valid way to test the game. Neither you nor I have any idea how games will perform in the future. Remember all the hype behind DX12? Still waiting for that insane performance boost.

And the core-count advantage has been touted since the FX series. Still waiting to see a game where it makes a big enough difference.

The only reason to show 4K at max settings is if you are reviewing a GPU.

That actually brings up another angle people argue: driver optimizations. To be honest, there is very little in that for CPUs, unlike GPUs. I update my drivers pretty constantly and I have never seen a massive performance increase from installing the newest chipset drivers.

I think Ryzen will get a bit better, but I do not see it getting insanely better. That just isn't how it normally goes, and there is no way they will somehow optimize for the CPU and see massive gains.

I take that back: they will, IF they finally code a game to properly utilize multiple cores. Until then, gaming will remain as it is.
 


Yes, in theory what you say is correct; however, history shows us that low-resolution game testing does not translate to real-world future performance (future being the key here). In 2012 the 2600K was around 10% faster in low-resolution gaming than the FX-8350, but by 2017 it is almost 10% slower, so the tables have turned. You will see that the way tech sites are testing low-resolution gaming really is not giving the full story (and I'm not just saying this about AMD), except of course if the actual use case is low-resolution gaming today.

Battlefield 1 is a pretty good example of how a new, demanding game should be designed, and it scales well with more cores. What is interesting, and relevant to this point, is that Intel's 6900K starts pulling away from the pack in the 2K and 4K BF1 tests.

Actually, the AdoredTV video there was comparing the 8350 to the i5 2500K, not the i7 2600K. He claimed the 8350 started out 10% slower in games than the i5 and is now 10% faster. But he was citing aggregate benches from computerbase.de, whose testing methodology changed from test to test: different games were tested each time, a different GPU, a different OS (Win 7 to 8 Enterprise to 10), and even different CPUs, since they stopped testing certain models. So someone tried to replicate his findings:

https://www.youtube.com/watch?v=76-8-4qcpPo

Hardware Unboxed's results didn't match; they still showed the i5 2500K as being faster. So AdoredTV posted another video to defend his findings. Basically, his argument revolves around "outliers" and "edge cases" skewing the results. Then he admits the benches he cited from computerbase were themselves an outlier/edge case.

So he goes through again, singling out different outliers and edge cases, basically saying that anything which favors one over the other - resolution, settings, memory speeds, specific games - messes things up. In other words, anything that doesn't support his conclusions.

https://www.youtube.com/watch?v=gFBKFz9n2hc

It's actually pretty hilarious, especially since he starts off at the very beginning saying that you can make a video that shows anything by cherry picking your sources, and finishes by calling himself (and all other benchmarking sites) "entertainers."
 
What I really don't like about this review is:

"Rise of the Tomb Raider is also sensitive to IPC throughput and clock rate, which is why the Core i5s fare so well. Overclocking Core i5-7600K to 5 GHz only adds to Intel's dominance in this title."

This is just not true.
Nvidia's DX12 drivers are bugged - this is common knowledge - and they severely handicap Ryzen.
The results are meaningless.
 


Actually, the Nvidia drivers handicap Intel chips with more than four cores as well. The driver just doesn't scale well. I don't know if Nvidia is just being lazy or if there's a valid reason and they're focusing more on the mainstream (2-4 cores) than on niche markets (4+ cores).
 


Correct. We will see in a month or two whether they fix this before Vega hits the shelves, as they know there are a ton of 4+ core CPUs out there now that Vega will get tested on and compared to Nvidia cards. If Nvidia doesn't fix it by then, it may be something they are doing in software to compensate for a lack of hardware, which is possibly why we are hearing about Volta getting moved up. We shall see what happens.

 


But code is not as precise as you make it out to be. Here's a simple example. Prior to Ryzen, all AMD cores were real physical cores, even with Bulldozer and its successors. There were no "virtual" cores and hence no SMT. As a programmer, you could therefore skip any SMT-aware load balancing in your code and freely assign threads in sequence or at random (cores 1,2,3,4 or 1,4,2,7), whereas on an i7 you would put heavy threads on cores 1,3,5,7 and light threads on 2,4,6,8, because the latter set are not real cores. That shortcut creates a monster performance reduction on Ryzen, as some cores get completely bottlenecked while others are left with no work - not because Ryzen can't do the work, but because that was the preferred behavior for every other AMD chip. You'd have used it because it's one line of code, whereas proper feature enumeration and SMT checking would take a page or more and thus make your program slower.

That is a real example taken from the Ryzen AMA btw.
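For what it's worth, here's a rough sketch (my reconstruction of the pattern being described, not actual code from the AMA) of what that kind of one-line, hard-coded affinity shortcut looks like - this is the i7-flavored "heavy threads on every other logical CPU" variant, with the worker function and core numbering made up for illustration:

```cpp
#include <windows.h>
#include <thread>
#include <vector>

void heavy_work(int id)
{
    // The one-line shortcut: pin this heavy thread to logical CPU 2*id on the
    // assumption that 2*id+1 is only an SMT sibling. That holds on a
    // Hyper-Threaded i7, but if the assumption is wrong for the actual
    // topology, some cores bottleneck while others sit idle.
    SetThreadAffinityMask(GetCurrentThread(), DWORD_PTR(1) << (2 * id));
    // ... physics / AI / streaming work would run here ...
}

int main()
{
    std::vector<std::thread> pool;
    for (int i = 0; i < 4; ++i)     // four "heavy" workers
        pool.emplace_back(heavy_work, i);
    for (auto& t : pool) t.join();
}
```

Proper enumeration (like the topology check sketched earlier in the thread) replaces that guess, but as the post says, it's a page of code instead of one line.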

This is also the point in time when AMD is being benchmarked as the underdog. Nobody expected AMD chips to support modern technologies, so developers have disabled many, many of them, or simply implemented them only on the Intel side of the "#ifdef" code.



Then look at the raw texels being pushed in Ashes of the Singularity and you will realize that when someone with GTA money gets hold of it, it will do a huge amount. It's the only reason VR is even an option on current-gen consoles. You won't get that insane performance boost until DX11 cards are hopelessly outdated... and Windows 7 needs to go for that to be possible.



Speak for yourself - many people have Ultra 4K plans, or VR, which can and does push just as many pixels in some setups.


Chipset drivers are not CPU drivers. CPU "drivers" come in the form of microcode updates, which are packaged inside your BIOS updates. Look at the BIOS changelogs next time: where you see "support added for X new chip", that BIOS contains a microcode update, which MAY then need an updated chipset driver version to complete the chain. Chipset drivers improve reliability and add support for advanced commands and interfaces - new chips, bus configurations and microcode optimizations. They do not provide performance increases.


Even a fundamental analysis makes me disagree strongly. Ryzen's "slow" CCX-to-CCX interconnect is locked to RAM speed, so you actually get an in-CPU performance boost whenever that update for 3200 MHz RAM arrives. This also means the multi-core scaling penalty will shrink simply as RAM speed goes up, because the interconnect is clocked off the memory clock rather than the core clock - a crazy idea, but ingenious. Most of all, this is a 1.0. Not even a 1.0: Ry+Zen, aka Rise of Zen, not Zen. It's the equivalent of Core 2 Duo in terms of where AMD is on the platform maturity curve. That makes Nehalem next, then Sandy Bridge, all while Intel is eking out single-digit IPC gains per release at the end of theirs.
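Rough arithmetic on that point (my numbers, assuming first-generation Infinity Fabric runs at the memory clock, i.e. half the DDR4 data rate): DDR4-2133 gives a roughly 1066 MHz fabric clock, while DDR4-3200 gives 1600 MHz - about 50% more interconnect clock without touching the CPU cores at all.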

This makes me laugh - as if you could play a 2017 game on a single-core processor today. Just because they all code for quad-core now doesn't mean it will stay that way.

 
Agree with every bit, +1.

The very last response about games using multiple cores needs a touch more. There are games today that use all the threads on Ryzen 8-core CPUs and Intel's HEDT CPUs - Battlefield 1, for example. Bethesda/id Software came out yesterday with a video saying they are optimizing new games to use all available threads as well. The real takeaway with today's games is that the AAA titles that tend to need a lot of CPU resources have already moved, or are moving, to a much more multi-threaded approach.
 