Discussion AMD Ryzen MegaThread! FAQ and Resources


salgado18

Distinguished
Feb 12, 2007
966
426
19,370


Tom's Hardware review tested without SMT, and performance actually improved:

The Zen architecture is AMD's first with simultaneous multi-threading, so we also tested the Ryzen 7 1800X with SMT disabled to flesh out any performance deltas attributable to this feature. Indeed, we observed higher performance with SMT turned off in some titles.



Lisa Su also admitted there's a problem with latency, but right now SMT is also a problem in games, for the reasons gamerk and I stated.
 


Look at it this way: What group is most likely to purchase an AMD CPU?

The answer: Gamers.

That pretty much explains why AMD wanted gaming results to show the CPU in the best possible light.

At the end of the day, OEMs are going to favor Intel due to name recognition (sales), AMD is all but absent in mobile (though Intel isn't doing much better there), and servers, while high margin, are locked down by Intel. Gamers are the primary market for AMD right now, so poor gaming performance especially hurts Ryzen sales.

Over time, this can change. But any performance that falls short of what investors expect could easily cause AMD's stock to nosedive just as fast as it has risen over the past year. And that's an issue for AMD.

EDIT

And as a disclaimer/reminder: I'm a Software Engineer, not a CPU design expert. I've always tried to leave the low-level details to others, while pointing out the software-side issues with those designs.
 
Despite my mild disappointment in the gaming results, imho this is not Faildozer pt. 2. It hasn't required narrowly cherry-picked benchmarks to show there are common-use applications in which Ryzen performs extremely well. Furthermore, much unlike Faildozer, there is no area where it truly sucks. It is much improved over past AMD offerings, across the board, including power consumption (i.e. they haven't pulled another FX-9xxx fiasco). People say it doesn't overclock well; I say for once, chips were released labeled more accurately as to their true clock speeds. Finally, gamers, who make up a tiny percentage of overall computer buyers, are not the target market for this chip. For everyone else, it looks like an excellent choice, certainly competitive, especially for the price.
 

Phuntasm

Distinguished
Aug 12, 2014
31
0
18,530
So does anyone know what actual problems Ryzen is running into? Let's excuse the performance in non-multi-threaded games, but what about, say, Watch Dogs 2, which the 6900K tops while Ryzen still can't beat the 7700K? I haven't heard much substantive info from AMD on potential performance caveats. I wish they would at least speak up on it beyond "Trust us, it'll be great. Trust us."
 


And that doesn't contradict gamerk's take or mine on this (SMT). Just keep in mind, as Tom's showed, the gains from the patches won't magically make the 1800X better than the 7700K. I will keep saying (siding with gamerk's older comments) that it's all about speed for the time being. Unfortunately, AMD needs those extra hertz until games that can go wider actually do so.

Plus, in the Reddit AMA they acknowledged something odd going on with the IMC latency. I didn't think it would be fixed as part of Zen2 (Zen+?) though. Where did you read that, Juan?

EDIT: Adding a bit more -> There might be games that use low-level implementations to handle threads, or finer-grained CPU-detection mechanisms that aren't part of libraries shared across different games. That is to say, some specific game engines might have strict CPU-recognition logic that defaults to a "safe" codepath if the CPU is not recognized. It's not my cup of tea as a programming philosophy, but it's a quick way to gain performance on certain uArchs instead of building monster logic to fit them all.
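A toy sketch of that "safe codepath" pattern, assuming a lookup-table style of dispatch. The vendor strings are real CPUID vendor IDs, but the dispatch table, family numbers chosen, and codepath names are invented purely for illustration:

```python
# Hypothetical CPU-dispatch table an engine might ship with. Everything here
# is illustrative; real engines use their own detection logic.
KNOWN_FAST_PATHS = {
    # (vendor, family) pairs the engine was explicitly tuned for
    ("GenuineIntel", 6): "intel_optimized",
    ("AuthenticAMD", 21): "bulldozer_optimized",  # FX = family 15h (21 decimal)
}

def pick_codepath(vendor: str, family: int) -> str:
    """Return the tuned codepath for a recognized CPU, else a safe default."""
    return KNOWN_FAST_PATHS.get((vendor, family), "generic_safe")

# Zen reports family 17h (23 decimal): an engine shipped before Ryzen's
# launch would not recognize it and silently falls back to the slow path.
print(pick_codepath("AuthenticAMD", 23))   # -> generic_safe
print(pick_codepath("GenuineIntel", 6))    # -> intel_optimized
```

This is exactly the failure mode described above: the default branch is correct everywhere but fast nowhere, so an unrecognized new uArch pays the penalty until the table is updated.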

Cheers!
 


Besides the few public statements from AMD and game developers that Jaymc pointed out, without being a CPU engineer and/or someone who works directly at AMD, none of us will know exactly what's going on under that integrated heat spreader.

We can speculate about what the issues are (SMT, BIOS, memory controller, OS, game optimization, etc.), but only AMD really knows.
 

juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790


I joined just before Steamroller launch. I only know about Bulldozer launch from what more veteran users write in forums.



Keywords in Tom's quote: "in some titles".

This is the same situation found in the GamersNexus article mentioned above. Ryzen with SMT disabled was faster in some titles, slower in others, and performed the same in the rest:

In fact, disabling SMT reveals that performance is largely the same in Watch Dogs 2 – though we’ll see a decrease in other games – netting an output of ~85FPS AVG versus the original 84.3FPS AVG.

[...]

Disabling SMT on the R7 1800X results in performance that boosts to 134.5FPS AVG, up from 132FPS AVG. Because GamersNexus also measures frametimes, we benefit from the knowledge that disabling SMT further increases 1% low and 0.1% low metrics by upwards of ~30FPS (around 30% better 1% low values with SMT off).

[...]

Disabling SMT in Ashes of the Singularity results in a performance decrease of roughly 1-2FPS AVG, and so is insignificant overall. This game seems to get some slight benefit out of SMT.

[...]

For GTA V, the AMD R7 1800X Stock (SMT0) operates at around 127FPS AVG, leading the 1800X Stock (SMT1) at ~125FPS AVG. This is repeatable and is not within “margin of error.”

The AMD R7 1800X with SMT enabled (stock) operates at 124FPS AVG, or about 3FPS behind when SMT is disabled. Frametime performance also decreases with SMT, dropping from 83FPS 0.1% low to ~62FPS.

[...]

Total Warhammer shows the biggest change in performance when disabling SMT. The AMD R7 1800X moves from ~127FPS AVG (1% low: 90, 0.1% low: 65.7) to ~153FPS AVG with SMT0. That’s an increase in performance of 20.5% by disabling AMD’s most advertised property.
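As a quick arithmetic check on the quoted excerpts, the deltas can be reproduced from the FPS figures above (a minimal sketch; the numbers are the ones GamersNexus reports, nothing else is assumed):

```python
def pct_change(before: float, after: float) -> float:
    """Percentage change going from `before` to `after`."""
    return (after / before - 1.0) * 100.0

# Total Warhammer, SMT on -> SMT off (quoted above as a 20.5% increase)
warhammer = pct_change(127, 153)
print(round(warhammer, 1))   # ~20.5

# GTA V, SMT on (~125 AVG) vs SMT off (~127 AVG): small but repeatable
gta5 = pct_change(125, 127)
print(round(gta5, 1))        # ~1.6
```

So the Total Warhammer figure is by far the outlier; most of the quoted gaps are in the low single digits.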

The reason for the bad gaming performance is what CanardPC and I wrote before launch:

https://twitter.com/CPCHardware/status/836346777267761155?p=p

And reviews have confirmed it. AMD and a pair of partners can say "trololo", just like when Intel tried to convince us that Devil's Canyon got 5.5GHz on air. I knew water cooling was used, and sites even mentioned me as the source then:

However, as the user Juanrga of SemiAccurate pointed out to me, this frequency was achieved through the use of liquid cooling, as Intel itself has indirectly confirmed: the press service of the Santa Clara house has not been able to provide any screenshots demonstrating the use of air cooling to achieve 5.5 GHz.

http://www.bitsandchips.it/9-hardware/4480-cpu-devil-s-canyon-a-5-5-ghz-ad-aria

I don't have anything more to say.
 
@jdwii, I think it's important to differentiate the situation between Ryzen and Bulldozer.

Ryzen has (easily demonstrable) *respectable single thread performance* and *comparable FP throughput* to Intel.

Its gaming performance isn't where many would like it to be, but it's hardly horrendous (unlike a number of titles on an FX). The issue with the FX 8-core part is that game developers *had* to make the game very, very well threaded (without overloading any specific thread) just to match Intel. That did in fact happen in a number of titles (you'll note there are a few cases where the FX 8370 at stock is faster than Broadwell-E as well as Ryzen; those cases are just few and far between), but that was down to some very clever coding to get around the bottleneck.

In the case of Ryzen we have a generally strong CPU: it should cope better with more single-thread-dependent games, and it has a full 8 cores with all their resources. OK, it's behind Intel in some titles at lower resolutions, but we aren't talking massive margins. I think for gaming the lower-core-count parts are going to offer a real performance advantage over Intel for the money, even if at the very top end Intel still has the best gaming CPU. The fact that you have a platform that can offer strong (if not the outright best) gaming performance as well as top-tier performance in heavy-duty tasks is quite compelling, especially when you are only paying mainstream prices for the platform itself.

As for whether Ryzen gaming performance can improve through BIOS tweaks, game updates, or the general trend in gaming toward more cores, or whether it will "stay the same": I think all are true depending on the titles you pick.

If we look at the Vulkan results in Doom (notably missing from a number of reviews, though there are a few scores), you'll note Ryzen matches the best Intel parts, so it stands to reason that it is possible to write software in a way that uses Ryzen's resources to their full potential. That said, older titles aren't going to see such updates, and updates to other games will be heavily dependent on the engine used (so Bethesda stuff with Vulkan support should be good, just like how the engine in the Battlefield games scales well on FX and now also on Ryzen). Still, our "worst case" numbers aren't especially bad this time around.

When it comes to the memory latency, I can envisage this improving at least somewhat with BIOS tweaks, either through leveraging higher-clocked DDR4 kits or through improvements to timings. That isn't likely to be universal, so it will depend on deeper reviewing across a range of platforms / memory kits / setups for people to find the best option. I don't expect a magic "20% Ryzen boost BIOS" from AMD that will lift all board/CPU combos.

Where the question about how many threads games need comes in: you cannot ignore the argument that thread count in games increases (slowly) over time. I've been around PC gaming since the 486 days. Single core was enough; you'll never need more than a 1GHz processor; dual core is overkill; quad core is totally unnecessary; all you'll ever need for gaming is an i5... All of these have fallen aside in the long run. It is easily demonstrable that an i7 is a better gaming chip than an i5 at the same clocks (depending on the title). Games these days need about 6 threads to function well (which, unsurprisingly, is how many threads the consoles use for gaming).

What we are currently seeing is that a high-clocked quad core with 8 threads is better in games than a lower-clocked 8-core/16-thread part. I think that situation will start to change over the next few years, but this is like the prolonged move from dual core to quad core: it took YEARS before we started to get games that just don't run properly on dual cores (there are many titles now that still work on a high-clocked dual core). I think quad core / 8 thread will remain optimal for some time. That said, I wouldn't be against going 6 core / 12 thread (that 1600X looks tempting), as I can see that providing some good longevity. Games will move that way; in the meantime, it's unlikely Ryzen is actually going to hold you back.

The final point is this: it's worth noting that at higher resolutions, where we are typically "GPU bound", the 1800X is actually providing the *highest* frame rate (albeit by a small margin). That keeps getting overlooked, and I think it's important to understand what's going on in that situation: when you increase resolution you don't *just* increase GPU load. Many games use *CPU-based* effects in conjunction with the GPU, and by pushing such a high resolution you are likely loading the CPU more as well. Yes, in most games the draw calls to the GPU are the biggest load (especially in the contrived low-resolution tests), but there is more to a game than that. I would like to see a deep dive into exactly what is going on in the high-resolution tests where Ryzen is ahead. I'm not sure I'm totally comfortable with the established logic that "if the system pushes 300 fps at 1024x768, then with a powerful enough GPU it will push 300 fps at 1080p or even 4K", because that assumes *no other load will increase on the CPU at a higher resolution*. I don't think that is the case.

At a higher resolution you'll have a higher draw distance, which means more things on screen such as AI, which loads in more audio, more physics objects, and so on. I think with Ryzen the low-resolution tests are highlighting the memory latency as a bottleneck, which pushes Intel ahead, but I don't think you'll ever hit *that specific bottleneck* at substantially higher resolutions. The TL;DR to all this is that I suspect Ryzen might actually be consistently *as fast* or *faster* than Intel in many games at 4K even with faster GPUs: it has good single-thread performance, a huge chunk of resources, and so on. It seems to hit a roadblock when trying to push very high frame rates at lower resolutions, but I'm not convinced that points to an overall lack of performance in the way it did with the FX.
 
@Juanrga, just looking at the Tom's AOTS benchmarks: if you compare Ryzen @ 3.8GHz vs FX @ 3.8GHz, it's worth noting we are seeing 45% and 38% increases in average and minimum frame rates respectively.

That actually fits quite well with what we were originally expecting from Ryzen, and in that light is pretty acceptable. If we work it the other way, based on AMD's own 52% IPC uplift figure, we should see a 68fps average, which is interestingly *less* than the no-SMT results.

Essentially, based on where FX is, Ryzen is scoring very well and in line with the IPC gains, although it looks like SMT is holding it back a touch (which I expect can be patched out in newer titles going forward). The results really aren't that bad at all.
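The arithmetic above can be sketched out explicitly. Note the FX baseline FPS is an assumption, back-derived from the quoted "52% uplift -> 68fps" figure; only the percentages come from the post itself:

```python
# Assumed FX-8370 @ 3.8GHz average, implied by "52% uplift -> 68fps" (~44.7)
fx_avg = 68 / 1.52

observed_uplift = 0.45   # measured Ryzen gain over FX at matched clocks
expected_uplift = 0.52   # AMD's advertised IPC gain

ryzen_observed = fx_avg * (1 + observed_uplift)
ryzen_expected = fx_avg * (1 + expected_uplift)

print(round(ryzen_observed, 1))   # ~64.9 FPS actually measured
print(round(ryzen_expected, 1))   # 68.0 FPS if the full 52% were realized
```

So on these assumptions the measured result lands about 3fps shy of the full advertised IPC gain, which is consistent with the "SMT is holding it back a touch" reading.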

The main issues AMD is facing here: the IPC is good but not *as good* as Intel's, and due to the process, they aren't hitting the clock speeds they would need to challenge Kaby Lake directly in games. I've already said I don't see these results as terrible. Ryzen can most certainly game, and I think it could possibly excel if games are written to suit it better; it doesn't have the same fundamental weaknesses FX did. The Vulkan results are strong, and as an RTS fan I can see this being really impressive in a well-threaded engine. I fully expect great things from AOTS once they release the patch they are working on, although it won't be universally fast in the way the 7700K is.
 


With the exception of possibly doing some general optimization to deal with SMT effects, games should not be responsible for optimizing performance for specific CPU architectures. It's bad practice.

The "fix" is simple: Have AMD reuse the CPUID field Intel uses for HTT to broadcast which of their processors have some form of SMT. It's the same fix I proposed even before the Bulldozer launch.
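For context, the field in question is the HTT flag: CPUID leaf 1 reports it in EDX bit 28. A minimal decoder is sketched below; actually executing the CPUID instruction from Python is out of scope here, so the EDX values are illustrative inputs rather than live hardware reads:

```python
# HTT/SMT feature flag: CPUID.1:EDX bit 28 (per the x86 CPUID definition)
HTT_BIT = 1 << 28

def has_smt(edx: int) -> bool:
    """True if the HTT flag is set in a raw CPUID leaf-1 EDX value."""
    return bool(edx & HTT_BIT)

print(has_smt(0x10000000))  # flag set   -> True
print(has_smt(0x00000000))  # flag clear -> False
```

If AMD reports SMT through this same bit, OS schedulers and games that already special-case Hyper-Threading get correct sibling-thread awareness for free, which is the point of the proposed fix.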
 


Wow you got in at $1.60? I remember that and wanted to buy some but didn't have the cash (even without Zen AMD were seriously undervalued at that).

The stock is down to $13 according to my app, which is still pretty respectable. I'm guessing it will drop a bit lower ($10 worst case?) and then start climbing again once cheaper Zen parts are released and once Vega drops (so long as Vega doesn't get hit by a huge backlash for not being faster than the 1080 Ti; if they price it right, I don't see why it needs to be).

I think the issue is there are quite a few game-focused reviews being very harsh on Ryzen (GamersNexus did a pretty major hatchet job on it). I think the sentiment will shift significantly, though, when we see the same performance as the 1800X for a lot less from the lower-core-count parts (as has been pointed out, you don't need 16 threads if all you do is game). I also think GamersNexus is very wrong to dismiss the areas where Ryzen is strong; "if you render, use CUDA" was his attitude. As someone who does quite a bit of rendering for my job, I've never actually used a rendering tool that supports GPU acceleration (they are few and far between and very industry-specific). One of the biggest rendering suites out there at the moment for CAD-based rendering is KeyShot, which is purely CPU-based, and the devs have confirmed the methods they use *cannot be rewritten* for GPU acceleration. Ryzen would likely shine there, for example.

Anyway, the stock is just settling after the hype pushed it further than it should have gone. I don't see it dropping that far.
 

juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790
I said it, guys, I said it... Keller is not the lead architect of Ryzen. Keller is overhyped. Contrary to false information spread by hype sites like bitsandchips, WCCFTECH, and so on, Keller wasn't the lead architect of the K7 nor of the K8, and he didn't play any role in the development of x86-64. And now Lisa Su confirms he was not the lead architect for Zen:

Who had the biggest role in the creation of Ryzen? Was it you? Jim Keller? Someone else?
In terms of the creation of Ryzen, I am really really really PROUD of our team. To build something like Ryzen takes really smart people coming together around a big, audacious goal and the Ryzen team did it. The lead architect on Ryzen was a guy named Mike Clark and together with the entire global team, made Ryzen a reality.

https://www.reddit.com/r/Amd/comments/5x4hxu/we_are_amd_creators_of_athlon_radeon_and_other/def6i3q/

Next is a photo of the Zen team. Mike is the guy in front, and the woman is Suzanne Plummer, the lead of the Zen team.

rbb-AMD1.jpg
 

french toast

Distinguished
Jan 18, 2012
20
0
18,510
Gaming has been fixed, people; it's a BIOS update. If you want benchmarks of Ryzen, watch the Joker Productions video; he used the newer BIOS.
 

juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790


Same or lower clocks

AMD-Ryzen-5-1600X-and-Ryzen-5-1500X.jpg

 

NickatNight8320

Commendable
Jan 12, 2017
13
0
1,510


More memory channels will be better. Think of it like this (oversimplified):
1 stick gets you 8 transfers a second x 1, for a total throughput of 8.
2 sticks get you 7 transfers a second x 2, for a total throughput of 14.
4 sticks get you 6 transfers a second x 4, for a total throughput of 24.

I'm sorry, but I don't have a full understanding of how much data you can really push. The idea, though, is that with each step up you get a little slower per stick, but you have more data being pushed each cycle. Dual channel is far superior to single channel.

Looking at Intel's quad-channel memory, I'm guessing throughput far outpaces what the total system can make significant use of; otherwise we would see significantly better memory performance from the 6950X vs the i7-7700K or the Ryzen 7 1800X.
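The channel math above can be written as a tiny model. To be clear, these are the post's own self-described oversimplified illustrative figures, not real DDR4 transfer rates:

```python
# Toy model: each added memory channel costs a little per-stick speed
# but multiplies parallelism, so aggregate throughput still grows.
def total_throughput(sticks: int, per_stick: float) -> float:
    """Aggregate transfers/second across all sticks (toy units)."""
    return sticks * per_stick

print(total_throughput(1, 8))   # single channel -> 8
print(total_throughput(2, 7))   # dual channel   -> 14
print(total_throughput(4, 6))   # quad channel   -> 24
```

The model also captures the diminishing returns: doubling from 1 to 2 channels gains 75% here, while doubling again from 2 to 4 gains only about 71%, even before the system's ability to consume that bandwidth becomes the limit.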
 
The problem now for AMD, as I see it, is that there is a reverse "hype train" in progress. As others have pointed out, Ryzen is far from a disappointment, but that is what is being spread all over the net, including in this thread, and it is a shame. We are focusing on only one metric of the processor's performance, comparing it to much more expensive Intel parts (the i7-6900K being Intel's closest match in processor makeup), and saying it came up short or is a failure. Let's keep a few things in mind:

1. Computers weren't invented for the sole purpose of gaming
2. Ryzen just launched, and anyone who thought there wouldn't be a few bugs was delusional. Any time a new line of anything is launched there are always issues. How many recalls do most first-generation new cars have?
3. Ryzen is equal to or even outperforming Intel in the market where it stands to gain the most money.

Yes, there is some issue with the software, memory controller, or CPU-verification code that currently has Ryzen losing FPS in games. Even with the FPS loss, though, Ryzen is still very capable of gaming. This is not Bulldozer all over again: Ryzen may be behind Intel in gaming, but it is still very respectable. Ryzen is also a huge success when viewed as a pure workstation part; its rendering and server capabilities are very impressive.

Ryzen literally just released today. If Ryzen were a car, would anyone really believe it would go a full 4 or 5 years without several recalls? Any first-generation product has bugs to work out, and looking at Ryzen's bugs, they really don't seem that severe. Although I do believe Ryzen could benefit from a new, stronger memory controller, there should be good performance increases through BIOS and driver updates. I wouldn't be surprised, either, if more game developers started utilizing Vulkan, since AMD processors power the console systems that are their bread and butter. As benchmarks have shown, even with its current issues, Ryzen scores much better in games that utilize Vulkan. While I don't expect all of Ryzen's issues to be corrected through software / driver / BIOS updates and game optimizations, those things will help, maybe raising its in-game performance to equal Skylake, which is no gaming slouch. It probably won't be until Ryzen's second generation that gaming performance really equals Intel's.

Another thing to consider is that Ryzen is a workstation beast: it equals or even surpasses the i7-6900K. The gaming market has always been a nice niche to have in your back pocket, but the real money is in the server market, where AMD controls maybe 2% and Intel the other 98%. With Ryzen competing so well against Intel's closest "equal" processor, we can assume that Ryzen server processors are going to give Intel real headaches in the server market. Intel has a lot of ground to lose and AMD has a lot of ground to regain, but with its price-to-performance ratio, Ryzen is positioned perfectly to recapture a substantial portion of the server market, and that is where the money really is.

One last thought: considering that most of those nitpicking the heck out of Ryzen's gaming performance in this thread didn't think it could even equal Haswell, now that it is gaming at Haswell's level they are still nitpicking before AMD even has a chance to update BIOS and drivers. Let's see how much gaming performance AMD can squeeze out of Ryzen before jumping on the "failure hype train". Far from a failure, Ryzen has taken it to Intel's best in the workstation market and put AMD back into the very capable gaming-processor market it has been absent from since Bulldozer. Yes, Ryzen has definite room for improvement, but it is still a resounding success.
 

jdwii

Splendid
The Ashes of the Singularity CPU benchmarks seem very odd to me. I personally think something is up with that one benchmark; I mean, a 6900K easily beats the 7700K, but the 1800X loses to the 7700K when all are at 3.8GHz.
 
Just to clarify, I think people were comparing the 1800X to the 7700K as far as games are concerned, and to Haswell-level performance in absolute terms. That is, it may perform as well as Haswell in games on a clock-for-clock basis, but it hasn't reached Haswell-level clock speeds. So as an ultra-high-end gaming solution I think the criticism is fair enough right now, given the relative pricing and performance. But if the lower-core-count parts offer similar gaming performance (and there's no reason to think otherwise, given the threading performance), they could be poised to dramatically raise the bar for entry-level systems. If they can offer i5 6500-level performance at i3-level prices, it would completely redefine the "value" gaming market.
 


As far as clock speeds go, we really have to wait and see. I know reviewers are quoting a 3.9GHz max overclock on all 8 cores; however, I posted earlier that you can buy a professionally overclocked 4.2GHz R7 1800X package from Scan. Now, if they can overclock those processors stably to 4.2GHz and offer them in packages guaranteed to work, why could none of the reviewers do the same? If the R7 1800X can overclock to at least 4.2GHz, which I think Scan has proven, it only stands to reason that R5 and R3 parts with fewer cores can overclock even higher. Keep in mind that on average the i7-6900K (Intel's 8-core/16-thread offering) overclocks to 4.3GHz on all 8 cores, so a 4.2GHz overclock on the 8-core/16-thread R7 1800X is very good.

If AMD is also able to ship BIOS updates that improve performance, the R5s and R3s might prove to be effective gaming chips, although not at Kaby Lake's level.