Discussion AMD Ryzen MegaThread! FAQ and Resources

Ok I think this settles it quite nicely:
https://www.youtube.com/watch?v=ylvdSnEbL50

Here Adored TV have done an investigation into the idea that low-resolution tests = future performance with higher-end GPUs.

What they found is: the Core i5 2500K was around 10% faster than the FX 8350 on average in games at launch at 1080p. At 480p it was 17% faster on average using a GTX 680. Moving forward to the original Titan, the gap at 1080p and 480p remains pretty much the same. Forward to today, using the Pascal Titan, the gap in the most modern titles is actually *reversed*, and the FX 8350 is 10% faster on average than the 2500K. It basically proves that even the low-IPC 'failure' that is the FX line has in fact aged better than the 'best' gaming CPU Intel offered in the same generation, due to having more threads. If you still think a 4-thread i5 is a better investment than Ryzen in games today, I don't know what more I can say, but basically more cores = better longevity.
 

salgado18

Distinguished
Feb 12, 2007


Also, Ryzen is killing dual-cores too, since the base Ryzen 3 starts at 4 cores. Some APUs and mobile chips might still be dual-core, but those will definitely have SMT enabled and will be very low-end.

My sig quotes a big truth today ;)
 

Embra

Distinguished


palladin9479. Wow... now there is a name I miss seeing around. I always appreciated his insight and thoughts.
 



No, Intel has not reduced the price of the i7 at all yet.
It's still around $340 to get a Kaby Lake i7.
 


As I understand it (and it's specifically only in a few titles), it's because Windows / the game engine doesn't understand which logical processor is a physical 'core' and which is the extra 'SMT' thread.

The SMT threads add about 20% of the performance of a full core, so for optimal performance you need to load all 8 cores first, then start adding extra tasks to the 8 SMT threads.

The way the cores / SMT threads are organised is like this:
Core 1, SMT 1, Core 2, SMT 2, etc.

What appears to be happening in some titles is that the game simply loads up the first threads it finds, which means it's putting load on SMT threads rather than physical cores. Disabling SMT forces the application to only use the primary cores, which are faster, hence performance goes up.

The thing is, this used to be the case on Intel too when the first-generation i7 came out: back then, turning off HT boosted performance a bit. That isn't the case now (even on the same first-gen i7), as software knows which threads to prioritise. So essentially it's a software glitch rather than a fundamental problem with Ryzen.
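A minimal sketch of the workaround described above, ASSUMING the alternating core/SMT enumeration holds (logical processors 0, 2, 4, ... are physical cores). A real application should query the topology with GetLogicalProcessorInformationEx rather than assume a layout:

```c
/* Minimal sketch (Windows): prefer physical cores before SMT siblings,
 * ASSUMING the alternating layout described above (logical processor 0 =
 * core 1, 1 = its SMT sibling, 2 = core 2, ...). Real code should query
 * the topology with GetLogicalProcessorInformationEx instead. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    SYSTEM_INFO si;
    GetSystemInfo(&si);

    /* Set every even bit: the physical cores under the assumed layout. */
    DWORD_PTR physical_only = 0;
    for (DWORD i = 0; i < si.dwNumberOfProcessors; i += 2)
        physical_only |= (DWORD_PTR)1 << i;

    /* Restrict the process to those logical processors; the scheduler
     * then fills real cores before touching any SMT sibling. */
    if (!SetProcessAffinityMask(GetCurrentProcess(), physical_only))
        fprintf(stderr, "SetProcessAffinityMask failed: %lu\n", GetLastError());
    else
        printf("Running on physical cores only, mask 0x%llx\n",
               (unsigned long long)physical_only);
    return 0;
}
```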
 


Keep in mind, Intel created a CPUID bit to identify CPUs with HTT support for EXACTLY this reason. AMD should re-purpose the same bit, which would solve all these SMT issues instantly.
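For reference, that flag is the HTT bit in CPUID leaf 1, EDX bit 28. A tiny check using the MSVC __cpuid intrinsic (whether any given game actually keys its thread placement off this bit is, of course, the speculation in this thread):

```c
/* The flag in question: CPUID leaf 1 reports HTT in EDX bit 28. */
#include <intrin.h>
#include <stdio.h>

int main(void)
{
    int regs[4]; /* EAX, EBX, ECX, EDX */
    __cpuid(regs, 1);
    printf("HTT flag (CPUID.1:EDX[28]): %s\n",
           (regs[3] >> 28) & 1 ? "set" : "clear");
    return 0;
}
```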
 


This is hopefully exactly what they will do.
 

truegenius

Distinguished
BANNED
I think this article can explain how much of an issue slow memory is now:
http://www.eurogamer.net/articles/digitalfoundry-2016-is-it-finally-time-to-upgrade-your-core-i5-2500k
Ryzen is around Haswell IPC, and its turbo is also mediocre compared to Intel's, so expect only around Haswell-level gaming performance from it.
Also, I don't think scheduling is causing the performance issue; a 2-3% performance difference is too small to pin it on that. If the OS were scheduling the first thread on core 1 and the second thread on the SMT sibling of the same core, we would be getting only 145% of single-core performance (as SMT gives about 45% extra) instead of the 200% that two physical cores would give — a much bigger difference than just 2-3%.
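To make that arithmetic concrete, a toy calculation using the ~45% SMT uplift quoted in the post:

```c
/* Toy arithmetic behind the argument above: two threads on two cores
 * vs. both threads crammed onto one core + its SMT sibling. */
#include <stdio.h>

int main(void)
{
    double two_cores    = 100.0 + 100.0; /* each thread on its own core  */
    double core_and_smt = 100.0 + 45.0;  /* second thread on SMT sibling */
    printf("expected deficit: %.1f%%\n",
           100.0 * (two_cores - core_and_smt) / two_cores);
    /* ~27.5% -- an order of magnitude above the observed 2-3%. */
    return 0;
}
```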
 


If I'm not mistaken, he was saying that Intel is going to take their 4-core i7 and rename it an i5, and more than likely lower its price a bit. If Intel is going to give hyper-threading to the i5, then there is no difference between it and the 4-core i7. If this occurs, then the high-end i5 will be 4 cores / 8 threads and the base i7 will be 6 cores.
 


So in other words, "Scotty" from the previous link was right, and testing at 480p resolutions is total <mod edit>. It doesn't accurately show the future performance of a processor with later generations of GPUs, so why do we even use it as the "standard"? Why aren't we testing 1080p at all-Ultra settings? Let's face it, you're not going to buy a high-end processor and GPU and set your gaming experience to 480p. At least benchmarking 1080p at all-Ultra settings realistically shows where most people are going to be gaming, and how the various processors compare to each other at the most popular settings. In the previous link we saw that at 1080p and Ultra settings, Ryzen keeps up pretty well with Intel.
 
I think a few people have spotted it, but the elephant in the room, imho, is that the gaming CPU market is a pittance compared to the productivity market, where this chip appears to excel: doing no worse than trading blows with more expensive competitors, and in some cases having them for lunch.
There are so many potential optimizations to be made (SMT, latency, BIOS, drivers, etc.), and if each of them adds a small improvement, collectively this new chip may yet improve remarkably.
So essentially, after more time to contemplate this offering, I can no longer justify my initial disappointment. This is a good chip, it will likely get better, and subsequent versions might go even further. I've been all-Intel since the retirement of Omega (FX-8320), but I see a Ryzen build in my near future, likely the R7 1700.
 

jaymc

Distinguished
Dec 7, 2007


It has to be said, this was posted last night by thegentlewoman. Excellent video!!!
Very well put together, and it makes some excellent points.

Also, I love the points being made about multiplayer gaming; very interesting. We may need to find a new way to test altogether...
It seems most reviewers' testing methods don't really apply in the real world, as most gamers love online multiplayer. This is where it's really at, and this is where dropping frames will cost the most, as milliseconds seem to count more than ever: the slightest lag and you're toast.
 


They didn't for Bulldozer. Instead, they relied on kernel patches for a specific CPU architecture, which naturally broke for the next one. The *correct* way is to fix it on their end so this never becomes an issue again, but I suspect they'll push it to MSFT/developers to put in special hardcoded paths for Ryzen instead.
 

salgado18

Distinguished
Feb 12, 2007




Ok, so I took TechSpot's review and did the math:

Out of 32 tests (16 games at 1080p and 1440p), 15 resulted in improved performance with SMT disabled. A few got worse, and the rest showed no effect.

But if you assume they solve the SMT issue, then only the games that ran better without SMT will see gains, while the others will not (since they already look optimized, or SMT has no effect). So I took the difference in each scenario where SMT hurts performance, and found this:

SMT on     SMT off    Gain (%)   Game
min/avg    min/avg    min/avg
56/94      59/96      5.4/2      Hitman
50/71      53/72      6/1        Civilization VI
47/70      52/70      10.6/0     Civilization VI
133/166    146/170    9.8/2.4    Overwatch
95/136     98/139     3.2/2.2    Overwatch
104/126    106/131    1.9/4      Gears of War 4
102/126    105/130    2.9/3.2    Gears of War 4
57/79      58/93      1.8/17.7   Deus Ex
85/120     99/128     16.5/6.7   F1 2016
87/111     90/114     3.5/2.7    F1 2016
65/78      71/88      9.2/12.8   Total War: Warhammer
65/79      70/86      7.7/8.9    Total War: Warhammer
87/104     96/111     10.3/6.7   GTA V
60/88      65/100     8.3/13.6   Far Cry Primal
61/86      66/93      8.2/8.1    Far Cry Primal

which resulted in a 6.98% average increase in min FPS, and 6.13% for average FPS. Highlights: F1 2016 at 1080p, with a 16.5% increase in minimum FPS, and Deus Ex at 1080p, with a 17.7% increase in average FPS.

Also, this only considers removing the SMT penalty from the games currently hindered by it; if those games went on to actually take advantage of SMT, performance would grow even higher.

Solving the SMT problem will bring A LOT to the table. It won't solve the latency issues, but many reviews would have to be redone, and I think conclusions would change. AMD really should have waited a bit more to launch Ryzen.
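For anyone who wants to check the arithmetic, a quick sketch that recomputes the averages from the table above (small rounding differences from the hand-computed 6.98/6.13 are expected, since those were averaged from already-rounded percentages):

```c
/* Recompute the average gains from the table above (TechSpot's numbers).
 * Each row: { min SMT, avg SMT, min no-SMT, avg no-SMT }. */
#include <stdio.h>

int main(void)
{
    static const double fps[][4] = {
        { 56,  94,  59,  96}, { 50,  71,  53,  72}, { 47,  70,  52,  70},
        {133, 166, 146, 170}, { 95, 136,  98, 139}, {104, 126, 106, 131},
        {102, 126, 105, 130}, { 57,  79,  58,  93}, { 85, 120,  99, 128},
        { 87, 111,  90, 114}, { 65,  78,  71,  88}, { 65,  79,  70,  86},
        { 87, 104,  96, 111}, { 60,  88,  65, 100}, { 61,  86,  66,  93},
    };
    const int n = sizeof fps / sizeof fps[0];
    double min_gain = 0.0, avg_gain = 0.0;

    for (int i = 0; i < n; i++) {
        min_gain += 100.0 * (fps[i][2] - fps[i][0]) / fps[i][0];
        avg_gain += 100.0 * (fps[i][3] - fps[i][1]) / fps[i][1];
    }
    printf("mean min-FPS gain: %.2f%%\n", min_gain / n); /* ~7.0% */
    printf("mean avg-FPS gain: %.2f%%\n", avg_gain / n); /* ~6.2% */
    return 0;
}
```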
 


But here's the issue: those are markets where purchasing decisions are driven entirely by management, not by individual needs. There are likely bulk-buy plans in place, with favorable pricing/support. And that hurts AMD's ability to break into those markets.

And that's the issue I've foreseen for some time: yes, price-wise, Ryzen is competitive. But that isn't enough. AMD has to convince businesses to dump Intel, and that's going to take a very long time to accomplish. And all the while, Intel will prepare their response.

We'll see, I guess.
 


It's interesting, as you'd think AMD would know about the 'is HTT' bit and have included it on the CPU. I wonder if this bit is in fact present and it's something else that's the issue. Ryzen is a good design, so I think it's safe to say AMD aren't totally incompetent.

Actually, thinking about it, the SMT problem only appears to be an issue on *DX12* titles. I wonder if AMD aren't allowed to utilize the same tag Intel do (for IP reasons)? Or if the DX12 titles only look for the HTT tag when there is a 'GenuineIntel' string on the CPU? SMT looks to work fine under DX11 (where, as I understand it, the thread handling is abstracted from the game engine), which suggests Windows is SMT-aware and the problem lies elsewhere... I mean, we're not seeing poor SMT scaling in other workloads either.
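Purely to illustrate that hypothesis (this is speculation about the engines, not confirmed behaviour): detection code that gates its SMT handling behind the vendor string from CPUID leaf 0 would ignore the HTT flag on anything that isn't "GenuineIntel", even if the flag were reported.

```c
/* Speculative illustration of the hypothesis above (NOT confirmed engine
 * behaviour): a vendor-string check that short-circuits the HTT check. */
#include <intrin.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    int regs[4];
    char vendor[13] = {0};

    __cpuid(regs, 0);                 /* leaf 0: vendor in EBX, EDX, ECX */
    memcpy(vendor + 0, &regs[1], 4);  /* EBX */
    memcpy(vendor + 4, &regs[3], 4);  /* EDX */
    memcpy(vendor + 8, &regs[2], 4);  /* ECX */

    __cpuid(regs, 1);
    int htt = (regs[3] >> 28) & 1;    /* HTT flag, CPUID.1:EDX[28] */

    /* The suspected bug: vendor check gates the SMT awareness. */
    if (strcmp(vendor, "GenuineIntel") == 0 && htt)
        printf("SMT-aware thread placement enabled\n");
    else
        printf("treating every logical processor as a full core\n");
    return 0;
}
```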

 

jaymc

Distinguished
Dec 7, 2007
Hopefully it is for IP reasons that they can't use Intel's HTT system, and not pure stubbornness.

It's not quite the same, though (correct me if I'm wrong here): where Intel uses one shared level 3 cache, AMD uses 8 MB shared per CCX... this means keeping threads with related workloads on the same CCX, or suffering a cache-copy (CCX to CCX) penalty...

i.e. maybe some more elaborate scheduling system is required...
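A sketch of what that could look like in practice, ASSUMING a Ryzen 7 layout of two 4-core CCXs with SMT, enumerated core/sibling pairs first (logical processors 0-7 on CCX 0, 8-15 on CCX 1); the mask is illustrative, not queried from the hardware:

```c
/* Sketch of CCX-aware placement: pin threads that share data to one CCX
 * so they all hit the same L3 and avoid cross-CCX cache traffic. */
#include <windows.h>
#include <stdio.h>

#define CCX0_MASK 0x00FFu /* logical processors 0-7 under the assumption */

static DWORD WINAPI worker(LPVOID arg)
{
    (void)arg;
    SetThreadAffinityMask(GetCurrentThread(), CCX0_MASK);
    /* ... work on the shared data set, all within CCX 0's L3 ... */
    return 0;
}

int main(void)
{
    HANDLE t[4];
    for (int i = 0; i < 4; i++)
        t[i] = CreateThread(NULL, 0, worker, NULL, 0, NULL);
    WaitForMultipleObjects(4, t, TRUE, INFINITE);
    for (int i = 0; i < 4; i++)
        CloseHandle(t[i]);
    puts("CCX0 workers done");
    return 0;
}
```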
 


Well, I think there is demand for some differentiation in the market, and AMD are an established name. I think the question is not whether Ryzen and Naples will be successful, but more a case of scale. On the server side of things AMD currently have essentially zero presence; however, looking back historically, when they did have a competent server part they picked up a respectable chunk of the market (up to about 20%). Given the margins involved in that space, even if they only pick up a few percentage points of market share, that could really make a big impact on their bottom line. Given how competitive Ryzen is, I think they have a shot at a reasonable uptick in server sales as well. If they could hit even 10% market share they'd make a killing. I agree they face an uphill struggle, but the numbers they need to hit are achievable imo.

@Juan - I get where you're coming from on the CCX issue for the high-core-count part. Still, I wonder if this is exactly why AMD are targeting things such as virtualization servers for web services? If they effectively break a 32-core Ryzen CPU down into multiple single-core virtual servers, the issue of copying data between CCXs goes away entirely. I certainly think there are enough positives in the design to make it very compelling in a number of markets.
 


Except the i5 2500K and the 8350 aren't the same generation. The 2500K was released in January 2011, and the 8350 in October 2012, 1 year and 9 months later. Ivy Bridge had already been out for 6 months by then. If you look at the charts in that video, all the Ivy Bridge chips maintain their lead over the 8350, although the charts stop following the i5 before the full 4 years are up. The 3770K actually increases its lead, which does support the overall trend towards higher core utilization. But again, you're comparing a 6-year-old i5 2500K to a 4-year-old 8350. More comparable would be the Bulldozer FX-8150 (October 2011, 9 months after Sandy), which has not aged as well despite the higher core count.

Edit: Typo. Bulldozer came out 9 months after Sandy, not Ivy.
 
Just another interesting read that delves a little into the SMT issues currently facing Ryzen.

http://www.pcgamer.com/the-amd-ryzen-7-review/

At least we know one thing for sure: Ryzen isn't Bulldozer all over again. Ryzen actually has good single-core performance in benchmarks like Cinebench, and that fuels the hope that the gaming issues can be ironed out via software / driver / BIOS updates. Bulldozer / Piledriver never really had a chance in gaming (vs Intel's best), even with game optimizations, due to low single-core performance; Ryzen, on the other hand, is not handicapped by that issue. In fact, looking at all the "workstation" benchmarks, Ryzen should by all accounts be trading blows with Intel's best in modern games. It's going to be very interesting to see how much gaming improvement Ryzen can get from patches in the next couple of months.
 

jaymc

Distinguished
Dec 7, 2007


I think that was me talking about copying data between CCXs (scheduling)?

My point being, it's not just as simple as using Intel's hyper-threading method...

Fair point, though: breaking the CCXs into individual servers or CPUs would work nicely. It should help keep related threads going to the relevant CCX without so many cache copies going on across CCXs.