Discussion AMD Ryzen MegaThread! FAQ and Resources


8350rocks

Distinguished


No, i5s will not receive Hyper-Threading. They will just bump core counts to 6, supposedly on 8th Gen when it launches in Q2 2018.
 

8350rocks

Distinguished


If by "this year" you mean "in the next 15 months", then sure.

As it stands, Intel announced 8th Gen coming in Q2 2018.
 

jaymc

Distinguished
Dec 7, 2007
614
9
18,985
Another sales pitch for Zen, or Naples, is built-in hardware security, something that Intel is not offering at the moment:
SME (Secure Memory Encryption), SEV (Secure Encrypted Virtualization) and hardware-based SHA, powered by a security co-processor.

This could help them break back into the server market, along with the fact that AMD can provide a full solution as a CPU/GPU package deal... Zen/Vega... at cheaper rates than if the customer had to purchase both solutions separately from Intel/Nvidia.
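If anyone wants to see how software actually detects those features: AMD documents them behind CPUID extended leaf 0x8000001F, with EAX bit 0 reporting SME and bit 1 reporting SEV. A minimal probe, assuming GCC/Clang on an AMD box:

```cpp
#include <cpuid.h>
#include <cstdio>

int main() {
    unsigned eax = 0, ebx = 0, ecx = 0, edx = 0;
    // __get_cpuid returns 0 when the leaf is above the CPU's maximum
    // supported extended leaf (e.g. on older or non-AMD parts).
    if (!__get_cpuid(0x8000001F, &eax, &ebx, &ecx, &edx)) {
        std::puts("CPUID leaf 0x8000001F not supported");
        return 0;
    }
    std::printf("SME: %s, SEV: %s\n",
                (eax & (1u << 0)) ? "yes" : "no",
                (eax & (1u << 1)) ? "yes" : "no");
}
```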
 

juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790


Nice work. Note that the 6-7% average you got is the result of manually enabling/disabling SMT after testing which is the better option on a title-by-title basis. That is a best-case scenario. No one is going to ship a scheduler with specific profiles for each one of the games that exist or will exist in the future. A real scheduler has to produce a schedule on the fly, which is less optimal. Thus a supposed future patch will provide an average performance gain of about 5% or less.

Recall what happened with the Bulldozer patches for Windows. Installing them increased performance on average by a ridiculous 2-5%. We are in a similar scenario now.

This supposed 5% extra performance from some future Ryzen patch doesn't solve the ~20% deficit that Ryzen has compared to BDW-E.
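To spell out what a "per-title profile" would literally mean, here is a hypothetical sketch: a lookup table from executable name to an affinity policy, applied by some imaginary launcher. The table entries, ApplyProfile, and the 0x5555 mask are all illustrative assumptions (the mask keeps one logical processor per core only if SMT siblings are numbered adjacently, 00112233-style); nothing like this ships from AMD or Microsoft.

```cpp
#include <windows.h>
#include <map>
#include <string>

enum class SmtPolicy { UseAllThreads, PhysicalCoresOnly };

// Hypothetical per-title table: exactly the kind of per-game profile
// nobody is realistically going to maintain for every game in existence.
static const std::map<std::wstring, SmtPolicy> kProfiles = {
    {L"game_that_likes_smt.exe", SmtPolicy::UseAllThreads},
    {L"game_that_hates_smt.exe", SmtPolicy::PhysicalCoresOnly},
};

static void ApplyProfile(HANDLE process, SmtPolicy policy) {
    if (policy == SmtPolicy::PhysicalCoresOnly) {
        // Illustrative mask for an 8c/16t part whose SMT siblings are
        // numbered adjacently: every other bit keeps one logical
        // processor per physical core.
        SetProcessAffinityMask(process, 0x5555);
    }
}

int main() {
    // Stand-in for a launcher applying the "SMT off" profile to a game.
    ApplyProfile(GetCurrentProcess(), SmtPolicy::PhysicalCoresOnly);
}
```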
 

jdwii

Splendid


Sure, it's not bad. It's not like games are unplayable even on an 8150; hell, a Pentium with HT will game just fine in most cases. It's getting i3 performance with a $65 price tag.

The point is, it's 20% slower in GTA V in both FPS and frame times in the article you mentioned, and in Crysis 3 it's close enough IMO. If we get an SMT patch with Windows 10, maybe things will look a bit better.

My main issue is that in games that can use 8 cores, the 1800X still loses to a 7700K. If that isn't down to scheduling issues (and it might be), then I can NOT recommend a Ryzen part over an i7, even more so when a 7700K costs the same as a 1700, which OCs terribly in REALISTIC cases like water or air cooling. (Sorry, someone posted some crazy results and called Ryzen a winner because it could OC to 5.8 or something with LN2; I just about spit water on my monitor.)

"Does Ryzen 7 REALLY suck for gamers?" by JayzTwoCents, just in case some people might think he is paid off or believe some other conspiracy theory:
https://www.youtube.com/watch?v=8-mMBbWHrwM

 

jdwii

Splendid


Compare an 8150 to a 2500K. Also, the 8150 cost $270, not $230, so keep that in mind; you should be comparing the 8120, I think, to the 2500K, and in that case, no, the 2500K will beat it, even more so since the 2500K could OC to 4.4-4.6 GHz for a decent number of people. Also, it took years for Piledriver to even look this good, and it's still trading blows with an i3 today, so I can not say it's worth it to have lower performance for 3-4 years. Then again, whether it is is up to the builder.

I'd much rather have fantastic performance for 4 years than have mediocre performance for 4 years and only keep up with a $130 processor in the 5th year.

If an SMT patch or driver/BIOS fix happens and solves these issues, I will however recommend the 1700 or R5 to gamers. I really think even now a hypothetical R3 would do nicely for budget gamers if the price is at $130. My main issue was never with Ryzen but with how AMD and the fanbase were acting.
 

juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790


Or maybe the design with a partitioned LLC linked via a slow interconnect is just ridiculous to start with, and no magic scheduler will hide its deficiencies. I have been trying to think of another similar design and can't remember any. Everyone (Intel, ARM, IBM, Sun, ...) uses a unified LLC.

For practical purposes, Ryzen works as two quad-core CPUs glued together on a die.
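To put a rough number on that "glue", here is a crude core-to-core ping-pong probe of the kind reviewers use to expose the CCX boundary: pin two threads to two logical CPUs and bounce a cache line between them. The CPU numbers (0 and 8) are illustrative assumptions; where the second CCX starts depends on how Windows enumerates the part.

```cpp
#include <windows.h>
#include <atomic>
#include <chrono>
#include <cstdio>
#include <thread>

std::atomic<int> flag{0};

// Pin the current thread to one logical CPU, then take turns flipping a
// shared flag. Every flip forces the cache line across the interconnect.
static void bounce(int cpu, int me, int other, int iters) {
    SetThreadAffinityMask(GetCurrentThread(), 1ull << cpu);
    for (int i = 0; i < iters; ++i) {
        while (flag.load(std::memory_order_acquire) != me) { /* spin */ }
        flag.store(other, std::memory_order_release);
    }
}

int main() {
    const int iters = 100000;
    auto t0 = std::chrono::steady_clock::now();
    // CPUs 0 and 8: likely different CCXs on an 1800X; try 0 and 2 for a
    // same-CCX baseline. Cross-CCX round trips should be markedly slower.
    std::thread a(bounce, 0, 0, 1, iters);
    std::thread b(bounce, 8, 1, 0, iters);
    a.join(); b.join();
    auto ns = std::chrono::duration_cast<std::chrono::nanoseconds>(
                  std::chrono::steady_clock::now() - t0).count();
    std::printf("avg round trip: %lld ns\n", (long long)(ns / iters));
}
```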
 


Q6600 anyone?

In any case, Juan, it doesn't matter how awful or elegant a solution is; if it works, deal with it. Now, Intel seems to have the better and more elegant solution, but if it weren't the better performer, people wouldn't care.

The Phenom I (good ol' Barcie) was a "true" quad core and a very nice solution, but the Q6600 (and the other Kentsfield successors) was better, so no one really cared that it was two dual cores slapped together.

I think you're missing the point with the CCX arrangement, and it's a very clever one. I won't share my opinion, because it is plenty obvious and you might already even know why.

Cheers!
 

jaymc

Distinguished
Dec 7, 2007
614
9
18,985


Doesn't DX12 allow the game engines themselves to allocate processor resources specifically, on a game-by-game basis (instead of Windows doing it, that is)?
So each game can uniquely tune the CPU resources to suit its individual needs?
 

jaymc

Distinguished
Dec 7, 2007
614
9
18,985


Agreed... and maybe this is how Windows and the game engines need to view it... as two quad cores, that is.

Doesn't sound as good as an octa-core, does it :)

And this is also why simply using Intel's CPUIDs would not be the most efficient solution...

@yuka
I guess you mean the fact that they can replace one of the CCXs with a Vega GPU... it's pretty clever in that sense for sure mate ;)

http://www.pcgamer.com/amds-shows-off-how-crazily-zen-can-scale/
 

eathdemon1

Reputable
Oct 30, 2016
40
0
4,530

For that to happen, MS would have to add it to the Home version of Windows 10. Right now, multi-CPU support is locked to the Pro version.

 

salgado18

Distinguished
Feb 12, 2007
966
426
19,370


I so love this part :D

Running a seismic analysis workload that involves computationally intensive 3D wave equations, which taxes the entire system—cores, memory, and IO—Naples basically destroyed Intel's Xeon server.

Could this mean, and I'm just dreaming here, that a 12c/24t desktop 1900X is possible in the future?

Also, how big a GPU could fit in that space?
 

juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790


My point is not about "elegance", but about the performance deficits. Note as well that I was the first guy to predict the quad-core clusters... I did this as early as 2014, when the codename Zen was not yet public. I predicted it correctly because I was able to infer AMD's modular plans as a way to reduce R&D costs. What I couldn't imagine is that they would take a quad-core modular approach with a split LLC and then get trapped in the difficulties they are experiencing now.
 

jaymc

Distinguished
Dec 7, 2007
614
9
18,985


I dunno... good question. How many CCXs can fit into an AM4 socket? Are we maxed out at two?
 

jaymc

Distinguished
Dec 7, 2007
614
9
18,985


Does that mean they could get 4 into the same space, or 3 plus a GPU, into an AM4??

The mind boggles.... :??: Whaaaaat!

So we could be looking at 12c/24t up to even 16c/32t, theoretically speaking. Or even a 12c/24t with a GPU? Would the pin grid array handle the throughput?

So how many CCXs can an AM4 platform actually support??

AM4 supports PCIe 3.0, up to 24 lanes.

@salgado18 While we're dreaming, we might as well dream big...
 

jaymc

Distinguished
Dec 7, 2007
614
9
18,985
"40%+ More Pins Than AM3+ & New Mounting Hole Spacing
Just like previous generation AMD sockets AM4 is a Pin Grid Array socket with zero insertion force. It maintains the same locking mechanism as the previous generation and takes up exactly the same area as AM3+, measuring in at 40x40mm. One of the most significant changes however is the number of pins. AM4 supports 1331 pins which is a very significant increase over the 942 of AM3+ and quite a bit more than Intel’s LGA1151. In fact this makes it the first PGA socket that we’ve seen from AMD to accommodate more contact pins than Intel’s LGA."

Can't help wondering if all these extra pins were there to accommodate the increased throughput of adding extra CCXs... maybe the Ryzen 1900X is a 12c/24t part.
 


In theory, yes. But then developers are basically overriding the OS scheduler, which can lead to a LOT of problems as new CPUs get released that aren't accounted for in the code. It's also a lot of work to confirm that your custom partitioning is doing a better job than the OS scheduler is. I'd say in 99.9% of use cases, it's not worth the effort to go out of your way to optimize over the scheduler. Scheduling is inherently an OS function, and unless there's a very specific reason to, it's generally best to let the scheduler handle scheduling threads.

For example, Ubisoft has a tendency to manually lock threads to cores. This is bad design practice, as they are making assumptions about CPUs that may not be true in the future, which could affect performance or even outright prevent titles from running. Case in point: some of their games lock a major thread to the fourth processor core (Core 3). Depending on CPU topology, this could actually be a logical core rather than a physical one, and this approach could easily kill performance if the topology for an SMT-capable CPU is in the form 00112233 rather than 01230123. Like Ryzen might be. Whoops.
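To illustrate the safer alternative: instead of hard-coding "CPU 3", a title can ask Windows for the actual physical-core layout and derive masks from that. A minimal sketch using the documented GetLogicalProcessorInformationEx API (error handling trimmed; the commented-out affinity call shows where a game would act on the result):

```cpp
#include <windows.h>
#include <cstdio>
#include <vector>

int main() {
    // First call just reports the required buffer size.
    DWORD len = 0;
    GetLogicalProcessorInformationEx(RelationProcessorCore, nullptr, &len);
    std::vector<char> buf(len);
    auto* base = reinterpret_cast<SYSTEM_LOGICAL_PROCESSOR_INFORMATION_EX*>(buf.data());
    if (!GetLogicalProcessorInformationEx(RelationProcessorCore, base, &len))
        return 1;

    // One record per *physical* core; its mask covers that core's logical
    // processors (the SMT siblings), whatever numbering scheme is in use.
    int core = 0;
    for (DWORD off = 0; off < len;) {
        auto* info = reinterpret_cast<SYSTEM_LOGICAL_PROCESSOR_INFORMATION_EX*>(buf.data() + off);
        KAFFINITY mask = info->Processor.GroupMask[0].Mask;
        std::printf("physical core %d -> logical mask 0x%llx\n",
                    core++, (unsigned long long)mask);
        // A game wanting "the fourth physical core" would pin to this mask
        // rather than to logical CPU 3:
        // SetThreadAffinityMask(GetCurrentThread(), mask);
        off += info->Size;
    }
}
```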
 

jaymc

Distinguished
Dec 7, 2007
614
9
18,985




Thanks for that, interesting stuff.
When AMD said they were working with developers directly to show them how to get the most from Ryzen, I just presumed this is what they meant. I presumed software would identify the CPU in question and use an if/then to allocate threads.

Can't help wondering what optimizations AMD are talking about when they speak of working directly with game developers... Or are they simply trying to get everyone to develop games that make use of more cores...
 


I've always been in the camp that, barring some serious mismanagement on the part of the OS, the OS scheduler should handle the scheduling details. There are simply too many conditions out there where wrong assumptions can kill performance.

Now, that isn't to say you might not have some CPU-architecture-specific code path for one reason or another. For example, if I were doing AES workloads, I would probably have a dedicated path for Ryzen to take advantage of its AES engine. But as far as scheduling goes, I leave it to the scheduler to manage.
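Roughly like this; the encrypt_* functions are hypothetical stand-ins for the two code paths, and the feature bit (CPUID leaf 1, ECX bit 25 = AES-NI) is the documented one that Ryzen and recent Intel parts both set:

```cpp
#include <cpuid.h>   // GCC/Clang; MSVC would use __cpuid from <intrin.h>
#include <cstdio>

// Hypothetical stand-ins: a real build would compile the fast path with
// -maes and the _mm_aesenc_si128 family of intrinsics.
static void encrypt_aesni()    { std::puts("AES-NI path"); }
static void encrypt_portable() { std::puts("portable path"); }

static bool has_aesni() {
    unsigned eax, ebx, ecx, edx;
    if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx)) return false;
    return (ecx & (1u << 25)) != 0;   // CPUID.1:ECX bit 25 = AES-NI
}

int main() {
    // Dispatch once on a feature flag; scheduling stays the OS's job.
    if (has_aesni()) encrypt_aesni();
    else             encrypt_portable();
}
```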
 

jaymc

Distinguished
Dec 7, 2007
614
9
18,985
Do you think these optimizations they speak of will have much of an impact on FPS in games running on Ryzen?

I know that's the million-dollar question... I know they are trying to get people to code for more cores, and trying to get developers to make better use of DX12 and Vulkan. But coding for more cores will benefit Intel multi-core CPUs just as much. Can't help wondering what these optimizations are that they speak of when it comes to game developers.
 

Nope 1151

Commendable
Feb 8, 2017
70
0
1,630
I think it depends on what AMD meant by "optimizations".
If it is à la Faildozer, as in "code for moar cores", it's probably not going to happen yet.
If it is a scheduling bug, maybe.

Side note: Has anyone seen the price drops on previous-gen AM3+ CPUs and mobos? TIME TO FIX THAT X6 1045, HERE I COME! (DDR3 RAM too :) )
 


From my perspective, if the CPU is doing enough that the GPU is constantly loaded, there really isn't much more you can do. Point being, if four cores are enough to max out the GPU, using more cores won't have any real benefit by itself.

Now, if other things are going on in the system, having more threads doing less work is a good thing, as you reduce the chance of any individual core getting overloaded. But for a *single* application, if the CPU is doing enough, then there really isn't much point in complicating things for little to no performance benefit.

The *real* advantage in DX12 is the fact that you no longer have a strict GPU pipeline; you can use unused resources in one stage to start the next, which could drastically improve performance. But all the other low-level stuff won't amount to much while adding a LOT of difficulty to code, and will likely fall out of favor over time.
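For the curious, that "no strict pipeline" point is exposed in D3D12 as multiple command queues: a direct (graphics) queue plus an async compute queue that can fill gaps in GPU utilization. A minimal sketch, assuming device creation succeeds and eliding all error handling (link against d3d12.lib):

```cpp
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Create one graphics queue and one async compute queue. Work submitted to
// the compute queue can execute while the graphics queue is busy; ID3D12Fence
// objects synchronize the two wherever their results must meet.
static void CreateQueues(ID3D12Device* device,
                         ComPtr<ID3D12CommandQueue>& gfxQueue,
                         ComPtr<ID3D12CommandQueue>& computeQueue) {
    D3D12_COMMAND_QUEUE_DESC desc = {};
    desc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;   // graphics + compute + copy
    device->CreateCommandQueue(&desc, IID_PPV_ARGS(&gfxQueue));
    desc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;  // compute + copy only
    device->CreateCommandQueue(&desc, IID_PPV_ARGS(&computeQueue));
}

int main() {
    ComPtr<ID3D12Device> device;
    D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0, IID_PPV_ARGS(&device));
    ComPtr<ID3D12CommandQueue> gfx, compute;
    CreateQueues(device.Get(), gfx, compute);
}
```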