AMD's Future Chips & SoC's: News, Info & Rumours.

Page 115
Dec 27, 2019
Let's see what the heat dissipation looks like at those clocks; just because you can clock north of 5GHz doesn't mean you should.
I think they will beat them with increases in IPC, not frequency; I really don't expect Zen 3 to come in with any clock speed enhancements.
 
Unifying the L3 cache across the CCX and adding a few more cores to it should make the CCX easier to clock higher, if it reduces the total number of transistors for the same performance level, I think? I wonder how much extra headroom that will mean, but given process optimization on top of this, an extra ~300MHz doesn't seem too bad considering they've been getting really close to 4.5GHz turbo on all cores recently for single-CCX configs.

Although that turbo for the "best" core is kind of shady as normal tasks get put in core 0 =/

Cheers!
 

cdrkf

Distinguished
Although that turbo for the "best" core is kind of shady as normal tasks get put in core 0 =/

Cheers!
That isn't true - there was a recent article going over how Windows assigns tasks to cores: the latest version of Windows 10 reads the core performance values from the CPU and assigns single-thread tasks accordingly.

Where there was a bit of confusion with regard to Zen 2 is that AMD Ryzen Master reports the single fastest core, however Windows prioritises cores based on the fastest pair of cores in one CCX and then juggles the load between them (which helps prevent localised heat build up resulting in higher average clocks over a prolonged period). The issue was that quite often one CCX has one very fast core with a few slower ones, whereas another CCX has two moderately fast cores so Windows will target the latter if the predicted average clock speed is higher than the 'fast' core.

TL;DR: for single-thread tasks the Windows 10 scheduler should target the fast cores on the CPU. That does require all the latest updates and the official AMD chipset drivers installed (and running on the Ryzen power plan).
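The pair-of-cores preference described above can be sketched as a toy model; every clock value here is invented for illustration, not a real Zen 2 bin:

```python
# Toy model of the scheduler behaviour described above: the CCX whose best
# *pair* of cores has the highest average predicted clock wins, even if
# another CCX holds the single fastest core. All MHz values are made up.
ccx0 = [4650, 4300, 4250, 4200]  # one very fast core, the rest slower
ccx1 = [4500, 4475, 4300, 4250]  # two moderately fast cores

def best_pair_avg(ccx):
    """Average clock of the two fastest cores in a CCX."""
    top_two = sorted(ccx, reverse=True)[:2]
    return sum(top_two) / 2

# ccx0's best pair averages 4475 MHz, ccx1's averages 4487.5 MHz, so the
# scheduler would juggle a single-thread load between ccx1's two fast
# cores even though ccx0 has the single highest-binned core.
target = max((ccx0, ccx1), key=best_pair_avg)
```

This also shows why juggling between two similar cores can beat parking on one hot core: the pair average holds up over time while a single core's boost sags.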
 
Dec 27, 2019
While it's true that Windows tries to put ST applications on the fastest cores, some apps override this and manually pin themselves to core 0. CPU-Z is the first that comes to mind.

Lol, someone on Reddit said an 8700K never bottlenecks a 2080 Ti; I showed him like 5 games where even a 9900K at 5.0GHz does (1440p 144Hz), haha. They are old games though: Crysis, the Age of Empires 1 and 2 remakes, Fallout 4, GTA 5. It will only get worse with the 3080 Ti and Big Navi.

ST performance will always be important for games.
 

cdrkf

Distinguished
While it's true that Windows tries to put ST applications on the fastest cores, some apps override this and manually pin themselves to core 0. CPU-Z is the first that comes to mind.

Lol, someone on Reddit said an 8700K never bottlenecks a 2080 Ti; I showed him like 5 games where even a 9900K at 5.0GHz does (1440p 144Hz), haha. They are old games though: Crysis, the Age of Empires 1 and 2 remakes, Fallout 4, GTA 5. It will only get worse with the 3080 Ti and Big Navi.

ST performance will always be important for games.
The thing is, though, with all those ST-limited games: if you overclock the 8700K to 5GHz it will match the 9900K (as will a 7700K at 5GHz, and even a 6700K if you can push it that far). They are all built on the Skylake core; the only thing that would slow the older parts down vis-à-vis the 9900K is the recent security mitigation patches (which you can disable).
 
Dec 27, 2019
The thing is, though, with all those ST-limited games: if you overclock the 8700K to 5GHz it will match the 9900K (as will a 7700K at 5GHz, and even a 6700K if you can push it that far). They are all built on the Skylake core; the only thing that would slow the older parts down vis-à-vis the 9900K is the recent security mitigation patches (which you can disable).
Well, yeah, but it was funnier picking Intel's best CPU and proving it can easily bottleneck a 2080 Ti in older games even at 1440p 144Hz. In fact, even at 4K, Crysis and Age get bottlenecked, haha.
 
While it's true that Windows tries to put ST applications on the fastest cores, some apps override this and manually pin themselves to core 0. CPU-Z is the first that comes to mind.
Bethesda does this, and I've called them out on it in the past. The application really shouldn't be making assumptions about CPU topology; what if we eventually head back to fast single-core CPUs (quantum CPUs?). Oh wait, your app forces a thread onto core 1? Well, that's not running again any time soon.

The only time manually overriding the OS makes sense is on embedded systems, and even then I shy away from doing so unless it's truly necessary.
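For what it's worth, the kind of hard pinning being criticized here is a one-liner; a sketch on Linux (a Windows build would call SetThreadAffinityMask instead):

```python
import os

def pin_to_core(core_id):
    """Hard-pin the calling process to a single core -- the kind of
    baked-in assumption about CPU topology being criticized above.
    Linux-only: os.sched_setaffinity is not available on Windows."""
    os.sched_setaffinity(0, {core_id})

def usable_cores():
    """The safer default: ask the OS which cores we may run on and leave
    placement to its scheduler, which knows the per-core boost rankings."""
    return os.sched_getaffinity(0)
```

If the hard-coded core ever stops being the fast one (or stops existing), the pinned app is stuck with whatever it asked for, which is exactly the failure mode described above.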
 

Eximo

Titan
Ambassador
Quantum has very specific use cases; general computation isn't really one of them. There are certain tasks that standard CPUs are 'slow' at, relatively speaking. When quantum chips do make it into regular PCs, they'll be used as accelerators and for encryption/decryption.

It'll probably all be cloud-based anyway. I don't think anyone is going to figure out how to make them portable or suitable for home use for a good long while; we need some serious breakthroughs in materials science first.
 
Quantum has very specific use cases; general computation isn't really one of them. There are certain tasks that standard CPUs are 'slow' at, relatively speaking. When quantum chips do make it into regular PCs, they'll be used as accelerators and for encryption/decryption.

It'll probably all be cloud-based anyway. I don't think anyone is going to figure out how to make them portable or suitable for home use for a good long while; we need some serious breakthroughs in materials science first.
Using it more as an example of how things can change.

As for cloud-based processing, the problem you are always going to have is latency; 32ms of latency is simply too slow, and that's without even considering the processing delay itself. Given that games want 16ms (and most of us want lower latency than that), this approach is a no-go. Never mind that there won't be an equal distribution of servers; some regions would get royally shafted due to server locations.

[On this point, I remember back when I was doing semi-pro gaming the discussions my group had on server locations (back when dedicated servers were a thing). We eventually chose Dallas as the "least bad" for all parties, but server location absolutely mattered for some people.]
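The 16ms figure above falls straight out of the display refresh rate; a quick sanity check on the arithmetic:

```python
def frame_budget_ms(refresh_hz):
    """Time available to produce one frame at a given refresh rate."""
    return 1000.0 / refresh_hz

# 60 Hz leaves ~16.7 ms per frame; a 32 ms network round trip alone
# already burns roughly two whole frames before any processing happens.
budget_60 = frame_budget_ms(60)   # ~16.67 ms
frames_lost = 32 / budget_60      # ~1.9 frames of pure latency
```

At 144Hz the budget shrinks to ~6.9ms, which makes the cloud round trip look even worse.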
 

Eximo

Titan
Ambassador
Using it more as an example of how things can change.

As for cloud-based processing, the problem you are always going to have is latency; 32ms of latency is simply too slow, and that's without even considering the processing delay itself. Given that games want 16ms (and most of us want lower latency than that), this approach is a no-go. Never mind that there won't be an equal distribution of servers; some regions would get royally shafted due to server locations.

[On this point, I remember back when I was doing semi-pro gaming the discussions my group had on server locations (back when dedicated servers were a thing). We eventually chose Dallas as the "least bad" for all parties, but server location absolutely mattered for some people.]
I wasn't referring to general cloud computing, specifically quantum, due to the technical difficulties of making it a household item. Latency becomes less of an issue if the computation in question is solved in less time than that latency represents.

For the type of calculations you might do with quantum, I'm thinking games would not be a consideration. You might have quantum circuits running on the broader internet or between large organizations for security and for general prediction and search functions. Keep in mind that quantum is good at finding answers to questions with unknowns, and not very good at straight computation where all variables are known.

You can certainly brute-force quantum algorithms using standard computing, but right now that means megawatts of power. That's where a lot of the actual mathematical research is being done, since quantum computers aren't super reliable at the moment.
 
Dec 27, 2019
Zen 2's mem controller isn't bad
It's extremely poor in latency: I have 63ns on my Zen 2 chip, and with the same memory I would easily get 38ns on Intel (roughly 40% lower latency). That's a MASSIVE difference. Their memory controller is only good for bandwidth and nothing else, and a lot of applications benefit from low-latency memory, games for example.

1usmus and The Stilt both say AMD is using very old memory controllers. Another piece of evidence is that CL14 barely does anything at 3600MHz, with the memory controller just not able to take advantage of it. I get around 65ns at CL16 vs 63ns at CL14, for example.

Not to mention that in terms of pure brute-forcing it can't handle high speeds like Intel's can either; Buildzoid tried.

People have tried everything too, and the best I've seen over 1000+ pages of material on the memory overclocking forum is 61ns, and that is with a 3950X at CL14 and 3800MHz. Not sure how low Intel's can get, but I know it's far ahead in terms of latency.

Edit: AMD knows this too, which is why they threw a massive L3 cache at Zen 2 to hide the memory controller latency.
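To put numbers on the gap being described (using the 63ns and 38ns figures from the post), the same difference can be quoted two ways:

```python
# Working out the 63 ns vs 38 ns memory latency gap from the post above.
amd_ns, intel_ns = 63.0, 38.0

# Intel's latency is ~40% lower than AMD's...
intel_reduction_pct = (amd_ns - intel_ns) / amd_ns * 100    # ~39.7%

# ...which is the same gap expressed as AMD's being ~66% higher.
amd_overhead_pct = (amd_ns - intel_ns) / intel_ns * 100     # ~65.8%
```

Either way it is a large gap, just not "75% less": which figure you quote depends on which latency you use as the baseline.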
 
Dec 27, 2019
The reason for the latency is probably the chiplet design. You have to realize the IMC isn't part of the same die as the CPU cores.
This is only a very small part of the issue, as latency on Zen 1 and Zen+ was only slightly better, by around 5ns. Latency on AMD's memory controllers has always been behind Intel's, since '08 or so. AMD designed their older memory controllers for server use and little else, which meant maximizing bandwidth; however, as we can tell from Intel, not only can we get high-IPC cores with high frequency, we can also get high-bandwidth memory controllers with low latency (though again, I will stress that AMD's controller is superior in bandwidth to Intel's at the same frequency).

It seems that with those patents AMD is updating their memory controller, which is way past due, and it also appears that they are improving core frequency too.

Happy to see AMD so determined to keep improving Zen's weaknesses instead of focusing primarily on its strengths.


Basically, it doesn't even seem like the same company from the FX days; this is a whole new AMD. Hopefully I can soon say the same about their Radeon division.
 

JaSoN_cRuZe

Reputable
Mar 5, 2017
It seems Zen 3 will come with a long list of supported motherboards, ranging from the lower-end B550 to the upper-end X670 chipsets.

I think AMD should rename the B550 to B650 now, as it does not make sense to stay on the 500 series in 2020.

As for Zen 3 clocks, AMD will boost up to +200MHz higher across the product lineup, along with some better IPC optimizations; it might be a double-digit improvement.

Coming to Intel, they have enabled HT across their lineup, though core counts may still come at a premium between tiers; based on leaks they will boost up to 5.2GHz on higher-end chips, but will need some serious cooling.
 
Dec 27, 2019
It seems Zen 3 will come with a long list of supported motherboards, ranging from the lower-end B550 to the upper-end X670 chipsets.

I think AMD should rename the B550 to B650 now, as it does not make sense to stay on the 500 series in 2020.

As for Zen 3 clocks, AMD will boost up to +200MHz higher across the product lineup, along with some better IPC optimizations; it might be a double-digit improvement.

Coming to Intel, they have enabled HT across their lineup, though core counts may still come at a premium between tiers; based on leaks they will boost up to 5.2GHz on higher-end chips, but will need some serious cooling.
Don't forget it will most likely be supported on B350, X370, B450, and X470 as well with a simple BIOS update.
 
Dec 27, 2019
Has that been confirmed? How much of a performance hit will occur if you can’t attain the memory frequency on the older boards with Zen 3?
I haven't personally seen this on older boards, as the board barely even matters anymore unless it's junk; my X370 board went from 3066MHz on my R7 1700 to 3466MHz on my 2700X, and now the same kit of memory hits 3800MHz at CL14, perfectly stable for 10 hours in Memtest.


Precision Boost is now built into the CPU and the board really has nothing to do with it; the only thing stopping older boards might be a limited BIOS chip on some of them.

Boards really are dumb parts nowadays (the last 5+ years), with the northbridge, southbridge, and memory controller built into the CPU. When looking for a board, get one with a VRM that will last and with everything you want on it, or more importantly need.
 
That isn't true - there was a recent article going over how Windows assigns tasks to cores: the latest version of Windows 10 reads the core performance values from the CPU and assigns single-thread tasks accordingly.

Where there was a bit of confusion with regard to Zen 2 is that AMD Ryzen Master reports the single fastest core, however Windows prioritises cores based on the fastest pair of cores in one CCX and then juggles the load between them (which helps prevent localised heat build up resulting in higher average clocks over a prolonged period). The issue was that quite often one CCX has one very fast core with a few slower ones, whereas another CCX has two moderately fast cores so Windows will target the latter if the predicted average clock speed is higher than the 'fast' core.

TL;DR: for single-thread tasks the Windows 10 scheduler should target the fast cores on the CPU. That does require all the latest updates and the official AMD chipset drivers installed (and running on the Ryzen power plan).
I remember reading about that, but it's not a sure-fire (nor portable) way of doing it... It should be at the BIOS level, or even exposed by the CPU itself (the way the TLB is exposed).

But fair point nonetheless. I will stick with my "shady" impression of it until I see that it's a non-issue going forward (not that the difference is massive anyway). How does Linux's power governor behave there? Does anyone know?

--

Zen 3 bringing noticeable differences in their IMC is also something interesting to see. Maybe their move to unified L3 cache has to do with the IMC changes? Depending on what they're trying to do with the IMC, changing the cache topology is not that weird. Although one doesn't imply the other... Uhm... Random thoughts alright...

In any case, an improvement is always welcome.

Cheers!
 

MasterMadBones

Distinguished
Dec 26, 2012
Zen 3 bringing noticeable differences in their IMC is also something interesting to see. Maybe their move to unified L3 cache has to do with the IMC changes? Depending on what they're trying to do with the IMC, changing the cache topology is not that weird. Although one doesn't imply the other... Uhm... Random thoughts alright...
What you're saying is pretty close to the truth. Most of the changes AMD has made to the memory controller in Zen 2 were intended to reduce the penalty of having the cores further away from DRAM. The increased L3 cache size was also a result of the chiplet strategy.

With the unification of L3 cache (the new 8-core CCX), there is the obvious benefit that the core has access to more cache and that communication between cores has to go through DRAM less frequently. The downside is that L3 latency will increase once again. The changes to the IMC and, to a certain degree, the uOP cache, are in part intended to alleviate the increased L3 latency by reducing the number of L1/L2 misses and reducing the L3 miss penalty.
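The trade-off described here (a bigger, slower L3 against fewer trips to DRAM) can be illustrated with the classic average-memory-access-time formula; every cycle count and miss rate below is invented for illustration, not a real Zen figure:

```python
def amat(l1_hit, l1_miss_rate, l2_hit, l2_miss_rate,
         l3_hit, l3_miss_rate, dram):
    """Average memory access time (cycles): each level's hit cost plus the
    probability-weighted cost of falling through to the next level."""
    return l1_hit + l1_miss_rate * (
        l2_hit + l2_miss_rate * (l3_hit + l3_miss_rate * dram))

# A smaller, faster L3 with a higher miss rate...
small_l3 = amat(4, 0.10, 12, 0.50, 35, 0.40, 300)   # 12.95 cycles
# ...vs a unified L3: higher hit latency, but far fewer DRAM trips.
big_l3 = amat(4, 0.10, 12, 0.50, 45, 0.25, 300)     # 11.20 cycles
```

With these made-up numbers the unified cache wins on average despite its higher hit latency, which is the same argument the post makes: cut L3 misses (and the miss penalty via the IMC) and the extra L3 latency is paid back.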
 
