[SOLVED] Questioning the Ryzen 3000 Series, 12 Cores/16 Cores (7nm)? FX Stunt?

Page 5 - Seeking answers? Join the Tom's Hardware community: where nearly two million members share solutions and discuss the latest tech.
Status
Not open for further replies.

valeman2012

Distinguished
Apr 10, 2012
1,272
11
19,315
[Image: 19-113-102-V02.jpg]

The Ryzen 3000 series with 12 cores/24 threads or 16 cores/32 threads? That's all I've been seeing... all the computer news articles seem to be covering it. Is it true, or is it going to be 8 cores/16 threads again?
 
Last edited:
Solution
Uhhh... It looks like AMD's CS:GO 9900K number pretty much perfectly matches the 1080p high AVG in your chart.
Meanwhile 1080p medium runs 100 FPS faster. Do you have any understanding of what benchmarks are?
If Zen 2 can't hit the same 500 FPS mark, it's not "just as fast"; it's a 25% difference in speed.

rigg42

Respectable
Oct 17, 2018
639
233
2,390
Recent leaks on Zen 2 suggest a 16-core sample with a 3.3 GHz base and 4.2 GHz boost clock was found. It seems that the increase in core count throughout the Zen+ lineup is real, and based on the leaks the only variables seem to be the clock speeds and the IPC increase over Zen+.

Zen2 ES 16 Core
Base clock 3.3 GHz
Boost clock 4.2 GHz
MB X570
This CPU name can't be decoded with the decode chart
PS (a screenshot may be uploaded later)
— APISAK (@TUM_APISAK) May 9, 2019
I wouldn't read too much into clock speeds with engineering samples. You can't say how old that sample is. Zen gained about 700 MHz over some of its engineering samples. There are also Zen 2 samples hitting 4.5 GHz. Just wait and see. Nothing to lose sleep over.
 

InvalidError

Titan
Moderator
I wouldn't read too much into clock speeds with engineering samples.
I wouldn't be too optimistic about those rumored 5GHz clocks either. When TSMC says 7nm provides 20% more performance than 10nm, that's based on scaling the exact same product (ARM A72 cores) down. For AMD to deliver a 10% IPC increase, it must have added a lot of extra resources to the scheduler and execution units, which means a large chunk of that 20% same-to-same timing headroom went into extra logic (such as deeper re-order queues, extra scheduling queues and matching execution ports) instead. If you add two queues and execution units to an already 10-wide architecture, that's at least 20% extra complexity in the scheduler and related resources right there, more once you add dependency resolution and arbitration logic.

Of course, TSMC's 10nm is itself ~15% better than its own 12nm, so that's an extra 15% for AMD to play with.

My bet is AMD will be putting the bulk of that ~38% 12nm-to-7nm process improvement into IPC and we'll get 300-400MHz higher clocks across most of the board along the way. For people who think it is stupid for AMD to put so much more pressure on IPC than clock, keep in mind that those chiplets are designed for EPYC first, mainstream gets the scraps. It would make sense for AMD to heavily favor the most power-efficient way of increasing system throughput (more cores and IPC at lower clocks and voltages) instead of per-core throughput.
 

valeman2012

Distinguished
Apr 10, 2012
1,272
11
19,315
Recent leaks on Zen 2 suggest a 16-core sample with a 3.3 GHz base and 4.2 GHz boost clock was found. It seems that the increase in core count throughout the Zen+ lineup is real, and based on the leaks the only variables seem to be the clock speeds and the IPC increase over Zen+.

Zen2 ES 16 Core
Base clock 3.3 GHz
Boost clock 4.2 GHz
MB X570
This CPU name can't be decoded with the decode chart
PS (a screenshot may be uploaded later)
— APISAK (@TUM_APISAK) May 9, 2019
I assume it's more that they're trying to predict future trends. Increased core count isn't a reason to buy it if people either aren't multi-tasking more, or software isn't multi-threading.

But, for example, look at the number of people on these boards who've asked about what kind of setup they should have if they want to stream while gaming.

Back around 2000, Intel was betting on clock speed being king. That bet didn't pan out. I'm guessing AMD is betting that more and more things will go multi-threaded, or that heavy multi-tasking is going to be more common.
I wouldn't read too much into clock speeds with engineering samples. You can't say how old that sample is. Zen gained about 700 MHz over some of its engineering samples. There are also Zen 2 samples hitting 4.5 GHz. Just wait and see. Nothing to lose sleep over.
Ryzen 3000 is probably going to have no IPC increase, or something like that.
 
I assume it's more that they're trying to predict future trends. Increased core count isn't a reason to buy it if people either aren't multi-tasking more, or software isn't multi-threading.

But, for example, look at the number of people on these boards who've asked about what kind of setup they should have if they want to stream while gaming.

Back around 2000, Intel was betting on clock speed being king. That bet didn't pan out. I'm guessing AMD is betting that more and more things will go multi-threaded, or that heavy multi-tasking is going to be more common.
Multithreaded doesn't mean that you need more cores to run it just because.
Each workload needs a certain amount of compute to complete in an allotted amount of time, and nobody, including the software itself, cares about where this compute is coming from. If you can make a single core fast enough, that's OK; if you need 8 very slow cores, that's fine as well. If you can stream 4K/60 FPS x264/x265 with your iGPU, that's more than OK, and that's what people do now; you don't need to stream with CPU cores, and you don't need a lot of cores for multithreading if each core is very capable.
You don't need a lot of compute, period. It's a luxury, just as a very fast core (5+ GHz) is a luxury.
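To put numbers on that "compute budget" claim, here is a toy Python sketch. It assumes the workload parallelizes perfectly (which is the poster's premise, not a given), and all figures are invented:

```python
# Toy model: a workload needs a fixed amount of compute inside a deadline,
# and (assuming perfect scaling) only aggregate throughput matters,
# not how it is divided among cores.

def meets_deadline(work_units, cores, units_per_core_per_ms, deadline_ms):
    """True if `cores` cores can finish `work_units` before the deadline."""
    aggregate = cores * units_per_core_per_ms
    return work_units <= aggregate * deadline_ms

# One fast core vs. eight slow cores with the same total throughput:
print(meets_deadline(100, cores=1, units_per_core_per_ms=8, deadline_ms=16.7))  # True
print(meets_deadline(100, cores=8, units_per_core_per_ms=1, deadline_ms=16.7))  # True
```

Under that assumption the two configurations are interchangeable; the next posts argue about exactly when that assumption breaks down.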
 

InvalidError

Titan
Moderator
Each workload needs a certain amount of compute to complete in an allotted amount of time, and nobody, including the software itself, cares about where this compute is coming from
Software developers and end-users of said software do care: many algorithm classes don't scale well with thread count and fare much better on faster individual cores. Just about everything that involves parsing doesn't work multi-threaded, because how each byte gets interpreted depends on previously established context; that's why things like compiling code are done with object-level parallelism instead of trying to make the compiler itself multi-threaded. Core game logic is in a similar boat, since so many things with inter-dependencies need to happen in a specific order for the game to work properly. You can offload some work to threads, but a significant chunk still remains intrinsically sequential and ultimately limited by single-thread performance.

It may not matter as much as it used to, but it still does, and it still will in the future: unless developers achieve perfect multi-core scaling, fewer, faster cores will remain preferable where peak performance is the primary concern.
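The parsing point above can be illustrated with a toy Python sketch (the grammar and `parse_unit` function are invented for illustration): each "translation unit" must be parsed strictly sequentially, because a character's meaning depends on context built from everything before it, but independent units can be handled in parallel, which is object-level parallelism.

```python
# Each unit is parsed sequentially: the same character counts as a token
# only at nesting depth 0, so byte N's meaning depends on bytes 0..N-1.
# Parallelism happens across units, never inside one parse.
from concurrent.futures import ThreadPoolExecutor

def parse_unit(source: str) -> int:
    depth = 0        # running context: are we inside parentheses?
    tokens = 0
    for ch in source:
        if ch == '(':
            depth += 1
        elif ch == ')':
            depth -= 1
        elif depth == 0:   # interpretation depends on accumulated context
            tokens += 1
    return tokens

sources = ["ab(cd)ef", "(x)(y)z", "plain"]
with ThreadPoolExecutor() as pool:                 # parallel across units,
    results = list(pool.map(parse_unit, sources))  # sequential within one
print(results)  # [4, 1, 5]
```

This mirrors how build systems compile many object files concurrently while each individual compile remains largely single-threaded.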
 
If shrinking a node in half didn't yield any IPC increase or any benefits at all, CPU manufacturers wouldn't spend millions in R&D and equipment retooling to shrink a node. You are a troll and I really don't feel like wasting my time here.

There WILL be an increase in IPC, it's just a question of how much. We saw an 8-core Ryzen at CES rivaling a 9900K with a score that would demolish an 8-core 2700X, so clocks and/or IPC must be up from the 2nd generation, since thread count is the same and performance is up by a good bit.
 
Last edited:

InvalidError

Titan
Moderator
If shrinking a node in half didn't yield any IPC increase or any benefits at all, CPU manufacturers wouldn't spend millions in R&D and equipment retooling to shrink a node.
An optical shrink has no effect on IPC, you need architectural changes for that. A shrink does give the CPU designer extra headroom to cram extra transistors at strategic places in the architecture (mainly a wider scheduler with extra execution units) to increase IPC without sacrificing clock frequency or ballooning TDP.
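As a back-of-the-envelope illustration of that point: per-core performance is roughly IPC × clock, and the transistor/power budget a shrink frees up can be spent on either factor. A sketch with purely illustrative numbers:

```python
# Rough model: per-core performance ~ IPC x clock. A shrink by itself
# raises neither number; it gives the designer budget to widen the core
# (IPC) or push frequency. Either 10% spend yields the same throughput.

def relative_perf(ipc: float, clock_ghz: float) -> float:
    return ipc * clock_ghz

baseline = relative_perf(ipc=1.00, clock_ghz=4.0)
wider    = relative_perf(ipc=1.10, clock_ghz=4.0)   # budget spent on IPC
faster   = relative_perf(ipc=1.00, clock_ghz=4.4)   # budget spent on clock

print(wider / baseline, faster / baseline)  # both ~1.10x
```

The trade-off discussed in this thread is which spend is more power-efficient, not which is theoretically faster.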
 
You don't have to get my logic. I just feel better this way. I am not affected by any means, because I've lost all of 10 FPS since I swapped my CPU. I feel better knowing I can use background apps while gaming without having to Alt-Tab just to say something on Discord. That's all. I'll take -10 FPS for much better multitasking.
I would also suggest you look at your min FPS (or dips) when running stuff alongside games. Intel's 4-core parts (all gens, HT included) stutter a lot at times, and it's something most Intel hard-fans will not acknowledge. Ryzen's surplus of cores helps tremendously in this regard (and you can see it in some benchies with min-FPS being displayed). Only higher-core-count Intel parts get rid of the problem.

Cheers!
 
Last edited:
  • Like
Reactions: 99StefanRO
An optical shrink has no effect on IPC, you need architectural changes for that. A shrink does give the CPU designer extra headroom to cram extra transistors at strategic places in the architecture (mainly a wider scheduler with extra execution units) to increase IPC without sacrificing clock frequency or ballooning TDP.
Theoretically speaking, you are correct. In reality, you HAVE to change the architecture slightly to accommodate the different process. That's a fact and not theory. You can go back and check Sandy vs Ivy. It was the same underlying uArch, but with tweaks anyway because why not.

That's why filling our mouths with "BUT MAH IPC NUMBASSSS!!!11!!" is moot. It's just better to talk about "single core performance" as under normal conditions, you no longer have regular clocks and Turbo has made this comparison/talk moot. Is IPC still important? Absolutely. Important in our context? Nope. Not anymore.

Also, the engineering sample numbers are looking really good. So good, in fact, that AMD may hit Intel hard. We all win if that's the case :D

Cheers!
 
Software developers and end-users of said software do care: many algorithm classes don't scale well with thread count and fare much better on faster individual cores. Just about everything that involves parsing doesn't work multi-threaded, because how each byte gets interpreted depends on previously established context; that's why things like compiling code are done with object-level parallelism instead of trying to make the compiler itself multi-threaded. Core game logic is in a similar boat, since so many things with inter-dependencies need to happen in a specific order for the game to work properly. You can offload some work to threads, but a significant chunk still remains intrinsically sequential and ultimately limited by single-thread performance.

It may not matter as much as it used to, but it still does, and it still will in the future: unless developers achieve perfect multi-core scaling, fewer, faster cores will remain preferable where peak performance is the primary concern.
Wow, how did "Multithreaded doesn't mean that you need more cores to run it just because." not make it clear that I was not talking about single-core?
 
I would also suggest you look at your min FPS (or dips) when running stuff alongside games. Intel's 4-core parts (all gens, HT included) stutter a lot at times, and it's something most Intel hard-fans will not acknowledge. Ryzen's surplus of cores helps tremendously in this regard (and you can see it in some benchies with min-FPS being displayed). Only higher-core-count Intel parts get rid of the problem.

Cheers!
Because stutter is completely random and has nothing to do with how many cores you have. Look at this: the stock 9600K has a 0.1% min of 58.7, while the same CPU overclocked to 5.2 GHz drops to 18.8... it's entirely down to badly coded games, not the number of cores.
All the results with low 0.1% minimums look completely random if you examine them.
Just look at the 2990WX at stock (Creators mode) and tell us again how a surplus of cores prevents low minimums.
[Chart: intel-i7-2600k-2018-bench_fc5_1080p.png]
 
  • Like
Reactions: valeman2012

InvalidError

Titan
Moderator
Wow, how did "Multithreaded doesn't mean that you need more cores to run it just because." not make it clear that I was not talking about single-core?
You wrote: "nobody including the software itself cares about where this compute is coming from"

Where the compute is coming from DOES matter, since things like a game's core control thread are intrinsically serial. If that thread can only iterate 100 times per second on a given CPU, then that's a 100 FPS ceiling regardless of how heavily threaded the rest of the game is and however many spare cores are available to run those threads.
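That ceiling is Amdahl's law applied to the control thread. A small Python sketch with invented frame times makes it concrete:

```python
# If a game's control loop needs `serial_ms` of one core's time per frame,
# no number of extra cores can push FPS past 1000 / serial_ms.

def fps_ceiling(serial_ms_per_frame: float) -> float:
    return 1000.0 / serial_ms_per_frame

def fps(serial_ms: float, parallel_ms: float, cores: int) -> float:
    # Parallelizable work splits across cores; the serial part cannot.
    frame_time = serial_ms + parallel_ms / cores
    return 1000.0 / frame_time

print(fps_ceiling(10.0))    # 100.0 FPS hard ceiling
print(fps(10.0, 20.0, 4))   # ~66.7 FPS with 4 cores
print(fps(10.0, 20.0, 64))  # ~97 FPS: still below the ceiling
```

Adding cores closes in on the ceiling asymptotically; only a faster core (or a leaner control loop) can raise it.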
 
Because stutter is completely random and has nothing to do with how many cores you have. Look at this: the stock 9600K has a 0.1% min of 58.7, while the same CPU overclocked to 5.2 GHz drops to 18.8... it's entirely down to badly coded games, not the number of cores.
All the results with low 0.1% minimums look completely random if you examine them.
Just look at the 2990WX at stock (Creators mode) and tell us again how a surplus of cores prevents low minimums.
[Chart: intel-i7-2600k-2018-bench_fc5_1080p.png]
You're completely missing the point... Look at the min-FPS difference between the 9600K and the 8700K (which can be treated as an 8C-equivalent CPU). This is not even Intel vs AMD; this is pure core count and how it affects "dips" in FPS. Yes, AMD has its own problems with Threadripper, but that CPU's main purpose is not gaming, so it's a moot comparison, especially when you can do core parking.

So, the rule is simple: the more software you're running in the background, the BIGGER the impact on those dips and the more frequent they'll be. If you can't see that, or understand that, I won't push the topic for you.

Also, 99% of benchmarks don't have anything else running when benching (like browsers, chat programs, music stuff, etc).

Cheers!
 

valeman2012

Distinguished
Apr 10, 2012
1,272
11
19,315
You're completely missing the point... Look at the min-FPS difference between the 9600K and the 8700K (which can be treated as an 8C-equivalent CPU). This is not even Intel vs AMD; this is pure core count and how it affects "dips" in FPS. Yes, AMD has its own problems with Threadripper, but that CPU's main purpose is not gaming, so it's a moot comparison, especially when you can do core parking.

So, the rule is simple: the more software you're running in the background, the BIGGER the impact on those dips and the more frequent they'll be. If you can't see that, or understand that, I won't push the topic for you.

Also, 99% of benchmarks don't have anything else running when benching (like browsers, chat programs, music stuff, etc).

Cheers!
Cinebench does not determine that an AMD Ryzen CPU is better than Intel.
Gamers are looking for better gaming performance, not "better Cinebench scores".

Nothing but a big facepalm to AMD fans.
🤦‍♂️
 
Last edited:
3rd-gen Ryzen with 8 cores/16 threads ties a 9900K in Cinebench.
Better Cinebench scores at the same thread count indicate better performance, meaning single-thread performance is better.
Gamers are looking for better gaming performance, which is roughly indicated by a better Cinebench score at the same thread count.
 
You're completely missing the point... Look at the min-FPS difference between the 9600K and the 8700K (which can be treated as an 8C-equivalent CPU). This is not even Intel vs AMD; this is pure core count and how it affects "dips" in FPS. Yes, AMD has its own problems with Threadripper, but that CPU's main purpose is not gaming, so it's a moot comparison, especially when you can do core parking.

So, the rule is simple: the more software you're running in the background, the BIGGER the impact on those dips and the more frequent they'll be. If you can't see that, or understand that, I won't push the topic for you.

Also, 99% of benchmarks don't have anything else running when benching (like browsers, chat programs, music stuff, etc).

Cheers!
You are completely missing the point. If, as you said, 99% of benchmarks don't have anything else running when benching, and you still get nonsense minimums like an overclocked CPU getting hugely lower minimums than the same CPU not overclocked, then it's not the CPU's problem, and running background apps will not change that.
 
  • Like
Reactions: valeman2012
You wrote: "nobody including the software itself cares about where this compute is coming from"

Where the compute is coming from DOES matter since things like a game's core control thread is intrinsically serial. If that thread can only iterate 100 times per second on a given CPU, then that's a 100FPS ceiling regardless of how heavily threaded the rest of the game is and however many spare cores are available to run them.
No, I wrote "Each workload needs a certain amount of compute to complete in an allotted amount of time" ... "and nobody, including the software itself, cares about where this compute is coming from".
So of course, if the CPU can't provide that amount of compute in the allotted time, then it won't give you the FPS (or performance) you need, and everybody will care about where the problem lies.

But if it can provide the compute needed to complete the workload in the allotted time, then nobody cares where the performance comes from.
 
  • Like
Reactions: valeman2012
3rd-gen Ryzen with 8 cores/16 threads ties a 9900K in Cinebench.
Better Cinebench scores at the same thread count indicate better performance, meaning single-thread performance is better.
Gamers are looking for better gaming performance, which is roughly indicated by a better Cinebench score at the same thread count.
Cinebench scores don't even indicate the level of performance for other rendering tasks; just look at benchmarks of other renderers like POV-Ray, LuxMark, and so on.
 
  • Like
Reactions: valeman2012

valeman2012

Distinguished
Apr 10, 2012
1,272
11
19,315
Cinebench scores don't even indicate the level of performance for other rendering tasks; just look at benchmarks of other renderers like POV-Ray, LuxMark, and so on.
You are completely missing the point. If, as you said, 99% of benchmarks don't have anything else running when benching, and you still get nonsense minimums like an overclocked CPU getting hugely lower minimums than the same CPU not overclocked, then it's not the CPU's problem, and running background apps will not change that.
No, I wrote "Each workload needs a certain amount of compute to complete in an allotted amount of time" ... "and nobody, including the software itself, cares about where this compute is coming from".
So of course, if the CPU can't provide that amount of compute in the allotted time, then it won't give you the FPS (or performance) you need, and everybody will care about where the problem lies.

But if it can provide the compute needed to complete the workload in the allotted time, then nobody cares where the performance comes from.
Improved benchmark software performance.

Yup, many fooled AMD fans in the comment sections of "AMD/Intel-related" articles believe that AMD having higher Cinebench numbers means it's better than Intel in gaming. 🤦‍♂️

Can you please show me 1 gaming benchmark for Intel Processor beating a AMD Ryzen Processor the one with High Cinebench Numbers...

I mean, you've got to dive deep to discuss this "issue with users believing such ridiculous claims".
 
You are completely missing the point. If, as you said, 99% of benchmarks don't have anything else running when benching, and you still get nonsense minimums like an overclocked CPU getting hugely lower minimums than the same CPU not overclocked, then it's not the CPU's problem, and running background apps will not change that.
This reads like you're actually agreeing with me... The OC'ed CPU getting bad min-FPS under benchmarking conditions will behave even WORSE in real-life conditions. Also, that was your own example, so I'm just using your own evidence against your argument. Go fetch more; I know everything you find will just support my point: anything from Intel with 4 cores is not good for today's needs (high-end, mainly), and AMD has an inherent advantage in the mid-range segment thanks to its 6-core offerings. The Ryzen 3K series will just close the gap against Intel's 6c+ models, and I'm not even saying they'll be faster than them, as the price gap is compelling enough for reasonable buyers to be swayed away from buying Intel.

I don't even know why this "discussion" thread hasn't been closed, as the OP is clearly a troll. There's already a properly moderated discussion thread for the new AMD stuff.

Cheers!
 
  • Like
Reactions: Finstar
Apr 19, 2019
13
6
15
Improved benchmark software performance.

Yup, many fooled AMD fans in the comment sections of "AMD/Intel-related" articles believe that AMD having higher Cinebench numbers means it's better than Intel in gaming. 🤦‍♂️

Can you please show me 1 gaming benchmark for Intel Processor beating a AMD Ryzen Processor the one with High Cinebench Numbers...

I mean, you've got to dive deep to discuss this "issue with users believing such ridiculous claims".

Did you have a stroke?

I can show you a "gaming benchmark for Intel Processor beating a AMD Ryzen Processor" - shall we put the 9900K up against the 2200G?

Or are you asking for us to show you an example of an Intel processor beating an AMD Ryzen that has high Cinebench numbers? Then let's put the 9980XE up against the TR 1950X which has "High Cinebench Numbers."
 
  • Like
Reactions: valeman2012
Jun 28, 2018
122
4
715
still nothing official out yet, just website placeholders, speculation, 'leaks' (i.e., someone else's speculation), etc...

It should be an interesting build-up until these processors' assorted releases....

Looking forward to seeing the 9900K challenged or defeated in both Cinebench and in gaming! (which should help with pricing on either/both... we hope!)
Intel's flagship i9-9900K was already defeated by AMD's 8c/16t part in the first-ever 7nm preview last year, with a significantly lower TDP, in CB! The real question is whether the Ryzen 5 36XXY will be able to beat the i9-9900K in ST performance and provide more FPS for half the price or even lower, since it's expected to cost around $200-250 MSRP, as it's going to be a mid-range CPU.
 