The Ryzen 3000 series with 12 cores/24 threads, or 16 cores/32 threads? That's all I've been seeing... every computer news article seems to be following it... is it true, or is it going to be 8 cores/16 threads again?
> Uhhh.... It looks like AMD's CS:GO 9900K number pretty much perfectly matches the 1080p high average in your chart.

Do you have any understanding of what benchmarks are? 1080p medium runs 100 FPS faster.
> Recent leaks on Zen 2 suggest a 16-core sample with a 3.3 GHz base and 4.2 GHz boost clock was found. It seems the increase in core count throughout the lineup is real, and based on the leaks the only variables are clock speed and the IPC increase over Zen+.
>
> Zen 2 ES, 16 cores
> Base clock: 3.3 GHz
> Boost clock: 4.2 GHz
> Motherboard: X570
> This CPU name can't be decoded with the decode chart.
> PS: (a screenshot may be uploaded later)
> — APISAK (@TUM_APISAK) May 9, 2019

I wouldn't read too much into clock speeds on engineering samples. You can't tell how old that sample is. Zen gained about 700 MHz over some of its engineering samples, and there are already Zen 2 samples hitting 4.5 GHz. Just wait and see; nothing to lose sleep over.
> I wouldn't read too much into clock speeds on engineering samples. You can't tell how old that sample is. Zen gained about 700 MHz over some of its engineering samples, and there are already Zen 2 samples hitting 4.5 GHz. Just wait and see; nothing to lose sleep over.

I wouldn't be too optimistic about those rumored 5 GHz clocks either. When TSMC says 7nm provides 20% more performance than 10nm, that's based on scaling the exact same product (Arm A72 cores) down. For AMD to deliver a 10% IPC increase, it must have added a lot of extra resources to the scheduler and execution units, which means a large chunk of that 20% same-to-same timing headroom went into extra logic (deeper re-order queues, extra scheduling queues, and matching execution ports) instead. If you add two queues and execution units to an already 10-wide architecture, that's at least 20% extra complexity in the scheduler and related resources right there, and more once you add dependency resolution and arbitration logic.
I assume it's more that they're trying to predict future trends. Increased core count isn't a reason to buy it if people either aren't multi-tasking more, or software isn't multi-threading.
But, for example, look at the number of people on these boards who've asked about what kind of setup they should have if they want to stream while gaming.
Back around 2000, Intel was betting on clock speed being king. That bet didn't pan out. I'm guessing AMD is betting that more and more software will go multi-threaded, or that heavy multi-tasking will become more common.
> I wouldn't read too much into clock speeds on engineering samples. You can't tell how old that sample is. Zen gained about 700 MHz over some of its engineering samples, and there are already Zen 2 samples hitting 4.5 GHz. Just wait and see; nothing to lose sleep over.

Ryzen 3000 will probably have no IPC increase, something like that.
> I assume it's more that they're trying to predict future trends. Increased core count isn't a reason to buy if people aren't multi-tasking more, or software isn't multi-threaded.
>
> But, for example, look at the number of people on these boards who've asked what kind of setup they should have if they want to stream while gaming.
>
> Back around 2000, Intel was betting on clock speed being king. That bet didn't pan out. I'm guessing AMD is betting that more and more software will go multi-threaded, or that heavy multi-tasking will become more common.

Software being multithreaded doesn't automatically mean it needs more cores to run.
> Each workload needs a certain amount of compute to complete in an allotted amount of time, and nobody, including the software itself, cares where this compute comes from.

Software developers and end users of said software do care: many classes of algorithm don't scale well with thread count and fare much better on faster individual cores. Just about everything that involves parsing doesn't parallelize, because how each byte gets interpreted depends on previously established context; that's why compiling code is done with object-level parallelism (many compiler processes running side by side) instead of trying to make the compiler itself multi-threaded. Core game logic is in a similar boat, since so many inter-dependent things need to happen in a specific order for the game to work properly. You can offload some work to threads, but a significant chunk remains intrinsically sequential and ultimately limited by single-thread performance.
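For the curious, the scaling limit being described here is Amdahl's law. A quick Python sketch (the 30% serial fraction is made up purely for illustration):

```python
def amdahl_speedup(serial_fraction: float, cores: int) -> float:
    """Upper bound on speedup when a fraction of the work is strictly serial."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

# With 30% of the work serial, piling on cores quickly stops helping:
for cores in (4, 8, 16, 32):
    print(cores, round(amdahl_speedup(0.3, cores), 2))
# 4 -> 2.11, 8 -> 2.58, 16 -> 2.91, 32 -> 3.11 (hard ceiling at 1/0.3 = 3.33x)
```

Shaving the serial fraction (i.e., faster individual cores) moves the ceiling itself, which is the point being made about single-thread performance.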
Ryzen 3000 will probably have no IPC increase, something like that.
> If shrinking a node in half didn't yield any IPC increase or any benefits at all, CPU manufacturers wouldn't spend millions on R&D and equipment retooling to shrink a node.

An optical shrink has no effect on IPC; you need architectural changes for that. A shrink does give the CPU designer extra headroom to cram extra transistors into strategic places in the architecture (mainly a wider scheduler with extra execution units) to increase IPC without sacrificing clock frequency or ballooning TDP.
> You don't have to get my logic. I just feel better this way. It doesn't really affect me, since I've only lost about 10 FPS since I swapped my CPU, and I feel better knowing I can use background apps while gaming without having to Alt-Tab just to say something on Discord. That's all. I'll take -10 FPS for much better multitasking.

I would also suggest you look at your min FPS (or dips) when running stuff alongside games. Intel's 4-core parts (all generations, including those with HT) stutter a lot at times, and it's something most Intel die-hards won't acknowledge. Ryzen's surplus of cores helps tremendously here (you can see it in some benchmarks that display min FPS). Only higher-core-count Intel parts get rid of the problem.
> An optical shrink has no effect on IPC; you need architectural changes for that. A shrink does give the CPU designer extra headroom to cram extra transistors into strategic places in the architecture (mainly a wider scheduler with extra execution units) to increase IPC without sacrificing clock frequency or ballooning TDP.

Theoretically speaking, you are correct. In reality, you HAVE to change the architecture slightly to accommodate the different process. That's fact, not theory. Go back and compare Sandy Bridge vs. Ivy Bridge: same underlying uArch, but with tweaks anyway, because why not.
> Software developers and end users of said software do care: many classes of algorithm don't scale well with thread count and fare much better on faster individual cores. Just about everything that involves parsing doesn't parallelize, because how each byte gets interpreted depends on previously established context; that's why compiling code is done with object-level parallelism instead of trying to make the compiler itself multi-threaded. Core game logic is in a similar boat, since so many inter-dependent things need to happen in a specific order for the game to work properly. You can offload some work to threads, but a significant chunk remains intrinsically sequential and ultimately limited by single-thread performance.
>
> It may not matter as much as it used to, but it still does, and it still will in the future: unless developers achieve perfect multi-core scaling, fewer, faster cores will remain preferable where peak performance is the primary concern.

Wow, how did "Multithreaded doesn't mean that you need more cores to run it just because" not make it clear that I was not talking about single-core performance?
> I would also suggest you look at your min FPS (or dips) when running stuff alongside games. Intel's 4-core parts (all generations, including those with HT) stutter a lot at times, and it's something most Intel die-hards won't acknowledge. Ryzen's surplus of cores helps tremendously here (you can see it in some benchmarks that display min FPS). Only higher-core-count Intel parts get rid of the problem.
>
> Cheers!

Because stutter is completely random and has nothing to do with how many cores you have. Look at this: the stock 9600K has a 0.1% minimum of 58.7 FPS, while the same CPU overclocked to 5.2 GHz drops to 18.8... it's entirely down to badly behaved games, not core count.
> Wow, how did "Multithreaded doesn't mean that you need more cores to run it just because" not make it clear that I was not talking about single core?

You wrote: "nobody including the software itself cares about where this compute is coming from".
> Ryzen 3000 will probably have no IPC increase, something like that.

At this point, all they have to do is avoid the IPC losses Intel is taking to resolve all those security flaws, and add half a GHz. Extra cores will be gravy.
> Because stutter is completely random and has nothing to do with how many cores you have. Look at this: the stock 9600K has a 0.1% minimum of 58.7 FPS, while the same CPU overclocked to 5.2 GHz drops to 18.8... it's entirely down to badly behaved games, not core count.
>
> All the results with low 0.1% minimums are completely random if you look at them.
>
> Just look at the 2990WX at stock (Creators mode) and tell us again how a surplus of cores prevents low minimums.

You're completely missing the point... Look at the min-FPS difference between the 9600K and the 8700K (which can be treated as an 8-core-equivalent CPU). This isn't even Intel vs. AMD; it's pure core count and how it affects "dips" in FPS. Yes, AMD has its own problems with Threadripper, but that CPU's main purpose is not gaming, so it's a moot comparison, especially when you can do core parking.
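For context on what those 0.1% figures actually measure: one common method is to sort the per-frame times and report the FPS equivalent of the slowest 0.1% of frames, which is why a handful of hitches can crater the number while the average barely moves. A sketch with made-up frame times (the metric's exact definition varies between reviewers):

```python
def low_percentile_fps(frame_times_ms, pct=0.1):
    """FPS equivalent of the slowest pct% of frames (one common '0.1% low' metric)."""
    worst = sorted(frame_times_ms, reverse=True)  # slowest frames first
    n = max(1, int(len(worst) * pct / 100))
    avg_worst_ms = sum(worst[:n]) / n
    return 1000.0 / avg_worst_ms

# 10,000 smooth 10 ms frames (100 FPS) plus a dozen 50 ms hitches:
frames = [10.0] * 10_000 + [50.0] * 12
print(round(1000 * len(frames) / sum(frames), 1))  # average FPS: ~99.5, barely moved
print(low_percentile_fps(frames))                  # 0.1% low: 20.0, cratered
```

This is also why the 0.1% low is so noisy between runs: a dozen frames out of ten thousand decide it entirely.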
> You're completely missing the point... Look at the min-FPS difference between the 9600K and the 8700K (which can be treated as an 8-core-equivalent CPU). This isn't even Intel vs. AMD; it's pure core count and how it affects "dips" in FPS. Yes, AMD has its own problems with Threadripper, but that CPU's main purpose is not gaming, so it's a moot comparison, especially when you can do core parking.
>
> So, the rule is simple: the more software you run in the background, the BIGGER the impact on those dips and the more frequent they'll be. If you can't see or understand that, I won't push the topic.
>
> Also, 99% of benchmarks don't have anything else running while benching (browsers, chat programs, music apps, etc.).
>
> Cheers!

Cinebench doesn't determine whether an AMD Ryzen CPU is better than an Intel one.
> You're completely missing the point... Look at the min-FPS difference between the 9600K and the 8700K (which can be treated as an 8-core-equivalent CPU). This isn't even Intel vs. AMD; it's pure core count and how it affects "dips" in FPS.
>
> Also, 99% of benchmarks don't have anything else running while benching (browsers, chat programs, music apps, etc.).
>
> Cheers!

You are completely missing the point. If, as you said, 99% of benchmarks have nothing else running while benching, and you still get nonsense minimums, like an overclocked CPU getting hugely lower minimums than the same CPU at stock, then it's not the CPU's problem, and running background apps will not change that.
> You wrote: "nobody including the software itself cares about where this compute is coming from".
>
> Where the compute comes from DOES matter, since things like a game's core control thread are intrinsically serial. If that thread can only iterate 100 times per second on a given CPU, then that's a 100 FPS ceiling regardless of how heavily threaded the rest of the game is and however many spare cores are available to run those threads.

No, I wrote "Each workload needs a certain amount of compute to complete in an allotted amount of time" .... "and nobody, including the software itself, cares about where this compute is coming from".
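The control-thread ceiling is easy to sketch. A toy model in Python, with invented per-frame costs (the 10 ms serial tick and 24 ms of parallel work are illustrative, not from any real game):

```python
SERIAL_TICK_MS = 10.0   # hypothetical core control thread; cannot be split up
WORKER_MS      = 24.0   # hypothetical parallelizable work (physics, AI, audio, ...)

def fps_ceiling(cores: int) -> float:
    """Frame rate when the serial tick gates each frame and workers overlap it."""
    worker_time = WORKER_MS / max(1, cores - 1)  # workers share the other cores
    frame_ms = max(SERIAL_TICK_MS, worker_time)  # frame waits for the slower of the two
    return 1000.0 / frame_ms

for cores in (2, 4, 8, 16):
    print(cores, round(fps_ceiling(cores), 1))
# 2 -> 41.7, 4 -> 100.0, 8 -> 100.0, 16 -> 100.0
```

Past four cores the serial tick dominates, so only a faster core (a smaller `SERIAL_TICK_MS`) raises the ceiling.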
> 3rd gen Ryzen 8 core/16 thread ties a 9900K in Cinebench.
>
> Better Cinebench scores indicate better performance when the thread count is the same, meaning single-thread performance is better.
>
> Gamers are looking for better gaming performance, which is roughly indicated by a better Cinebench score at the same thread count.

Cinebench scores don't even indicate the level of performance for rendering tasks; just look at benchmarks of other renderers like POV-Ray, LuxMark, and so on.
Cinebench scores don't even indicate the level of performance for rendering tasks; just look at benchmarks of other renderers like POV-Ray, LuxMark, and so on.
You are completely missing the point. If, as you said, 99% of benchmarks have nothing else running while benching, and you still get nonsense minimums, like an overclocked CPU getting hugely lower minimums than the same CPU at stock, then it's not the CPU's problem, and running background apps will not change that.
> No, I wrote "Each workload needs a certain amount of compute to complete in an allotted amount of time" .... "and nobody, including the software itself, cares about where this compute is coming from".
>
> So of course, if the CPU can't provide that amount of compute in the allotted time, it won't deliver the FPS (or performance) you need, and everybody will care about where the problem lies.
>
> But if it can provide the compute needed to complete the workload in the allotted time, then nobody cares where the performance comes from.

Improved Benchmark Software Performance.
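The "allotted amount of time" framing amounts to a per-frame compute budget. A small sketch (the 12 ms workload figure is invented for illustration):

```python
def frame_budget_ms(target_fps: float) -> float:
    """Time available per frame at a given target frame rate."""
    return 1000.0 / target_fps

# A workload needing 12 ms of compute per frame fits a 60 FPS target
# (16.7 ms budget) but not a 144 FPS target (6.9 ms budget):
needed_ms = 12.0
for fps in (60, 144):
    budget = frame_budget_ms(fps)
    print(fps, round(budget, 1), "ok" if needed_ms <= budget else "too slow")
```

In the first case nobody cares how the 12 ms is delivered; in the second, where the shortfall comes from suddenly matters very much.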
> You are completely missing the point. If, as you said, 99% of benchmarks have nothing else running while benching, and you still get nonsense minimums, like an overclocked CPU getting hugely lower minimums than the same CPU at stock, then it's not the CPU's problem, and running background apps will not change that.

This reads like you're actually agreeing with me... The overclocked CPU getting bad min-FPS under benchmarking conditions will behave even WORSE under real-life conditions. Also, that was your own example, so I'm just using your own evidence against your argument. Go fetch more; I know everything you find will just support my point: anything from Intel with 4 cores is not good enough for today's needs (at the high end, mainly), and AMD has an inherent advantage in the mid-range segment thanks to its 6-core offerings. The Ryzen 3000 series will just close the gap against Intel's 6-core-plus models, and I'm not even saying they'll be faster, as the price gap is compelling enough to sway reasonable buyers away from Intel.
Improved Benchmark Software Performance.
Yup, plenty of misled AMD fans in the comment sections of AMD/Intel articles believe that AMD having higher Cinebench numbers means it's better than Intel in gaming. 🤦♂️
Can you please show me one gaming benchmark where an Intel processor beats an AMD Ryzen processor, the one with the high Cinebench numbers...

I mean, you've got to dig deep to discuss this "issue of users believing such ridiculous claims".
> Still nothing official out yet, just website placeholders, speculation, 'leaks' (i.e., someone else's speculation), etc...
>
> It should be an interesting build-up until these processors' assorted releases....
>
> Looking forward to seeing the 9900K challenged or defeated in both Cinebench and in gaming! (which should help with pricing on either/both... we hope!)

Intel's flagship i9-9900K was already beaten in Cinebench by AMD's 8c/16t part, at a significantly lower TDP, in the first-ever 7nm preview last year! The real question is whether the Ryzen 5 36XXY will be able to beat the i9-9900K in single-thread performance and provide more FPS for half the price or even less, since it's expected to cost around $200-250 MSRP as a mid-range CPU.