News AMD Unveils 5 Third-Gen Ryzen CPUs, Including 12-Core Flagship

It stands for how many instructions the CPU can execute during a single clock tick. Keyword: single clock tick.
So you're admitting that you have no clue what IPC means?
https://en.wikichip.org/wiki/intel/microarchitectures/coffee_lake#Pipeline
https://en.wikichip.org/wiki/amd/microarchitectures/zen+#Key_changes_from_Zen
https://www.agner.org/optimize/blog/read.php?i=838
That would be Zen+ picking from 10 execution pipes to run up to 6 micro-ops each tick, versus Coffee Lake picking from 8 ports to run up to 4 micro-ops per tick.
But we all know that isn't the performance difference we see in any real app.
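To put numbers on that distinction: measured IPC is just instructions retired divided by cycles elapsed, and real workloads land well under the per-tick ceilings quoted above. A minimal sketch in Python, with placeholder counts rather than benchmark data:

```python
# Measured IPC = instructions retired / clock cycles elapsed.
# The counts below are illustrative placeholders, not benchmark results.

def measured_ipc(instructions_retired: int, clock_cycles: int) -> float:
    """Average instructions executed per clock tick over a run."""
    return instructions_retired / clock_cycles

# Per-tick ceilings as cited in the posts above:
ZEN_PLUS_MAX_UOPS_PER_TICK = 6     # Zen+ can execute up to 6 micro-ops/tick
COFFEE_LAKE_MAX_UOPS_PER_TICK = 4  # Coffee Lake executes up to 4 micro-ops/tick

# Hypothetical run: 2.1 billion instructions over 1.5 billion cycles.
print(measured_ipc(2_100_000_000, 1_500_000_000))  # 1.4 -- far below either ceiling
```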

Look at the gaming loop then on the same page ... the 9900K pulling over 20% more power in lightly threaded tasks ...

The fact that Intel lost the efficiency crown with their lofty 5.0GHz goals isn't debatable. It happened. Blame Intel for fooling all its supporters by vastly changing the way they determine what TDP to put on their chips. As is clear from the power consumption charts I linked, AMD's 105W TDP in reality indicates vastly lower power consumption than Intel's "supposed" 95W TDP ... cat's out of the bag on that one.
Yeah, they use The Witcher 3 for that without having the FPS results for it....
Looking at other sites, though, you use 20% more power to get 30% more FPS, so not really a bad trade-off, is it?!
https://www.eurogamer.net/articles/digitalfoundry-2018-intel-core-i9-9900k-review?page=3
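Just to sanity-check that trade-off, here's the arithmetic in a few lines of Python, using the rough 20%/30% figures from the post above rather than measured data:

```python
# ~20% more power for ~30% more FPS, per the rough figures above.
base_fps, base_watts = 100.0, 100.0      # normalized baseline system
fast_fps = base_fps * 1.30               # +30% frames per second
fast_watts = base_watts * 1.20           # +20% power draw

ratio = (fast_fps / fast_watts) / (base_fps / base_watts)
print(f"FPS per watt vs baseline: {ratio:.3f}x")  # ~1.083x, i.e. ~8% more efficient
```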
 
To make it clear how bad the i9-9900K's TDP rating is: I don't know on what basis Intel even listed it. When stress-tested out of the box, the i9-9900K's power draw goes above 200W with ease. Even for gaming it goes above 150W.

I don't think AMD is going to pull a similar stunt, as they have been trying to do right what Intel is doing wrong.
 

joeblowsmynose

...

Yeah, they use The Witcher 3 for that without having the FPS results for it....
Looking at other sites, though, you use 20% more power to get 30% more FPS, so not really a bad trade-off, is it?!
https://www.eurogamer.net/articles/digitalfoundry-2018-intel-core-i9-9900k-review?page=3

I'll accept that, but it's still the very best-case scenario in one specific workload on a CPU that isn't really priced or designed for "light" usage, and we already know what the other end looks like. The main takeaway is that Intel 8th and 9th gen chips consume a lot of power, more than comparable AMD products, and that really is the result of trying to reach the magic 5GHz number - I'm not saying they shouldn't have done 5GHz (bring 'em if you got 'em), but the result is greatly increased consumption.

And just to be clear on gaming performance: if the CPU isn't bottlenecked, the difference between CPUs is actually ONLY ~1% - as shown at the end of your link (4K) - but I'll accept your above power consumption comment for games, because Tom's also used a CPU-bottlenecked number for that power consumption measurement I mentioned.
 
I'll accept that, but it's still the very best-case scenario in one specific workload on a CPU that isn't really priced or designed for "light" usage, and we already know what the other end looks like.
You brought it up.
Also, that's why Intel offers more CPUs and not just the 9900K; if you only need light usage, you don't get the 9900K.
The main takeaway is that Intel 8th and 9th gen chips consume a lot of power, more than comparable AMD products, and that really is the result of trying to reach the magic 5GHz number - I'm not saying they shouldn't have done 5GHz (bring 'em if you got 'em), but the result is greatly increased consumption.
There are no comparable (Zen+) AMD products to the 9900K. Go ahead and overclock a 2700X to 5GHz all-core and show us the power consumption; if it's still much lower, then cool, awesome, but there are no such numbers to my knowledge.
And just to be clear on gaming performance: if the CPU isn't bottlenecked, the difference between CPUs is actually ONLY ~1% - as shown at the end of your link (4K) - but I'll accept your above power consumption comment for games, because Tom's also used a CPU-bottlenecked number for that power consumption measurement I mentioned.
At 4K you can use a quad core and get the same results; also, power usage would be much lower, since the CPU works much less to achieve the much lower FPS.
 
I think the Ryzen 9 12-core and its surprisingly low TDP is interesting.
The 16-core Ryzen CPU is also interesting. It destroys a 7980XE in the core-hungry Cinebench R15 by about 1000 cb even though the Ryzen CPU has two fewer cores.
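For a rough per-core view of that claim, here's the normalization in Python. The 7980XE score is a placeholder I made up for illustration; only the ~1000 cb gap and the core counts (16 vs 18) come from the post above:

```python
# Per-core normalization. xe_score is a hypothetical placeholder; substitute
# the figure your source reports. Only the ~1000 cb gap and the core counts
# come from the claim above.
xe_score = 3400                  # hypothetical Cinebench R15 score, 18-core 7980XE
ryzen_score = xe_score + 1000    # "destroys it by about 1000 cb"

print(f"7980XE:  {xe_score / 18:.0f} cb per core")     # ~189
print(f"16-core: {ryzen_score / 16:.0f} cb per core")  # ~275
```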
 

joeblowsmynose

You brought it up.
Yes I did, and I ceded that that one point wasn't blowing smoke ... See how I can do that?

Also, that's why Intel offers more CPUs and not just the 9900K; if you only need light usage, you don't get the 9900K.
I agree; thus the "light workload power consumption" is far less relevant, and the full-load power consumption (which is abysmal even on the 9700K) is more relevant, which was the basis for my initial point. We're back full circle to my initial point.

There are no comparable (Zen+) AMD products to the 9900K. Go ahead and overclock a 2700X to 5GHz all-core and show us the power consumption; if it's still much lower, then cool, awesome, but there are no such numbers to my knowledge.
The performance isn't that far off in any real-life workload, and the 9700K (basically equal - a bit more single-threaded, a bit less multi) still pulls a lot more power under full load than the 2700X - go back and check Tom's power consumption charts again if needed.

At 4K you can use a quad core and get the same results ...
All the game results are the same when you don't impose CPU bottlenecks, because the GPU should be the bottleneck or else you're wasting very expensive GPU resources that you paid for. A Pentium and a 6700K game the same on a 1050 Ti at any resolution ... it's not about 4K ... it's about pairing a realistic GPU/CPU combo - if the GPU is bottlenecked, as it should be, the difference mostly comes down to the GPU anyway ... and yes, I'll be reminding all the AMD fanbois of this fact when they start bragging about how Ryzen 3xxx games better than Intel ... :)
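That bottleneck argument reduces to a min() over the two caps; a toy model with invented frame-rate limits makes the shape of it obvious:

```python
# Toy bottleneck model: delivered FPS is capped by whichever component runs
# out of headroom first. All frame-rate caps here are invented for illustration.

def delivered_fps(cpu_cap: float, gpu_cap: float) -> float:
    return min(cpu_cap, gpu_cap)

# GPU-bound (e.g. 4K on a mid-range card): both CPUs look identical.
print(delivered_fps(cpu_cap=140, gpu_cap=60))   # 60
print(delivered_fps(cpu_cap=200, gpu_cap=60))   # 60

# Lift the GPU cap and the CPU difference finally shows.
print(delivered_fps(cpu_cap=140, gpu_cap=250))  # 140
print(delivered_fps(cpu_cap=200, gpu_cap=250))  # 200
```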
 

joeblowsmynose

I think the Ryzen 9 12-core and its surprisingly low TDP is interesting.
The 16-core Ryzen CPU is also interesting. It destroys a 7980XE in the core-hungry Cinebench R15 by about 1000 cb even though the Ryzen CPU has two fewer cores.
I do believe that benchmark leak is real ... I also believe it is a manual all-core OC result and not an out-of-the-box expectation.
 
Oh, yeah. It was using 1.572V, so it is likely a high overclock, but it was just using water cooling, not LN2. Likely with a high-end AIO you could get close to that clock speed, but I wouldn't recommend that high a voltage for long-term use.
However, we don't know what a normal voltage is for Ryzen 3000.
 
The performance isn't that far off in any real-life workload, and the 9700K (basically equal - a bit more single-threaded, a bit less multi) still pulls a lot more power under full load than the 2700X - go back and check Tom's power consumption charts again if needed.
Under full load in an app they once again don't show you the results for...
Without results you can't draw conclusions about efficiency.
 
but I don't see anyone using "spaceheater" or "furnace" or other such words to describe these Intel chips. Seems crazy-high power draw is OK if it's Intel...
To be fair, the situation then was a little different, in that Intel's processors at the time were not just more efficient, but they also offered substantially better performance per core. In the case of Intel's current top-end chips, they do draw more power, but they also deliver a bit more performance than AMD's existing offerings. And I suspect that they will at the very least keep up pretty well with the 3000 series on a per-core basis, even if they can't keep up with pricing for a given core and thread count.

I'm very annoyed they are shipping the 3600X with the Spire and the 3600 with the Stealth, though. If it's anything like the 2600X, it's going to be too much for it with stock boost.
We still don't know the exact power draw these chips will have though, and it's likely that those coolers will handle them fine enough at stock clocks due to the increase in efficiency brought on by the new process node. There will be plenty of people who will be fine with the stock cooling solution, and plenty who will want to use an aftermarket cooler anyway, so it's probably best not to make them pay extra for a larger stock cooler than is necessary. Compared to the competition, AMD's boxed coolers are clearly superior, especially for the performance parts, where Intel doesn't even bother to include one.

It would be super cool if there is significant OC headroom in these chips with aftermarket cooling. That way the enthusiasts who don't care about power efficiency can overclock their CPU up to 5GHz, and everybody else can get great power efficiency and good temps. Perhaps that's just wishful thinking though.
I kind of think that's wishful thinking. At least going by their existing offerings, the 2700X and 2600X are able to boost pretty close to their overclocking limits out of the box. If AMD could get these new chips up near 5GHz, I think they would have, at least for lightly-threaded tasks. Then again, perhaps things will be different with the 7nm node.
 

joeblowsmynose

Under full load in an app they once again don't show you the results for...
Without results you can't draw conclusions about efficiency.
Actually, I have seen enough data to confidently extrapolate that Ryzen is more efficient. I think almost every reviewer would agree.

Just consider this. The 2700X has a 105W TDP. All the reviews by all the different publications show the max this thing at stock will pull is ~105W - maybe 110W or 115W in special cases. An accurate alignment between TDP and consumption, even with stress testing.

The 9900K has a 95W TDP. Yet that processor will pull 250W under stress testing, and all the reviews show this. During a Cinebench run (which doesn't use AVX) the 9900K will pull somewhere between 140W and 180W, depending on the motherboard. Remember, that "95W" TDP is what the CPU pulls with NO BOOSTING (ask Intel about this; they announced it like two years back), so obviously 95W won't represent anywhere close to full load. That's fact, based on Intel's own admitted derivation of their TDPs.

Here's the 9900K on the POV-Ray rendering app (image below; forgot to clip the link) ... 170W (nearly double its TDP) ... the 2700X, same load: 117W, a 31% difference. The performance difference certainly isn't greater than that power consumption difference in POV-Ray (actually, looking now, it looks to be less than 20%, and that app greatly favours Intel CPUs). The 12-core Threadripper uses only 12W more than the 8-core 9900K, yet the 9900K doesn't come anywhere close to the TR's rendering ability. Give it a moment's thought ...
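Running those numbers through a quick script (the 9900K's ~20% performance edge is my eyeball read of the chart, so treat the output as approximate):

```python
# Figures from the POV-Ray chart above; the 9900K's ~20% performance edge
# is an eyeball estimate, so the result is approximate.
i9_watts, r7_watts = 170.0, 117.0
i9_perf, r7_perf = 1.20, 1.00    # assumed relative POV-Ray performance

print(f"9900K draw vs its 95W TDP: {i9_watts / 95:.2f}x")            # ~1.79x
print(f"Power gap: {(i9_watts - r7_watts) / i9_watts * 100:.0f}%")   # ~31%

# Perf per watt: even granting the 9900K its 20% lead, the 2700X wins.
advantage = (r7_perf / r7_watts) / (i9_perf / i9_watts)
print(f"2700X perf-per-watt advantage: {(advantage - 1) * 100:.0f}%")  # ~21%
```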

So actually, yes, I think we can extrapolate one sure thing ... Intel 8th and 9th gen efficiency < Ryzen efficiency. Not being willing to agree to this extrapolation is just an exercise in putting off knowing what almost everyone else already knows. Is there anything left to convince you to continue believing that Ryzen isn't more efficient than 8th and 9th gen Intel? I've presented almost every single consideration that would lead to getting this verified once and for all, and all you have is "well, we didn't see the power consumption on everything made, so maybe somewhere it'll show different" (paraphrased). I think it is clear by this point that that is not going to happen. Sure, the 9900K isn't quite a furnace without AVX (but turning AVX off cripples that performance advantage), but it's not more efficient than Ryzen even without AVX loads. I don't know how to illustrate this more clearly.

Besides, this is a debate in futility, as AMD is now on 7nm, and with a 15% IPC increase and a ~20% increase in overall performance (15% IPC + 5% clock increase - which was shown), Ryzen 3xxx looks to be about 20-40% more efficient than last-gen Ryzen, based on what we saw at CES and Computex. Intel Comet Lake (Intel 10th gen), on the other hand, will remain at 14nm until late 2020/early 2021, I believe. It's a bit of a pointless argument, comparing 14nm+ Ryzen when 7nm is available to buy in one month.
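As an aside, the 15% IPC and 5% clock gains compound multiplicatively, since performance scales as IPC times clock - which is where the ~20% comes from:

```python
# Performance ~ IPC x clock speed, so the gains multiply rather than add.
ipc_gain, clock_gain = 1.15, 1.05   # figures cited in the paragraph above
print(f"Combined uplift: {(ipc_gain * clock_gain - 1) * 100:.2f}%")  # 20.75%
```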


[Image: POV-Ray power consumption chart]
 
Who copy-pasted "PCIe 4.0 Lanes" into all the tables? The numbers should be 0 for all CPUs that are not Ryzen 3000.
Please correct the tables!

From what I understand, the older motherboards will support PCIe 4.0 on the first PCIe slot and the NVMe slot, on a board-by-board basis - but nothing beyond the first slot. Remember, the CPU is what generates the PCIe signal, as it's a direct connection to the CPU, so it's just a trace from the CPU to the socket.

The X570 boards support all ports, and that requires a lot of power past the first slot, because the signal decays quickly with distance.
 

Giroro

I wouldn't hold my breath for 16 core Ryzen, at least not in the next year or so. Sure it's possible for them to make one, but it doesn't make any sense in their product stack until, at a minimum, 3rd gen threadripper parts are on the market (which aren't announced, and definitely won't come out until after Epyc Rome).
If the 16 core Ryzen eventually comes, I would also expect it to be clocked lower than the 12 core - since basically cramming 2x 3700x equivalent TDPs into a single 130W package has got to be a lot easier than 2x 3800x TDPs into a 210W package.

AMD would MUCH rather sell you a Threadripper 2950X for ~$800 (plus their HEDT platform) than the ~$650 people would expect from a 16-core Ryzen. Not to mention they have a lot of 2920X stock they need to sell off before they come out and make that product totally worthless (because all those extra PCIe lanes and quad-channel memory at least give the 2920X -some- value over the Ryzen 3900X to some users).

Besides, AMD is going to be on 7nm for a few years. If they blow their load on a 16-core Ryzen 3000 series, what could they possibly do to get people excited next year for the Ryzen 4000 series?
 
The fact is, they have already made a 16-core AM4 Ryzen CPU. Its overclocked performance was even shown off to a YouTuber, who showed evidence of its Cinebench speeds and also showed CPU-Z and HWiNFO screenshots.
It hasn't been announced right now, though. Maybe E3 or Q4 of this year.
If AMD wasn't going to announce a 16-core for AM4, most motherboard manufacturers wouldn't have made $500 motherboards with many-layer PCBs, 12- or 16-phase VRMs, and VRM cooling capable of supporting a 300W beast.
https://forums.tomshardware.com/thr...and-beats-the-2000-18-core-i9-9980xe.3483971/
I suspect closer to $699 or $799 for a 16 core.
 

joeblowsmynose

I wouldn't hold my breath for 16 core Ryzen, at least not in the next year or so. Sure it's possible for them to make one, but it doesn't make any sense in their product stack until, at a minimum, 3rd gen threadripper parts are on the market ...
Fully disagree - I think the 16-core will be out before year end, and it might be TR that gets pushed back a bit. Here's why: the TR HEDT platform still has advantages that Ryzen can't get - quad-channel memory (very important for 16-core performance in certain workloads), 64 PCIe lanes, 32 cores, etc. Ryzen isn't in that league even if it can get 16 cores. With TSMC's 20% transistor density increase from 7nm to 7nm+ ... uh, yeah, we'll still be seeing further performance improvements next "tock" cycle - no issue there. TR will just be pushed back a bit and offer 16 to maybe 48 or 64 cores ... we'll see, but 64 is possible. So TR and Ryzen gen 3: not competing, not cannibalizing at all.

If the 16 core Ryzen eventually comes, I would also expect it to be clocked lower than the 12 core - since basically cramming 2x 3700x equivalent TDPs into a single 130W package has got to be a lot easier than 2x 3800x TDPs into a 210W package.
Might be right ... but oddly, the 12-core has higher base and boost clocks than the 8-core ... TDPs are extremely rough numbers - don't use them to pin down consumption.

AMD would MUCH rather sell you a Threadripper 2950X for ~$800 (plus their HEDT platform) than the ~$650 people would expect from a 16-core Ryzen.
Ryzen will be cheaper to make, keep in mind; and AMD is building a brand - why else would they be offering more performance for less money than the competitor? Wouldn't it make more sense to get all the glory out of Ryzen you can (considering the 20% increase in transistor density on 7nm+, the very next refresh), then, when that starts to wane, bring out "64-core gen 3 ThreadRipper!!!!" (imagine an entertainment-wrestling-style introduction)?

Not to mention they have a lot of 2920X stock they need to sell off before they come out and make that product totally worthless (because all those extra PCIe lanes and quad-channel memory at least give the 2920X -some- value over the Ryzen 3900X to some users).
They invented this thing called "discounts". Besides, the 3900X already beats that in performance if you don't need the other TR benefits. A 16-core Ryzen doesn't change that.

Besides, AMD is going to be on 7nm for a few years. If they blow their load on a 16-core Ryzen 3000 series, what could they possibly do to get people excited next year for the Ryzen 4000 series?
Did I mention the 20% transistor density increase for 7nm+ next year? Add in some more clock speed refinements ... and I think we just keep this ball rolling ...


But of course we'll have to just wait and see. :)
 
Here's the 9900K on the POV-Ray rendering app (image below; forgot to clip the link) ... 170W (nearly double its TDP) ... the 2700X, same load: 117W, a 31% difference.
That's very different, though, from the 100% difference that got all of this started...
And yes, it all depends on what and how you measure; while the 9900K might draw more watts on its own, the whole-system draw, at least for Blender and Handbrake, is lower.
You do have your 2700X inside a system, right? Does it work all on its own without anything else?
https://www.techspot.com/review/1744-core-i9-9900k-round-two/
[Image: whole-system power consumption chart]
 

joeblowsmynose

Distinguished
That's very different, though, from the 100% difference that got all of this started...
And yes, it all depends on what and how you measure; while the 9900K might draw more watts on its own, the whole-system draw, at least for Blender and Handbrake, is lower.
You do have your 2700X inside a system, right? Does it work all on its own without anything else?
https://www.techspot.com/review/1744-core-i9-9900k-round-two/
[Image: whole-system power consumption chart]

Come on Terry, you once again posted system power consumption; those numbers aren't reliable as a CPU package draw test. And the 100% that got this started was a real, demonstrable example of the topic: Ryzen is overall more efficient than Intel 8th and 9th gen.

Look, I'm done debating this: Ryzen 3xxx is even far more efficient than Ryzen 2xxx, which is the only comparison that counts. The 2700X is last-gen technology from over a year ago. If there was any doubt in your mind about Ryzen 2xxx power efficiency (not sure why there would be), there clearly won't be any left after we see the R3xxx reviews in one month.
 

Giroro

Ryzen will be cheaper to make, keep in mind; and AMD is building a brand - why else would they be offering more performance for less money than the competitor? Wouldn't it make more sense to get all the glory out of Ryzen you can (considering the 20% increase in transistor density on 7nm+, the very next refresh), then, when that starts to wane, bring out "64-core gen 3 ThreadRipper!!!!" (imagine an entertainment-wrestling-style introduction)?

The thing is, I also don't expect to see a 64-core ThreadRipper 3 (I do expect to see 48 cores, though, in an 8x6c configuration)... Yes, they could do it, but I don't think they will, for similar (OK, actually pretty different) reasons that I don't expect to see the 16-core Ryzen. A 64-core Threadripper would cannibalize (insignificant numbers of home-user) sales of Epyc Rome - but server chips are where the real money is at. Somebody setting up a single dual-socket server with 2x 16-core Epyc probably has reasons they aren't looking at a single 32-core Threadripper. But I don't know, maybe that math changes when they can double their processing power for significantly less money... But I'm speaking pretty far out of my depth about servers...
I do think that in this case, a little (a lot?) more important is that those good 8-core chiplets will always be prioritized for the highest-profit-margin parts. The yield on perfect 7nm chiplets probably isn't all that great right now.
It's hard for a guy like me to figure out the exact pricing of an Epyc server, but I think the current gen charges somewhere in the range of 3x-4x per core compared to Threadripper. At least, that's exactly what it looks like Intel does.
Granted, AMD badly needs market penetration, so some Epyc configurations might ultimately be loss-leaders or low-margin - because it is not going to be easy to convince data centers to give them a shot. Maybe the good press involved with having the highest core counts everywhere is worth it just because it helps them gain market share (yay, competition).

But heck, for all I know Threadripper 3 is going to get its own chiplets instead of basing it on Epyc, and then who knows what they do at that point. Maybe Threadripper 3 is where they shrink the I/O chiplet down to 7nm. It's all speculation, haha, but if I were the one in charge of this stuff, I'd get the tech in place but wait to release it until it's really needed.
 

joeblowsmynose

The thing is, I also don't expect to see a 64-core ThreadRipper 3 (I do expect to see 48 cores, though, in an 8x6c configuration)... Yes, they could do it, but I don't think they will, for similar (OK, actually pretty different) reasons that I don't expect to see the 16-core Ryzen. A 64-core Threadripper would cannibalize (insignificant numbers of home-user) sales of Epyc Rome - but server chips are where the real money is at. Somebody setting up a single dual-socket server with 2x 16-core Epyc probably has reasons they aren't looking at a single 32-core Threadripper. But I don't know, maybe that math changes when they can double their processing power for significantly less money... But I'm speaking pretty far out of my depth about servers...

I'd say if AMD thought that was a real issue they never would have had a 32-core Threadripper. The TR platform isn't a server platform, so the crossover is limited to extreme workstation applications (Epyc/Xeon, Intel HEDT, or TR?); for the real server applications that Epyc is used for, TR is not an option.


I do think that in this case, a little (a lot?) more important is that those good 8-core chiplets will always be prioritized for the highest-profit-margin parts. The yield on perfect 7nm chiplets probably isn't all that great right now.

Rumours have been indicating that AMD's 7nm is getting better yields than Intel is with the super-mature Xeon design, so I'm not convinced this is any issue at all. But I agree with what you are saying, and that is why I think TR will be pushed back - Epyc should be the priority over TR, so IF there are any resource constraints between those two, TR gets pushed, not Epyc. And if TR gets pushed into 2020, why not give a 16-core Ryzen its glory now, so the hype isn't competing with TR hype? Functionality AND demographic here also have limited crossover.


It's hard for a guy like me to figure out the exact pricing of an Epyc server, but I think the current gen charges somewhere in the range of 3x-4x per core compared to Threadripper. At least, that's exactly what it looks like Intel does.
Granted, AMD badly needs market penetration, so some Epyc configurations might ultimately be loss-leaders or low-margin - because it is not going to be easy to convince data centers to give them a shot. Maybe the good press involved with having the highest core counts everywhere is worth it just because it helps them gain market share (yay, competition).

Considering the rumoured performance of new Epyc, I don't think loss-leaders are needed for quick uptake. Besides, the margins on Epyc are greater than on Xeon due to the modular design, so they can continue to offer lower prices and still make a profit.


But heck, for all I know Threadripper 3 is going to get its own chiplets instead of basing it on Epyc, and then who knows what they do at that point. Maybe Threadripper 3 is where they shrink the I/O chiplet down to 7nm. It's all speculation, haha, but if I were the one in charge of this stuff, I'd get the tech in place but wait to release it until it's really needed.

If the I/O die works at 14nm+, then there's no need to shrink it down; besides, AMD already said the reason they kept the I/O at 14nm was that it's almost impossible to make the interconnects at 7nm at this time. I suspect, though, that when AMD moves to 6nm (7nm+) or 5nm, 7nm will be mature enough that they might be able to get the I/O die to 7nm. So I don't think TR will be distinct from Epyc in more ways than first- and second-gen TR was different from Epyc.


We'll have to wait and see though, that's just my two cents.
 
All in all, with the recent info that AMD will be opening up more details regarding the Ryzen 3000 series, its OC capabilities, and what performance to expect at the E3 event, things will get real interesting once that info is out.

AMD going all out and Intel in a tight spot - that is what consumers needed. Hope AMD, or even Intel, comes up with something decent in the GPU sector to give Nvidia something to think about.
 

spencer.cleaves2

I see that, but IMO it should be on a per-application basis. I get that it's cheaper, but how many people REALLY need 12 cores?
To get a 3900X and not use most of, or at least 70% of, its available resources is still wasting money, even if it's cheaper.
That's how I see it, anyway.
This doesn't even make sense; you compare hardware based on price/performance, not just performance. I think someone is a little biased, because if you have an appreciation for computer hardware, then you would appreciate what AMD just did.
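To make "price/performance, not just performance" concrete, here's a minimal sketch - the chips, prices, and scores are hypothetical, purely to show the metric:

```python
# Hypothetical parts, purely to illustrate the metric; swap in real prices
# and benchmark scores of your choice.
chips = {
    "higher-score chip": {"price_usd": 499, "score": 3300},
    "cheaper chip":      {"price_usd": 329, "score": 3000},
}

for name, c in chips.items():
    print(f"{name}: {c['score'] / c['price_usd']:.2f} points per dollar")
# The cheaper chip wins on value despite the lower raw score.
```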