News Ryzen 5 7600X Beats i9-12900K by 22% in New Single-Core Benchmarks

Why bother? Exaggeration is a billion times more effective than understatement.
Because they don’t want a repeat of Zen 2, where they advertised “up to X.X GHz clock speed” but many CPUs never reached that speed. Plus, no one will buy a CPU based on manufacturer hype. AMD knows they will release first and that reviewer sites will publish their reviews.
 
Because they don’t want a repeat of Zen 2, where they advertised “up to X.X GHz clock speed” but many CPUs never reached that speed. Plus, no one will buy a CPU based on manufacturer hype. AMD knows they will release first and that reviewer sites will publish their reviews.
Zen 2 over Zen and even Zen+ was a huge gain; go have another look.
 

ottonis

Reputable
Jun 10, 2020
224
193
4,760
Why bother? Exaggeration is a billion times more effective than understatement.

It has been speculated that AMD's sandbagging is less for marketing purposes and more to obfuscate price positioning in relation to Intel's Raptor Lake counterparts. For example: if Intel believes that its top-of-the-line 13900 SKU will perform much better than AMD's 7950X, it will introduce it at a much higher price.
If AMD then surprises everybody with much higher performance than initially expected, Intel will be forced to drop prices and recalculate its margins, which in turn would disappoint shareholders, affect stock prices, etc.
 
It has been speculated that AMD's sandbagging is less for marketing purposes and more to obfuscate price positioning in relation to Intel's Raptor Lake counterparts. For example: if Intel believes that its top-of-the-line 13900 SKU will perform much better than AMD's 7950X, it will introduce it at a much higher price.
If AMD then surprises everybody with much higher performance than initially expected, Intel will be forced to drop prices and recalculate its margins, which in turn would disappoint shareholders, affect stock prices, etc.
They are holding each other's hand walking into this, sharing the same licence.
 
  • Like
Reactions: ottonis

ottonis

Reputable
Jun 10, 2020
224
193
4,760
sharing the same licence
Technologically, yes, they have cross-licencing agreements, no question!
But what they don't have is each other's prices. There is a famous saying: there is no bad product, just bad prices. In the Bulldozer era, AMD survived only by heavily undercutting Intel's CPU prices.
Now, AMD may be leaving Intel in the dark with regard to the performance metrics of its upcoming CPU generation, which makes pricing much harder for Intel to predict and plan.
 
ZEN2 over zen and even zen + was a huge gain, go have another look.
He is talking about something completely different.
Only a small percentage of CPUs were reaching the advertised clocks: in der8auer's survey, 17 out of 180 samples, so 9.4%, were reaching the clocks people were paying for.
It doesn't matter that they were faster than previous models; that's not the issue.
View: https://youtu.be/DgSoZAdk_E8?t=635
 

peachpuff

Reputable
BANNED
Apr 6, 2021
690
733
5,760
So you are telling me that AMD should take an income hit of $150 from the $450 the 8-core costs now, and a $100 hit on the 6-core.
Also, the high-end users will be really happy about the lower end getting more cores but not the high end.
That income hit will disappear with more chips sold in the end.
 
That would be fine if AMD had an endless supply of CPUs (wafers, etc.), but they can only buy so much to make all of their products. Using more cores per product would result in fewer CPUs for AMD to sell.
Not to say that is wrong (I, in fact, agree with your statement), but there's a bit of nuance there in favour of AMD: the Zen CCDs are very small. Small enough to have stupid good yields and, from what I've read, most of their "losses" happen at the packaging stage (I/O + cores). I don't know how much cost it adds or the % of losses there, but given how well they priced Zen2 (first iteration), I have the feeling with Zen3 they just priced them to the moon knowing they had a winner. If they think Zen4 is a winner, I am expecting them to price them accordingly and we all will be disgusted as always, haha.

Thing is, I'm sure Raptor Lake won't be a flop. They'll definitely be comparable, and overall platform cost and preference will play a bigger role than pure CPU performance.

There's also another good point: AM5 being DDR5-only will definitely put pressure on AMD to lower CPU cost to get a bit of platform parity with Intel and their DDR4 alternatives at a similar performance point.

Regards.
 

shady28

Distinguished
Jan 29, 2007
447
322
19,090
Not to say that is wrong (I, in fact, agree with your statement), but there's a bit of nuance there in favour of AMD: the Zen CCDs are very small. Small enough to have stupid good yields and, from what I've read, most of their "losses" happen at the packaging stage (I/O + cores). I don't know how much cost it adds or the % of losses there, but given how well they priced Zen2 (first iteration), I have the feeling with Zen3 they just priced them to the moon knowing they had a winner. If they think Zen4 is a winner, I am expecting them to price them accordingly and we all will be disgusted as always, haha.

Thing is, I'm sure Raptor Lake won't be a flop. They'll definitely be comparable, and overall platform cost and preference will play a bigger role than pure CPU performance.

There's also another good point: AM5 being DDR5-only will definitely put pressure on AMD to lower CPU cost to get a bit of platform parity with Intel and their DDR4 alternatives at a similar performance point.

Regards.

The thing is, tiled designs are a solution to a manufacturing yield issue. They can have more defects per wafer with tiles, but also get a higher percentage of useful chips per wafer despite those defects, by using the smaller tiles. This helps them increase margin by decreasing loss (useless chips). There's also a practical size limit to a monolithic die: since far fewer of them fit on a wafer, you have to have far fewer defects to get a good yield.

But tiles are not inherently superior in performance; in fact, they're inherently inferior to a monolithic die, which does not have to use special and separate interconnect circuitry between tiles (CCDs). If you look closely at Zen 2 and Zen 3 cores, they are almost the same as far as the core goes. The cache in Zen 3 was doubled, and the TLB was doubled to make use of the cache, and that pretty much is Zen 3. X3D was another similar exercise.

We'll see if Zen 4 really breaks from this pattern, but I kind of doubt it. It looks to me like they are looking to frequency ramps, DDR5, and AVX-512 for benchmark wins. There's no problem with that, but the core is likely still basically a Zen 2 core. I think they will need to move beyond Zen-based cores after this or they'll be doing the Skylake++++ dance.
 
The thing is, tiled designs are a solution to a manufacturing yield issue. They can have more defects per wafer with tiles, but also get a higher percentage of useful chips per wafer despite those defects, by using the smaller tiles. This helps them increase margin by decreasing loss (useless chips). There's also a practical size limit to a monolithic die: since far fewer of them fit on a wafer, you have to have far fewer defects to get a good yield.

But tiles are not inherently superior in performance; in fact, they're inherently inferior to a monolithic die, which does not have to use special and separate interconnect circuitry between tiles (CCDs). If you look closely at Zen 2 and Zen 3 cores, they are almost the same as far as the core goes. The cache in Zen 3 was doubled, and the TLB was doubled to make use of the cache, and that pretty much is Zen 3. X3D was another similar exercise.

We'll see if Zen 4 really breaks from this pattern, but I kind of doubt it. It looks to me like they are looking to frequency ramps, DDR5, and AVX-512 for benchmark wins. There's no problem with that, but the core is likely still basically a Zen 2 core. I think they will need to move beyond Zen-based cores after this or they'll be doing the Skylake++++ dance.
Hm... Well, yeah, it's "a" solution, but not the only one. Making your die smaller is a perfectly viable solution as well, but AMD doesn't want to make monolithic dies anymore for mainstream desktop parts (keep in mind they could), because the only relevant element they seem to care about (in laptops/mobile) is power efficiency. Desktop and up, chiplets are their solution to scalability, not manufacturing or even efficiency, I'd say; and I'm ignoring APUs in the desktop, because those are laptop dies. The chiplet way is a long-term cost-saving thing they just did before Intel, that's all. Intel is already going that route, but with a different take (EMIB) on packaging. To phrase this differently: the only real disadvantage of tiles over monolithic is the same performance at higher power consumption due to latency and more interconnects, which is what AMD doesn't want in laptops. As I've mentioned before, the power penalty for the interconnect in Zen2-style chiplets and above is not minor. I'm actually looking forward to seeing how Intel manages this when they release Sapphire Rapids, and how their solution stacks up against AMD's IF (which is really good, all things considered).

So, yes and no. I don't think (given what history tells me) AMD went chiplets due to manufacturing woes, but scalability first and foremost. I also believe they mentioned that explicitly at some point? Maybe? And yes, more cache helps hide some latency, but only up to a certain point. They do need to power the IF so the latency stays manageable. That's also why increasing the IF clock has real, tangible benefits on all AMD chiplet-based CPUs.

Regards.
 
Not to say that is wrong (I, in fact, agree with your statement), but there's a bit of nuance there in favour of AMD: the Zen CCDs are very small. Small enough to have stupid good yields and, from what I've read, most of their "losses" happen at the packaging stage (I/O + cores). I don't know how much cost it adds or the % of losses there, but given how well they priced Zen2 (first iteration), I have the feeling with Zen3 they just priced them to the moon knowing they had a winner. If they think Zen4 is a winner, I am expecting them to price them accordingly and we all will be disgusted as always, haha.

Thing is, I'm sure Raptor Lake won't be a flop. They'll definitely be comparable, and overall platform cost and preference will play a bigger role than pure CPU performance.

There's also another good point: AM5 being DDR5-only will definitely put pressure on AMD to lower CPU cost to get a bit of platform parity with Intel and their DDR4 alternatives at a similar performance point.

Regards.
Unless TSMC managed to break physics somehow, 8 cores will still be ~30% more die area than 6 cores. It doesn't matter how cheap they are (and I doubt they are, since TSMC is going to raise prices for the second time in two years); it's not only a price issue, it's also an availability/manufacturing-capacity problem.
This ~30% larger area means a whole lot fewer CPUs at a much higher price, and especially for the 6-core tier, going ~30% up in price just from this would be brutal, and there would be no cheaper 6-core alternative from AMD.
And yeah, the I/O die is getting iGPU cores now as well; that's not going to be free either.
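To put rough numbers on the area argument, here's a back-of-the-envelope sketch using the standard dies-per-wafer estimate. The die areas are hypothetical placeholders, not real Zen 4 figures:

```python
import math

def dies_per_wafer(die_area_mm2: float, wafer_diameter_mm: float = 300.0) -> int:
    """First-order estimate: wafer area over die area, minus an edge-loss term."""
    r = wafer_diameter_mm / 2
    return int(math.pi * r ** 2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

six_core_mm2 = 55.0                      # hypothetical 6-core die area
eight_core_mm2 = six_core_mm2 * 8 / 6    # ~33% more area for 2 extra cores

print(dies_per_wafer(six_core_mm2))      # ~1195 candidate dies per 300 mm wafer
print(dies_per_wafer(eight_core_mm2))    # ~886, i.e. roughly a quarter fewer
```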
 
Unless TSMC managed to break physics somehow, 8 cores will still be ~30% more die area than 6 cores. It doesn't matter how cheap they are (and I doubt they are, since TSMC is going to raise prices for the second time in two years); it's not only a price issue, it's also an availability/manufacturing-capacity problem.
This ~30% larger area means a whole lot fewer CPUs at a much higher price, and especially for the 6-core tier, going ~30% up in price just from this would be brutal, and there would be no cheaper 6-core alternative from AMD.
And yeah, the I/O die is getting iGPU cores now as well; that's not going to be free either.
Well, there are two angles to what you're saying:
1- Defects.
2- Die size.

For #1, if they increase the effective die size and start getting more defects, they won't get any fewer CPUs per se; they will just have to harvest them differently. Do keep in mind that AMD throughout the past has been disabling perfectly working dies to fill gaps in their lineup when yields have been good (too good, ironically), so in this particular case, your point doesn't necessarily apply.

As for #2, you're right. A bigger die means fewer dies per wafer, and there's no way around it. This being said, it then becomes a margins game, and here's where I think you're spot on: AMD doesn't want to give up margins too much, if at all possible. They're not producing much more because TSMC (I believe) can't give them more capacity, or AMD is just being careful about securing too much capacity and is taking a very conservative approach, relying more on margins than pure volume of sales. As you can already imagine: they can produce fewer CPUs but sell them at a higher markup, or produce more and lower their markup. And this is an important thing to consider as well: it's not just wafers anymore, but also packaging. Which, again, I agree with you on: a bigger die that fails at packaging hurts more than a smaller one; as I understand it, that die can't be harvested anymore, so it is a full write-off. Maybe this is also why Intel has been delaying the move to chiplets/tiles for so long?
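Just to illustrate the write-off point with a toy calculation (the die cost, packaging cost, and packaging yield below are all made-up numbers):

```python
def cost_per_good_cpu(die_cost: float, dies_per_package: int,
                      packaging_cost: float, packaging_yield: float) -> float:
    """A packaging failure scraps every die already committed to the package,
    so the whole bill gets divided by the packaging yield."""
    return (die_cost * dies_per_package + packaging_cost) / packaging_yield

# Hypothetical: 2 CCDs + 1 I/O die at $20 each, $15 packaging, 95% packaging yield.
print(cost_per_good_cpu(20.0, 3, 15.0, 0.95))  # ~78.9 vs 75.0 with perfect packaging
```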

Regards.
 

shady28

Distinguished
Jan 29, 2007
447
322
19,090
Hm... Well, yeah, it's "a" solution, but not the only one. Making your die smaller is a perfectly viable solution as well, but AMD doesn't want to make monolithic dies anymore for mainstream desktop parts (keep in mind they could), because the only relevant element they seem to care about (in laptops/mobile) is power efficiency. Desktop and up, chiplets are their solution to scalability, not manufacturing or even efficiency, I'd say; and I'm ignoring APUs in the desktop, because those are laptop dies. The chiplet way is a long-term cost-saving thing they just did before Intel, that's all. Intel is already going that route, but with a different take (EMIB) on packaging. To phrase this differently: the only real disadvantage of tiles over monolithic is the same performance at higher power consumption due to latency and more interconnects, which is what AMD doesn't want in laptops. As I've mentioned before, the power penalty for the interconnect in Zen2-style chiplets and above is not minor. I'm actually looking forward to seeing how Intel manages this when they release Sapphire Rapids, and how their solution stacks up against AMD's IF (which is really good, all things considered).

So, yes and no. I don't think (given what history tells me) AMD went chiplets due to manufacturing woes, but scalability first and foremost. I also believe they mentioned that explicitly at some point? Maybe? And yes, more cache helps hide some latency, but only up to a certain point. They do need to power the IF so the latency stays manageable. That's also why increasing the IF clock has real, tangible benefits on all AMD chiplet-based CPUs.

Regards.


No, AMD did the tiles specifically to improve yield, which lowers their manufacturing cost. There have been plenty of studies on this, as well as conversations with Su herself on the topic. Keep in mind that AMD bears a bigger brunt of manufacturing losses than Intel, because AMD is paying TSMC. They needed an equalizer, and chiplets were it.

Think of a wafer as a dart board. Any process will have some average number of defects per wafer; let's say there are 50 defects, so 50 dart throws. Divide the board up into 1,500 chiplets - since each one is small, more fit on a wafer - and throw 50 darts. At most, there will be 50 lost chiplets and 1,450 good chiplets: 96.7% of your chiplets are good on average, assuming all darts hit different chiplets.

Now divide that board up into 500 monolithic dies and throw 50 darts. At most there will be 50 dead dies, so now you have 90% yield. Your loss rate, though, is a solid 3x what it was with the chiplets.
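If you'd rather not assume every dart hits a different die, here's a quick Monte Carlo sketch of the same dart-board model (same made-up numbers: 1,500 chiplets vs 500 monolithic dies, 50 defects per wafer):

```python
import random

def simulated_yield(dies: int, defects: int, trials: int = 10_000) -> float:
    """Scatter `defects` uniformly over `dies`; any die hit at least once is scrap.
    Returns the average fraction of good dies across all trials."""
    good = 0
    for _ in range(trials):
        hit = {random.randrange(dies) for _ in range(defects)}  # dies that caught a dart
        good += dies - len(hit)
    return good / (dies * trials)

print(f"chiplets (1500/wafer):  {simulated_yield(1500, 50):.1%}")  # ~96.7%
print(f"monolithic (500/wafer): {simulated_yield(500, 50):.1%}")   # ~90.5%
# The monolithic case lands slightly above the worst-case 90% because
# some darts occasionally hit a die that is already dead.
```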

The other advantage to chiplets is scalability. Once you have that chiplet, you can configure it multiple ways and turn it into a very large complex.

So I'm not detracting from this; AMD looked at the problem and came up with a great solution, but it's primarily aimed at manufacturing efficiency and scalability.
 

PiranhaTech

Reputable
Mar 20, 2021
136
86
4,660
I bought the 5900X. I thought, "man, Intel came out with the new CPUs and the performance difference...." until I saw Intel's TDP. I'm okay with the 105W default and 142W boost of the 5900X.

I didn't pay much attention to TDP in the past, but if AMD is slower with a lower TDP, I'd consider it trading blows feature-wise. It doesn't look like that will be the case, though; it looks like both Intel and AMD are going crazy with power use.

The 7600X looks like it'll use 65W pre-boost and 125W at boost. That's a rumor, though; I hope it's true.
 
No, AMD did the tiles specifically to improve yield, which lowers their manufacturing cost. There have been plenty of studies on this, as well as conversations with Su herself on the topic. Keep in mind that AMD bears a bigger brunt of manufacturing losses than Intel, because AMD is paying TSMC. They needed an equalizer, and chiplets were it.

Think of a wafer as a dart board. Any process will have some average number of defects per wafer; let's say there are 50 defects, so 50 dart throws. Divide the board up into 1,500 chiplets - since each one is small, more fit on a wafer - and throw 50 darts. At most, there will be 50 lost chiplets and 1,450 good chiplets: 96.7% of your chiplets are good on average, assuming all darts hit different chiplets.

Now divide that board up into 500 monolithic dies and throw 50 darts. At most there will be 50 dead dies, so now you have 90% yield. Your loss rate, though, is a solid 3x what it was with the chiplets.

The other advantage to chiplets is scalability. Once you have that chiplet, you can configure it multiple ways and turn it into a very large complex.

So I'm not detracting from this; AMD looked at the problem and came up with a great solution, but it's primarily aimed at manufacturing efficiency and scalability.
But, again, wafer yields when their dies are so small become a secondary problem when the new issue is packaging and slapping more cache onto the dies. As I said, and I think I'm remembering this one correctly: when the packaging* fails, the dies are a full write-off, so no harvesting is possible there. Meaning the "yield" needs to be adjusted to the "after-packaging" stage in order to see how good the approach is. This being said, and given how much they've sold, I think the overall yields are not bad. Also, I think TSMC still charges even for bad wafers/dies.

And on the efficiency front: I don't think packaging is more efficient, but it does introduce other types of efficiencies which are not power-related (being generous with the term).

All in all, I don't disagree with your statements, but I don't think you're painting the big picture accurately even if you get the specifics right, if that makes sense?

Regards.
 
  • Like
Reactions: shady28

shady28

Distinguished
Jan 29, 2007
447
322
19,090
But, again, wafer yields when their dies are so small become a secondary problem when the new issue is packaging and slapping more cache onto the dies. As I said, and I think I'm remembering this one correctly: when the packaging* fails, the dies are a full write-off, so no harvesting is possible there. Meaning the "yield" needs to be adjusted to the "after-packaging" stage in order to see how good the approach is. This being said, and given how much they've sold, I think the overall yields are not bad. Also, I think TSMC still charges even for bad wafers/dies.

And on the efficiency front: I don't think packaging is more efficient, but it does introduce other types of efficiencies which are not power-related (being generous with the term).

All in all, I don't disagree with your statements, but I don't think you're painting the big picture accurately even if you get the specifics right, if that makes sense?

Regards.


No question the packaging has to be considered, but talking about that adds a really complex question.

Something that may not be well known: the cost per wafer went way, way up past 7nm. There were some articles on this a couple of years ago, but basically they said that 7nm was the last point where the higher density gave a cost advantage. At 5nm and below, the chips actually cost more to produce, even in volume, because the cost of the wafer went up faster than the density increased on the new nodes.

So that is going to factor into getting a bunch of small chiplets at 7nm / 5nm and below, while using a 12nm I/O tile, which is much cheaper.
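As a toy example of that crossover (the wafer prices and density gain below are placeholder assumptions for illustration, not TSMC's actual pricing):

```python
WAFER_AREA_MM2 = 70_685.0  # 300 mm wafer, ignoring edge loss

# Placeholder figures -- assumptions, not a real price list.
n7_wafer_cost = 9_300.0
n5_wafer_cost = 17_000.0
n5_density_gain = 1.8      # assume N5 packs ~1.8x the transistors of N7 per mm^2

n7_cost_per_transistor = n7_wafer_cost / WAFER_AREA_MM2
n5_cost_per_transistor = (n5_wafer_cost / WAFER_AREA_MM2) / n5_density_gain

# ~1.02: the shrink no longer makes each transistor cheaper,
# even before yield and packaging enter the picture.
print(n5_cost_per_transistor / n7_cost_per_transistor)
```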

AnandTech has an article directly on this subject. There's another site I found that did a cost analysis and used calculus... So anyway, not something I'm going to delve into in detail, but this was a factor.

I pulled two blurbs of interest from the article linked below. I should probably note that this article was from Jan 2022, and the 5600X no longer sells for $299 - it's $179 at my local Microcenter. I'm going to make a WAG that this is probably no longer a profitable part for AMD. Certainly, current market prices for a 5600X are basically liquidation levels.

"In AMD’s consumer-focused product stack, the only products it ships with chiplets are the high-performance Ryzen 3000 and Ryzen 5000 series processors.
...
Everything else consumer focused is a single piece of silicon, not chiplets. Everything in AMD’s mobile portfolio relies on single pieces of silicon, and they are also migrated into desktop form factors in AMD’s desktop APU strategy. We’re seeing a clear delineation between where chiplets make financial sense, and where they do not. From AMD’s latest generation of processors, the Ryzen 5 5600X is still a $299 cost at retailers."


"Ultimately there has to be a tipping point where simply building a monolithic silicon product becomes better for total cost than trying to ship chiplets around and spend lots of money on new packaging techniques. I asked the question to Dr. Lisa Su, acknowledging that AMD doesn’t sell its latest generation below $300, as to whether $300 is the realistic tipping point from the chiplet to the non-chiplet market.

Dr. Su explained how in their product design stages, AMD’s architects look at every possible way of putting chips together."


https://www.anandtech.com/show/1720...-on-the-optimization-says-amds-ceo-dr-lisa-su
 
No question the packaging has to be considered, but talking about that adds a really complex question.

Something that may not be well known: the cost per wafer went way, way up past 7nm. There were some articles on this a couple of years ago, but basically they said that 7nm was the last point where the higher density gave a cost advantage. At 5nm and below, the chips actually cost more to produce, even in volume, because the cost of the wafer went up faster than the density increased on the new nodes.

So that is going to factor into getting a bunch of small chiplets at 7nm / 5nm and below, while using a 12nm I/O tile, which is much cheaper.

AnandTech has an article directly on this subject. There's another site I found that did a cost analysis and used calculus... So anyway, not something I'm going to delve into in detail, but this was a factor.

I pulled two blurbs of interest from the article linked below. I should probably note that this article was from Jan 2022, and the 5600X no longer sells for $299 - it's $179 at my local Microcenter. I'm going to make a WAG that this is probably no longer a profitable part for AMD. Certainly, current market prices for a 5600X are basically liquidation levels.

"In AMD’s consumer-focused product stack, the only products it ships with chiplets are the high-performance Ryzen 3000 and Ryzen 5000 series processors.
...
Everything else consumer focused is a single piece of silicon, not chiplets. Everything in AMD’s mobile portfolio relies on single pieces of silicon, and they are also migrated into desktop form factors in AMD’s desktop APU strategy. We’re seeing a clear delineation between where chiplets make financial sense, and where they do not. From AMD’s latest generation of processors, the Ryzen 5 5600X is still a $299 cost at retailers."


"Ultimately there has to be a tipping point where simply building a monolithic silicon product becomes better for total cost than trying to ship chiplets around and spend lots of money on new packaging techniques. I asked the question to Dr. Lisa Su, acknowledging that AMD doesn’t sell its latest generation below $300, as to whether $300 is the realistic tipping point from the chiplet to the non-chiplet market.

Dr. Su explained how in their product design stages, AMD’s architects look at every possible way of putting chips together."


https://www.anandtech.com/show/1720...-on-the-optimization-says-amds-ceo-dr-lisa-su
Well, I can't give AMD the benefit of the doubt there, only because Zen2 (Ry3K chiplets) used the same manufacturing nodes and packaging as Zen3 (Ry5K), and they launched at lower prices in their respective segments, even selling the 3600 as low as $120 (IIRC) for a long time. So no, I have a hard time believing the overall cost is way higher than monolithic dies. At best, it should even out somewhere (maybe 2 chiplets vs a massive die?) on cost, but it will never beat the monolithic in the low-power department, ever. This is to say: I don't think AMD skipped the chiplet approach for the mobile APUs because of cost; it was power first and foremost, and then everything else is just a mix of scalability, yields, and cost (using the same exact Zen die on everything is economical! xD). Really hard to argue one way or another without hard data on this, but I do remember what you're talking about: TSMC mentioning that past 7nm the cost will increase by a lot per shrink. The question then becomes: what is AMD's margin vs the real cost of manufacturing + packaging?

I'll stop here, as I believe it's spiraling into an assumptions game, even though I have fun talking about it.

Regards.
 
Talking to insiders, that 8-10% IPC improvement is intentionally an understatement by AMD. They would rather over-deliver than under-deliver. Plus remember this 8-10% IPC improvement number is just an average, meaning some tasks will benefit from architectural changes more than others.
Exactly.
Hopefully, we're looking at an average gaming uplift in the neighborhood of 15%, single-core. More as the number of cores being utilized goes up.
 

shady28

Distinguished
Jan 29, 2007
447
322
19,090
Well, I can't give AMD the benefit of the doubt there, only because Zen2 (Ry3K chiplets) used the same manufacturing nodes and packaging as Zen3 (Ry5K), and they launched at lower prices in their respective segments, even selling the 3600 as low as $120 (IIRC) for a long time. So no, I have a hard time believing the overall cost is way higher than monolithic dies. At best, it should even out somewhere (maybe 2 chiplets vs a massive die?) on cost, but it will never beat the monolithic in the low-power department, ever. This is to say: I don't think AMD skipped the chiplet approach for the mobile APUs because of cost; it was power first and foremost, and then everything else is just a mix of scalability, yields, and cost (using the same exact Zen die on everything is economical! xD). Really hard to argue one way or another without hard data on this, but I do remember what you're talking about: TSMC mentioning that past 7nm the cost will increase by a lot per shrink. The question then becomes: what is AMD's margin vs the real cost of manufacturing + packaging?

I'll stop here, as I believe it's spiraling into an assumptions game, even though I have fun talking about it.

Regards.


Think you're missing something. The chiplets provide very good yields for higher core counts: the more cores, the more chiplets needed, the more cost-effective.

At some point, as the number of chiplets / core count needed for a design goes down, the cost of packaging takes over vs the yield benefit, and they are no longer cost-effective.

Like, a single-chiplet CPU would not be cost-effective - the packaging would be too big a cost vs the number of cores - hence we didn't see much of the 3300X.
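A toy model of that amortization, with entirely hypothetical costs, shows why the fixed packaging + I/O overhead punishes single-chiplet parts:

```python
def cost_per_core(chiplets: int, cores_per_chiplet: int = 8,
                  chiplet_cost: float = 20.0, io_die_cost: float = 10.0,
                  packaging_cost: float = 15.0) -> float:
    """Fixed I/O-die and packaging costs amortized over the cores that ship."""
    total = chiplets * chiplet_cost + io_die_cost + packaging_cost
    return total / (chiplets * cores_per_chiplet)

print(cost_per_core(1))  # ~5.6 per core: fixed costs dominate (the 3300X case)
print(cost_per_core(2))  # ~4.1 per core
print(cost_per_core(8))  # ~2.9 per core: packaging amortized (server territory)
```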
 
Think you're missing something. The chiplets provide very good yields for higher core counts: the more cores, the more chiplets needed, the more cost-effective.

At some point, as the number of chiplets / core count needed for a design goes down, the cost of packaging takes over vs the yield benefit, and they are no longer cost-effective.

Like, a single-chiplet CPU would not be cost-effective - the packaging would be too big a cost vs the number of cores - hence we didn't see much of the 3300X.
I'm not missing anything, to be honest. As Terry mentioned, if AMD wanted to fit more than 16 cores into the same package that fits AM5, then they'd need to slim down the cores (Zen4-dense) or increase the size of the CCD (effectively lowering the number of dies per wafer). Thinning out the cores won't be without a performance sacrifice, and making the CCD fatter won't be free either; sacrifices will be made either way. As it stands, AMD found that 8-core CCDs are the perfect balance for what they need, and this restriction means that if they want to go over 2 CCDs per package, they need to get creative, much like Intel did with big.LITTLE. I'd love to see 10- or even 12-core CCDs, but that means a redesign of the IF (interconnects), the package size, and even the arrangement on it. I'd even be willing to say they'd need to modify a lot of tooling that just works now.

I guess another way of saying that is: the package size is limited. For AM5, AMD can only do 1 or 2 CCDs. For server they can do up to... 16, I think? So that's how they can scale, but it also restricts them. Zen4-dense will try to do a bit of big.LITTLE without sacrificing too much, and we'll have to wait and see if AMD releases that to consumers.

And yes, a single CCD, unless they have a very good contract for defects and such, I don't think is very cost-effective for them. But keep in mind, like I said, we do not know what their margins and costs look like. Given how well they managed to price Ry3K, I'm sure they can get away with low-ish prices, or at least comparable to Intel vis-à-vis the performance tiers. Assuming the overall platform cost is also comparable. We'll see how AMD positions Ry7K vs Alder/Raptor Lake in terms of platform cost, but I do believe AMD can still go lower than Intel would like. Raptor Lake's die size is humongous, so each defect must hurt a lot (lost profit, kind of?), and they can still price the 12600K competitively. Well, at least reasonably xD

Regards.
 

escksu

Reputable
BANNED
Aug 8, 2019
877
353
5,260
Talking to insiders, that 8-10% IPC improvement is intentionally an understatement by AMD. They would rather over-deliver than under-deliver. Plus remember this 8-10% IPC improvement number is just an average, meaning some tasks will benefit from architectural changes more than others.

Yes, it's possible. But it's also extremely rare for processors to have IPC improvements of 20-30%. Some have pointed out that this gap could be due to AVX-512.

So, whatever it is, that performance improvement will not be something we see in daily usage.