Question When will we see Ryzen 7X00G CPUs?

spicy_cat

Prominent
Oct 28, 2022
31
15
535
Will we see -G series AM5 socket chips the way we got 5600G/5700G AM4 chips? As in somewhat limited features like smaller cache and slower PCIe generation in exchange for a bigger iGPU?

Looking at AMD's Phoenix chip, it's limited to PCIe 4.0, only has 16MB of L3 cache compared to Raphael's 32MB, and data so far suggests that the 12-compute-unit RDNA iGPU is comparable to a Radeon 6400. It seems like all they would have to do is lift the power limit to 65 W and AMD would have a viable 7600G/7700G series CPU.

Or am I underestimating the increased manufacturing cost of a 4nm monolithic die compared to separate CPU and I/O chiplets?
 

kanewolf

Titan
Moderator
Will we see -G series AM5 socket chips the way we got 5600G/5700G AM4 chips? As in somewhat limited features like smaller cache and slower PCIe generation in exchange for a bigger iGPU?

Looking at AMD's Phoenix chip, it's limited to PCIe 4.0, only has 16MB of L3 cache compared to Raphael's 32MB, and data so far suggests that the 12-compute-unit RDNA iGPU is comparable to a Radeon 6400. It seems like all they would have to do is lift the power limit to 65 W and AMD would have a viable 7600G/7700G series CPU.

Or am I underestimating the increased manufacturing cost of a 4nm monolithic die compared to separate CPU and I/O chiplets?
Since AMD put integrated graphics on all the CPUs in the 7000 series, I don't know if there will be a lot of interest within AMD in creating more models.
 
Since AMD put integrated graphics on all the CPUs in the 7000 series,...
That's true, but as we've also been told, the iGPU in the 7000 series does not qualify it for "APU" status, as it's very weak. Its purpose is in the same vein as Intel's: a gap filler to get a display if the discrete GPU is missing for some reason. I suppose it might also be useful for assisting the CPU with certain tasks without affecting dGPU performance... video streaming whilst gaming, for instance?

So the question is better asked a bit bigger: is AMD going to introduce APU-level integrated graphics performance on the AM5 platform? And if so, what architecture(s) might it use for the CPU... and the iGPU?

I think the first question can be answered by looking at the economics: AM5 is too darned expensive. APUs are for budget builds, but even B650 motherboards with DDR5 memory kits are too expensive on their own to be considered "budget". THAT would be a good reason not to have much interest in it at this point.
 
Last edited:

kanewolf

Titan
Moderator
So the question can be asked a bit bigger: is AMD going to introduce APU-level integrated graphics performance on the AM5 platform?
If they believe there is sufficient demand. Do the integrated graphics in the 7000 line meet the "business" desktop requirements? If so, there may not be enough demand. The APUs were previously required for business-class devices. Is there some embedded demand that requires more graphics? Some car maker that wants to buy 500K chips for its entertainment system, for example.
This article from 6 months ago said there would be Zen 4 APUs -- https://www.tomshardware.com/news/amd-confirms-zen-4-dragon-range-phoenix-apus-for-2023
Has the plan changed in the last 6 months? I don't know.
 
If they believe there is sufficient demand. Do the integrated graphics in the 7000 line meet the "business" desktop requirements? If so, there may not be enough demand. The APUs were previously required for business-class devices. Is there some embedded demand that requires more graphics? Some car maker that wants to buy 500K chips for its entertainment system, for example.
This article from 6 months ago said there would be Zen 4 APUs -- https://www.tomshardware.com/news/amd-confirms-zen-4-dragon-range-phoenix-apus-for-2023
Has the plan changed in the last 6 months? I don't know.
I definitely see your point when looking at business desktop requirements: they don't need APU-class performance, and the current iGPU is quite adequate.

So the Tom's article suggests there is a plan for one. I'd have to think they'll need to revisit that for the reason I mentioned above: an APU is the centerpiece of a budget gaming build. That predominantly appeals to entry-level gaming markets, home consumers rather than business consumers, who are served well enough by the current iGPU. If they can't get motherboard and memory prices down to "budget" levels, that plan will have to change.
 

kanewolf

Titan
Moderator
I definitely see your point when looking at business desktop requirements: they don't need APU-class performance, and the current iGPU is quite adequate.

So the Tom's article suggests there is a plan for one. I'd have to think they'll need to revisit that for the reason I mentioned above: an APU is the centerpiece of a budget gaming build. That's predominantly entry-level gaming markets, home consumers, not business. If they can't get motherboard and memory prices down to "budget" levels, that plan will have to change.
As that article said, it would be a DDR5 device. There is a lot of complaining about the motherboard and RAM costs for AM5 builds. There may be no reason to release an APU that won't sell because of the ecosystem costs associated with it. AMD may say they will just keep selling the 5700Gs.
 
...AMD may say they will just keep selling the 5700Gs.
Or another APU series entirely on AM4, to keep it budget. Something that leverages what they can of Zen 4 and maybe RDNA 3 in the iGPU. But stays on DDR4... again, to keep it budget.

I could even see making it an AM4+ socket if for no other reason than to force people onto a new motherboard to side-step the horrors of another round of BIOS updates on an old board.

Either that or somebody please crack the code and get costs of AM5/DDR5 back in line.
 

spicy_cat

Prominent
Oct 28, 2022
31
15
535
Since AMD put integrated graphics on all the CPUs in the 7000 series, I don't know if there will be a lot of interest within AMD in creating more models.

The iGPU that Raphael uses only has 2 CUs (128 shaders) on RDNA 2. It's a lot wimpier than the 5600G/5700G's iGPU and is only meant for troubleshooting or serving as a basic display device.

Or another APU series entirely on AM4, to keep it budget. Something that leverages what they can of Zen 4 and maybe RDNA 3 in the iGPU. But stays on DDR4... again, to keep it budget.

iGPUs are really starved for memory bandwidth if they're going to become more powerful, and DDR5 prices are coming down anyway.
 

spicy_cat

Prominent
Oct 28, 2022
31
15
535
APUs are usually released with the previous Ryzen generation, which would probably be Zen 3+

That's where Rembrandt starts to look like an interesting possibility. It's got Zen 3+ cores on 6nm, 12 CUs (768 shading units), and DDR5 support. I suspect it was just a way for AMD to test the new memory/socket before Zen 4 cores were complete, the way Bristol Ridge was for DDR4/AM4.
 
Regarding the whole "APU" name, it seems AMD is abandoning the term, if Wikipedia is anything to go by:
Since the introduction of Zen-based processors, AMD renamed their APUs as the Ryzen with Radeon Graphics and Athlon with Radeon Graphics

And while AMD's CPUs with an iGPU were certainly better than Intel's offerings, I'm under no impression that AMD is trying to make something like, say, a PlayStation or Xbox processor for the general consumer market. If anything, including a GPU on the Zen 4 I/O die is AMD hoping to cut into the business computer market, where most of the demand is for a reasonably high-performance CPU (for all that Excel number crunching) without a high-end GPU; not needing a video card helps cut down on system builder costs, support, etc.
 

spicy_cat

Prominent
Oct 28, 2022
31
15
535
That all sounds about right. In the long term, not having to have a video card cuts down on both system cost and size, and that has to be making Nvidia nervous. Eventually there will be enough transistor budget to put a "good enough for non-enthusiasts" iGPU in a lot of CPUs. For laptops in particular, that would mean dedicated GPUs are only seen in a few percent of top-end machines. But that's probably at least 5-7 years away.
 
Regarding the whole "APU" name, it seems AMD is abandoning the term, if Wikipedia is anything to go by...
True enough... but typing out APU is a whole lot easier than typing out that marketing name when trying to differentiate the much better iGPUs from the basic ones included in Intel and 7000 series CPUs.

... Eventually there will be enough transistor budget to put an iGPU in a lot of CPUs....

I'm not sure there ever will be, if you can't already consider the 5700G "good enough for non-enthusiasts".

But then also, what's "good enough" changes as soon as a new, more intensive feature becomes mainstream. Something like ray tracing and the push to 8K resolutions right now. I wonder if anyone considers it feasible to implement ray tracing in an APU-class processor and expect decent performance. The die size of modern GPU silicon even at 7nm geometries is certainly a problem, but it's not so much the transistor budget as the power/thermal budget that's probably going to prove insurmountable for making that level of performance possible or practical.
 
Last edited:

spicy_cat

Prominent
Oct 28, 2022
31
15
535
The die size of modern GPU silicon even at 7nm geometries is certainly a problem, but it's not so much the transistor budget as the power/thermal budget that's probably going to prove insurmountable.

As far as I can tell, the primary reason the thermal budget has been pushed to the limit is that the companies are all trying to extract as much performance as they can per square mm of die area to get to the top of the benchmark charts. AMD, Nvidia, and Intel are all pushing clock speeds way past the point where they're most thermally efficient in order to gain a few percentage points of performance. You can see it in how little performance separates the 7000X and non-X CPUs; on GPUs you can bring core clocks down and drop power usage substantially with only a small drop in performance; and I shouldn't even need to bring up Intel, though that's somewhat forgivable given their node disadvantage.

The tl;dr is that if they optimized clock speeds for performance per watt, instead of pushing clocks to the maximum to squeeze every last bit of performance out of each transistor, there is a lot of efficiency to be gained.
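
To put rough numbers on that (a toy sketch, not measured data: the voltage points below are invented, and the model is just dynamic power scaling with frequency times voltage squared):

points = [(4.5, 1.05), (5.0, 1.20), (5.4, 1.35)]  # (GHz, volts): an invented V/F curve
base_f, base_v = points[0]
base_p = base_f * base_v ** 2                     # dynamic power ~ f * V^2
for f, v in points:
    p = f * v ** 2
    print(f"{f} GHz: {f / base_f - 1:+.0%} clocks, {p / base_p - 1:+.0%} dynamic power")
# The last ~20% of clocks costs ~98% more power in this toy model.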
 
As far as I can tell, the primary reason the thermal budget has been pushed to the limit is that the companies are all trying to extract as much performance as they can per square mm of die area to get to the top of the benchmark charts. AMD, Nvidia, and Intel are all pushing clock speeds way past the point where they're most thermally efficient in order to gain a few percentage points of performance. You can see it in how little performance separates the 7000X and non-X CPUs; on GPUs you can bring core clocks down and drop power usage substantially with only a small drop in performance; and I shouldn't even need to bring up Intel, though that's somewhat forgivable given their node disadvantage.

The tl;dr is that if they optimized clock speeds for performance per watt, instead of pushing clocks to the maximum to squeeze every last bit of performance out of each transistor, there is a lot of efficiency to be gained.
I totally agree they are pushing performance to the very edge of the thermal envelope, but that's not the point.

Adding more transistors only adds to the heat load, since it's the transistors that generate it. It's an elementary physics problem that has become ever more obvious since they passed 12nm: there's simply not enough surface area to transfer the heat out of the die. Shrinking further, to fit more transistors into a square mm of die space and add all the processing units needed for a full-capability GPU in a CPU package, just adds more heat that can't be removed.
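
Back-of-envelope, that surface-area squeeze looks like this (the die sizes below are invented round numbers, just to illustrate the same power going through less area):

# Same heat through a shrinking die: power density = watts / die area.
watts = 100.0  # invented, held constant across nodes
for node, die_mm2 in [("12nm-class", 200.0), ("7nm-class", 120.0), ("5nm-class", 80.0)]:
    print(f"{node}: {die_mm2:.0f} mm^2 -> {watts / die_mm2:.2f} W/mm^2")
# 0.50 -> 0.83 -> 1.25 W/mm^2: each mm^2 has to shed ever more heat.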
 
Last edited:

spicy_cat

Prominent
Oct 28, 2022
31
15
535
I totally agree they are pushing performance to the very edge of the thermal envelope, but that's not the point.

Adding more transistors only adds to the heat load, since it's the transistors that generate it. It's an elementary physics problem that has become ever more obvious since they passed 12nm: there's simply not enough surface area to transfer the heat out of the die. Shrinking further, to fit more transistors into a square mm of die space, just adds more heat that can't be removed. To get the performance, they'll also add design features that make the CPU tolerant of ever-higher operating temperatures and, in turn, push it to those higher limits while managing it closely with very clever boost algorithms and packaging, but each of those has to have its limits too.

It's a problem for CPU performance, sure. But how much of what we do is CPU limited right now? The only thing I do that I want more CPU performance for is video conversion. For the other 98% of what I use a computer for, I feel like my 3900 (non-X) is overkill.

For an iGPU on AM5, I think you would run into memory bandwidth limits before thermal limits unless you gave it a wide memory bus like Apple's M1. A 12 CU GPU, even clocked to the moon like the RX 6400, still has only a 53 W TDP. A 24 CU GPU at half the clock speed would have the same performance with much lower power usage. And we already know how to cool 170 W CPUs.
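
The wide-and-slow arithmetic, sketched with invented figures (throughput scales with CUs times clock; dynamic power scales roughly with CUs times clock times voltage squared, and halving the clock lets voltage drop too):

# Two hypothetical iGPU configs with equal throughput (all numbers invented).
narrow = {"cus": 12, "ghz": 2.30, "volts": 1.10}  # small, clocked to the moon
wide   = {"cus": 24, "ghz": 1.15, "volts": 0.80}  # double width, half clock, lower V

def throughput(g):
    return g["cus"] * g["ghz"]                    # ~ shader work per second

def power(g):
    return g["cus"] * g["ghz"] * g["volts"] ** 2  # ~ dynamic power

for name, g in [("12 CU fast", narrow), ("24 CU slow", wide)]:
    print(f"{name}: throughput {throughput(g):.1f}, relative power {power(g):.1f}")
# Equal throughput (27.6), but the wide config draws ~47% less power here.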
 
It's a problem for CPU performance, sure....
Probably less talked about than with CPUs, but I have to think it's a problem with GPUs too. Just look at the ever-increasing operating temperatures of each of the latest generations of GPUs. We used to think double-wide cards were a pain, but now triple-wide heatsinks are fairly common, and lower in the stack too. As with CPUs, this has mainly come about since they've gone below 12nm.

I think CPU performance really is less important for most gaming, but I can see where the number of cores can be a big help with thermal leveling. The scheduler can move a single heavy thread to another core that shares resources, if one is available, reducing thermal buildup in each core. If there are a number of threads, say four or six, needing this treatment, even if they're not game threads, then it's obvious a 12 or even 16 core processor quickly makes itself useful for gaming.

In a light workload like gaming, moving the thread back and forth between just two cores effectively doubles the die surface area available to the heatsink for shedding the heat that thread generates. This strategy can keep the CPU cooler and able to keep boosting at maximum clocks in the middle of heavy gameplay. Obviously it won't work so well with a heavy workload that loads all the CPU cores equally, where the only alternative is to lower clocks down the V/F curve to keep temperature under control.
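
Here's a toy model of that leveling effect (all constants invented: a core heats exponentially toward a load temperature while active and cools toward idle otherwise):

T_IDLE, T_LOAD, BLEND = 40.0, 95.0, 0.02  # degC targets and per-step blend factor

def step(temp, active):
    target = T_LOAD if active else T_IDLE
    return temp + BLEND * (target - temp)  # move a little toward the target

pinned = hop_a = hop_b = T_IDLE
for t in range(2000):
    pinned = step(pinned, True)            # thread never leaves this core
    hop_a = step(hop_a, t % 2 == 0)        # thread ping-pongs between
    hop_b = step(hop_b, t % 2 == 1)        # two resource-sharing cores

print(f"pinned: {pinned:.0f} C, hopping: {max(hop_a, hop_b):.0f} C")
# The pinned core converges on ~95 C; the hopping pair settles near ~68 C.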
 
Last edited:

spicy_cat

Prominent
Oct 28, 2022
31
15
535
Probably less talked about than with CPUs, but I have to think it's a problem with GPUs too. Just look at the ever-increasing operating temperatures of each of the latest generations of GPUs. We used to think double-wide cards were a pain, but now triple-wide heatsinks are fairly common, and lower in the stack too. As with CPUs, this has mainly come about since they've gone below 12nm.

I think CPU performance really is less important for most gaming, but I can see where the number of cores can be a big help with thermal leveling. The scheduler can move a single heavy thread to another core that shares resources, if one is available, reducing thermal buildup in each core. If there are a number of threads, say four or six, needing this treatment, even if they're not game threads, then it's obvious a 12 or even 16 core processor quickly makes itself useful for gaming.

In a light workload like gaming, moving the thread back and forth between just two cores effectively doubles the die surface area available to the heatsink for shedding the heat that thread generates. This strategy can keep the CPU cooler and able to keep boosting at maximum clocks in the middle of heavy gameplay. Obviously it won't work so well with a heavy workload that loads all the CPU cores equally, where the only alternative is to lower clocks down the V/F curve to keep temperature under control.

We're at the point where even the Ryzen 7600 delivers 100 fps in the bottom 1% across nearly all games. Restrict the CPU to 45 or probably even 35 watts and it's still going to be perfectly fine for gaming. You're going to be GPU limited with a discrete GPU. You're absolutely guaranteed to be GPU limited with even the best iGPU you could build with the available memory bandwidth.
 
... You're absolutely guaranteed to be GPU limited with even the best iGPU you could build with the available memory bandwidth.
I wouldn't doubt it. But it does call into question whether AMD really should do it... create a true APU-class processor using Zen 4. It doesn't appear to be a good use of silicon when 7nm and 5nm wafer allotments are going to remain a scarce commodity into the future.

Hypothetically speaking, let's say it can be done with a 4-core CPU, assuming that's all that's required to push performance from the best iGPU they could put on it, even with DDR5. Going to 6 or 8 cores means a larger die, and since die size is the key cost driver, the smaller die would be an attractive pairing for cost, when it works.

But that makes it a hard sell, since with only 4 cores there's no point in dropping in a decent dGPU later without also replacing the processor, and that upgrade path is one of the main selling points of an APU as an entry-level solution. Any sort of workstation-class processing will be ludicrous on 4 cores in an age of 12 and 16 core desktop CPUs, where 8 cores is considered entry level. That destroys the other selling point, so it's back to a cut-down 8-core part, maybe a 6-core too, compromising on cost for an "entry level" gaming processor that's obviously gimped.

I can't imagine a gimped 8-core part being well received when it has to be paired with a $300 motherboard and $200 in memory. I don't see the point when the whole AM5 platform right now is premium, top of the line. 7000 series CPUs seem to be fine with the current iGPU, which serves the purpose for business markets and for people who like the comfort of a backup GPU. Maybe when we start seeing $75 AM5 motherboards that will change.

But what about AMD's Infinity Cache? Wouldn't that be an effective memory bandwidth multiplier if implemented within the package? I'm sure there are limits to it, but it might be useful in pushing Ryzen with Radeon Graphics processors (aka APUs) forward into Zen 4 on AM5 while justifying a higher price.
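
To illustrate the multiplier idea with made-up numbers: if a fraction of the iGPU's memory traffic hits an on-package cache, DRAM only has to serve the misses, so sustainable demand scales roughly as 1 / (1 - hit rate). A sketch (the hit rates here are invented):

def effective_bw(dram_gbs, hit_rate):
    # Only misses reach DRAM, so total traffic can be dram_bw / (1 - hit_rate).
    return dram_gbs / (1.0 - hit_rate)

dram_bw = 83.2  # dual-channel DDR5-5200 in GB/s
for h in (0.0, 0.30, 0.50):
    print(f"{h:.0%} hit rate -> ~{effective_bw(dram_bw, h):.0f} GB/s effective")
# 0% -> ~83, 30% -> ~119, 50% -> ~166 GB/s in this idealized model.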
 
Last edited:

spicy_cat

Prominent
Oct 28, 2022
31
15
535
But what about AMD's Infinity Cache? Wouldn't that be an effective memory bandwidth multiplier if implemented within the package? I'm sure there are limits to it, but it might be useful in pushing Ryzen with Radeon Graphics processors (aka APUs) forward into Zen 4 on AM5 while justifying a higher price.

It will stretch the bandwidth further, but I think there are limits to how much it can help. AM5 on dual-channel DDR5-5200 has something like 83.2 GB/s of memory bandwidth, while even a Radeon 6400 has 128.0 GB/s.
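
For reference, the arithmetic behind those two figures (assuming a 64-bit bus per DDR5 channel, and the 6400's 64-bit GDDR6 bus at 16 Gbps per pin):

# Peak theoretical bandwidth = transfer rate x bus width, per interface.
am5_bw = 5200e6 * 8 * 2 / 1e9     # DDR5-5200, 8 bytes per channel, 2 channels
rx6400_bw = 16e9 * 64 / 8 / 1e9   # 16 Gbps per pin x 64-bit bus / 8 bits per byte
print(am5_bw, rx6400_bw)          # -> 83.2 128.0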

In a couple of generations, I wonder if it would be possible to create an X3D-stacked cache that's accessible to both the CPU as L3 and the iGPU as Infinity Cache.