News AMD’s Ryzen AI 300-series APUs could offer graphics performance on a par with low-end discrete GPUs


Deleted member 2731765

Guest
You missed an important point which needs to be mentioned here.

The MSI laptops showcased during Computex were all sporting early ES chips, and we also don't know anything about the TDP or the test environment.

And since these processors were not final designs and were using early engineering silicon of the AMD Ryzen AI 300 chips, these performance-uplift claims are moot, at least for now.

Final performance should be different. Most importantly, the Chinese leaker didn't state which specific SKU was tested, since two processors fall under the Ryzen AI 300 series: the Ryzen AI 9 HX 370 and the Ryzen AI 9 365.

But anyway, a preliminary score of over 3,600 points should put the 890M or 880M iGPU within striking distance of the GeForce RTX 2050 (Laptop) GPU as well.

AMD's flagship Ryzen AI 370 AI HX 370 processor sports 12 general-purpose Zen 5-based CPU cores....

Nice double typo. Correct it please. ;)
 

Deleted member 2731765

Guest
"Assuming that AMD's latest integrated GPU — the Radeon 890M — indeed scores around 3,600 points in 3DMark Time Spy, its performance is more or less in line with the performance of Intel's Arc A370M, GeForce MX 570 for Laptops, or GeForce GTX 1650 for Laptops - or thereabouts."

First off, there is no proof we are looking at the 890M here, as this could just as well be the 880M iGPU. So it would be unwise to claim that this was indeed the 890M SKU. It might well be, but we don't know for sure.

The leaker didn't mention which specific SKU was tested, and two processors fall under the Ryzen AI 300 series: the Ryzen AI 9 HX 370 and the Ryzen AI 9 365.
 

KnightShadey

Reputable
Sep 16, 2020
147
88
4,670
But anyway, a preliminary score of over 3,600 points should put the 890M or 880M iGPU within striking distance of the GeForce RTX 2050 (Laptop) GPU as well.
Since we're talking about ES/non-standard silicon... it should be noted that 3,600 would put it above the normal RTX 2050 mobile, which usually lands around 3,300 points according to NotebookCheck's list; it would still trail the OC'd version (1,700 MHz, well above the usual boost clock of 1,477 MHz).

https://www.notebookcheck.net/NVIDIA-GeForce-RTX-2050-Mobile-GPU-Benchmarks-and-Specs.586930.0.html

 

oofdragon

Distinguished
Oct 14, 2017
327
292
19,060
You know, a laptop RTX 2050 can run any modern game, even at good graphics settings, if you don't mind playing at around 30 fps; I certainly don't. This is what fires me up when I look at a potato Switch 2 rumour: the tech is already here to make discrete GPUs "obsolete". If you can play a game like Horizon Forbidden West or A Plague Tale: Requiem at around 35 fps on low settings without FSR 3, what game couldn't be played, with good graphics, on an optimized gaming OS like a Nintendo console with an iGPU like this? It would literally be a portable PS5! I get that the cost would be higher, but people do buy PS5s at $500, $550, even up to $1,000 if you factor in PSVR2. Then there's the fact that the Switch 2 is launching mid-cycle for a projected 7-year duty; if it really arrives as a lowly portable PS4, that's just too much wasted potential. I for sure won't be buying anything Nintendo ever again until they catch up, as I didn't buy into the Switch when they could easily have made it a 1 TFLOPS mobile console back then.
 

oofdragon

Distinguished
Oct 14, 2017
327
292
19,060
You missed an important point which needs to be mentioned here.

The MSI laptops showcased during Computex were all sporting early ES chips, and we also don't know anything about the TDP or the test environment.

And since these processors were not final designs and were using early engineering silicon of the AMD Ryzen AI 300 chips, these performance-uplift claims are moot, at least for now.

Final performance should be different. Most importantly, the Chinese leaker didn't state which specific SKU was tested, since two processors fall under the Ryzen AI 300 series: the Ryzen AI 9 HX 370 and the Ryzen AI 9 365.

But anyway, a preliminary score of over 3,600 points should put the 890M or 880M iGPU within striking distance of the GeForce RTX 2050 (Laptop) GPU as well.



Nice double typo. Correct it please. ;)
Can you explain whether it's possible for them to make a 7800X3D-class CPU with an iGPU like this? If it is possible, why won't they?
 

usertests

Distinguished
Mar 8, 2013
929
839
19,760
The MSI laptops showcased during Computex were all sporting early ES chips, and we also don't know anything about the TDP or the test environment.
No TDP for this one, but I see "Strix Point (RDNA 3+ 12 CU @22W)" at 3150 in this Wccf article (carried over from this one, which puts the TDP at 22-24W and notes it is a leaked number from @Xinoasassin). If the score is accurate, that's 12.5% faster than "Hawk Point (RDNA 3 12 CU @ 55W)".

Beating the previous generation at a reduced TDP and same CU count bodes well for RDNA3.5.
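As a rough sanity check of those figures (a minimal sketch; the implied Hawk Point score of ~2800 is derived from the claimed uplift, not an official number):

```python
# Back-of-the-envelope check on the leaked figures quoted above.
strix_score, strix_tdp_w = 3150, 22   # "Strix Point (RDNA 3+ 12 CU @22W)"
hawk_tdp_w = 55                       # "Hawk Point (RDNA 3 12 CU @ 55W)"
claimed_uplift = 0.125                # "12.5% faster"

hawk_score = strix_score / (1 + claimed_uplift)   # implied ~2800 points
print(f"Implied Hawk Point score: {hawk_score:.0f}")
print(f"Points per watt: Strix {strix_score / strix_tdp_w:.1f} "
      f"vs Hawk {hawk_score / hawk_tdp_w:.1f}")
# -> roughly 2.8x the points per watt, if the leaked numbers hold up
```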
 

usertests

Distinguished
Mar 8, 2013
929
839
19,760
Can you explain whether it's possible for them to make a 7800X3D-class CPU with an iGPU like this? If it is possible, why won't they?
I don't know how much iGPU AMD would be capable of putting in the desktop CPUs, which use up to two core chiplets and then the 6nm I/O chiplet.

Strix Halo would significantly beef up the iGPU and, very importantly, the memory controller, but it won't be on the AM5 socket, and we don't know about any X3D cache, only Infinity Cache for the graphics. If you had to pick one, L3 cache for powerful integrated graphics is probably more important than L3 cache for the CPU cores. Unified L3 cache for CPU+GPU could be nice, but AMD abandoned that years ago.

Mainstream APUs like Strix Point do make their way onto the desktop sockets, but AMD has never given them much cache, and it might be difficult to stack a chiplet on top.

AMD will always make decisions we don't like for practical or dumb reasons, and we have to eat it or move on. Maybe Intel should start putting Adamantine cache on things.
 

oofdragon

Distinguished
Oct 14, 2017
327
292
19,060
I don't know how much iGPU AMD would be capable of putting in the desktop CPUs, which use up to two core chiplets and then the 6nm I/O chiplet.

Strix Halo would significantly beef up the iGPU and, very importantly, the memory controller, but it won't be on the AM5 socket, and we don't know about any X3D cache, only Infinity Cache for the graphics. If you had to pick one, L3 cache for powerful integrated graphics is probably more important than L3 cache for the CPU cores. Unified L3 cache for CPU+GPU could be nice, but AMD abandoned that years ago.

Mainstream APUs like Strix Point do make their way onto the desktop sockets, but AMD has never given them much cache, and it might be difficult to stack a chiplet on top.

AMD will always make decisions we don't like for practical or dumb reasons, and we have to eat it or move on. Maybe Intel should start putting Adamantine cache on things.
Well... since a 9800X is almost a 7800X3D gaming-wise, it doesn't even need to be an X3D processor. I get that for such a powerful CPU someone will end up with a powerful GPU as well, but if it's possible by today's standards to make an iGPU with 7600 XT-like performance, why not? Wouldn't that be too badass to pass up, even if you want the high-end discrete option too? Back when CrossFire was a thing you could pair the discrete GPU with the integrated one, if I remember correctly, so imagine how cool it would be if that came back :) Can't beat the RTX 5090? No problem, just CF an 8900XTX.

Btw why did they even stop with CF???
 
iGPUs are catching up with discrete graphics cards pretty well thanks to new and refined GPU architectures.

Ehh, not really the architectures; it's that DDR5 was a large increase in memory bandwidth over previous generations.

Vector processing, which is what graphics processors do, is capable of ridiculous amounts of parallelism and consequently needs equally ridiculous amounts of memory bandwidth. Command latency isn't very important when you are processing thousands of data elements all at once and there is little to no dependency between them. Scalar processing OTOH, which is what general-purpose CPUs do, processes ridiculously long sequences of data where the next instruction frequently depends on the outcome of previous instructions, and thus command latency is very important. Each type of processing prefers a different type of memory: GPUs want fat but sluggish memory pipes, while CPUs want thin but responsive memory. iGPUs have to share the same memory architecture as the CPU and are thus frequently bandwidth-starved. DDR5 is finally providing enough bandwidth that iGPUs aren't starved.


Traditionally I run an APU inside my HTPC and get to witness this all first-hand. From the 2400G to the 5600G and recently the 8600G, there was a massive boost every time memory bandwidth increased.
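To put rough numbers on that bandwidth jump (a minimal sketch; the dual-channel module speeds below are typical desktop examples, not figures from this thread):

```python
# Theoretical peak bandwidth: transfer rate (MT/s) x 8 bytes per
# 64-bit channel x number of channels. Real-world throughput is lower.
def peak_bandwidth_gbs(mt_per_s: int, channels: int = 2) -> float:
    return mt_per_s * 8 * channels / 1000

for module, rate in [("DDR4-3200", 3200), ("DDR5-5600", 5600), ("DDR5-6400", 6400)]:
    print(f"{module}: {peak_bandwidth_gbs(rate):.1f} GB/s")
# DDR4-3200: 51.2 GB/s, DDR5-5600: 89.6 GB/s, DDR5-6400: 102.4 GB/s
```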
 

usertests

Distinguished
Mar 8, 2013
929
839
19,760
Well... since a 9800X is almost a 7800X3D gaming-wise, it doesn't even need to be an X3D processor. I get that for such a powerful CPU someone will end up with a powerful GPU as well, but if it's possible by today's standards to make an iGPU with 7600 XT-like performance, why not? Wouldn't that be too badass to pass up, even if you want the high-end discrete option too? Back when CrossFire was a thing you could pair the discrete GPU with the integrated one, if I remember correctly, so imagine how cool it would be if that came back :) Can't beat the RTX 5090? No problem, just CF an 8900XTX.

Btw why did they even stop with CF???
Crossfire has issues, and the industry has moved on. Kind of, because when multi-chiplet GPUs reach consumers there will be some internal magic to get that to work.

You want the good iGPU, go for Strix Halo in a mini PC. That should have up to 16x Zen 5 cores, up to 40 RDNA3.5 CUs, a 256-bit memory controller for more memory bandwidth, and 32 MiB of Infinity Cache to further stretch that bandwidth.
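For a sense of what that 256-bit controller buys you, here's a rough sketch; the LPDDR5X speeds are my assumptions for illustration, and 128-bit is simply the typical width of current mainstream APUs — only the 256-bit bus width comes from the post above:

```python
# Rough peak-bandwidth comparison. The LPDDR5X speeds are assumed for
# illustration; only the 256-bit bus width comes from the post above.
def peak_gbs(mt_per_s: int, bus_bits: int) -> float:
    return mt_per_s * (bus_bits // 8) / 1000

print(f"128-bit LPDDR5X-7500 (typical mainstream APU): {peak_gbs(7500, 128):.0f} GB/s")
print(f"256-bit LPDDR5X-8000 (Strix Halo, assumed):    {peak_gbs(8000, 256):.0f} GB/s")
# -> a bit over 2x the raw bandwidth, before the 32 MiB Infinity Cache helps
```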

Such a system will almost certainly be more expensive than an equivalent-performance upgradable system with a socketable CPU and a discrete GPU, which will narrow the appeal. But better things are coming, and you can vote with your wallet.
 

KnightShadey

Reputable
Sep 16, 2020
147
88
4,670
Well... since a 9800X is almost a 7800X3D gaming-wise, it doesn't even need to be an X3D processor. I get that for such a powerful CPU someone will end up with a powerful GPU as well, but if it's possible by today's standards to make an iGPU with 7600 XT-like performance, why not?

Probably, if they were to add anything, it would be more AI/NPU rather than iGPU, and they aren't doing that initially either. Since both have better add-in options, a good iGPU kind of appeals to only a small subset of buyers in that market. 2 CUs for the desktop UI makes sense, especially if the add-in card is doing GPGPU/AI workloads.
Beyond that, the majority of people buying in that segment would likely prefer to save a buck or two, or have higher stable clocks, if they had the choice... and also not have it impact memory resources.
Btw why did they even stop with CF???
Lots of things: the cost for AIBs to build both cards and boards on already expensive hardware, and bandwidth limitations both on the PCIe side, where SSDs/NVMe started ramping up fast, and on the link side as VRAM sizes increased. Plus it was a pain at the time to support a fragmented subset of CF/SLI owners, just before Mantle optimizations got rolled into DX12 & OpenGL/Vulkan, which better address resource management and pipeline-dependent micro-stuttering & lag.

Ironically, we now have interfaces that are fast enough, and VRAM that is large/cheap enough (plus, toward the end you could finally combine memory rather than just duplicate VRAM resources), to mitigate some of the physical weaknesses.
It's realistically too late for most, but it could come back if multi-GPU setups for graphics+AI create multi-card interest similar to that of the PhysX generation that came right before Xfire/SLI. If game AI can be improved dramatically by adding in an RTX 4070 / RX 7700, then you might see some demand, especially since DX12/Vulkan multi-GPU support is specifically optimized for heterogeneous GPU configurations.

Likely it becomes of interest and gets discussed again when the new chiplet solution comes out, as UT mentioned above.

Although I doubt it will use Xfire so much as package interconnect speed & memory access, plus front-end management, to make the dies look transparently monolithic when necessary (and also separate when desired, going by what's in the patent).
 
I wonder if using only Zen 4c cores would reduce the heat output and power draw of these CPUs? Handhelds would be fine with 6 cores; they don't need the 8 + 4 config for the CPU.

Might be a good choice for the Z1 Extreme successor and some lightweight laptops.
 

Deleted member 2731765

Guest
I won't be upgrading to any discrete GPU anytime soon.

I'd go directly for an iGPU setup like the upcoming Strix Halo APU lineup, or Intel's next-gen Panther Lake series, or whichever lineup offers higher iGPU performance.

Given how powerful integrated graphics has become in recent years, and the type of games I usually play, a powerful, high CU/Xe-core-count APU should be more than enough for my casual gaming needs.

Currently rocking an RTX 4060 8GB GPU, so this will keep me busy for the next few years or so, unless of course it dies (*touches wood*).

The AMD Radeon 780M iGPU can already deliver roughly 60 FPS at 1080p in games such as Cyberpunk 2077. This is a huge leap for integrated graphics, as they are getting close to the point where they can replace entry-level discrete solutions.

There's a reason why NVIDIA isn't expected to invest any more in its MX line of GPUs: both AMD & Intel are prepping some powerful integrated chips for future laptop designs.

780M iGPU results:
  • Vanilla Skyrim (1080P / High) - 120 FPS Average
  • Crysis Remastered (1080P / High) - 71 FPS Average
  • World of Warcraft (1080P / Max) - 98 FPS Average
  • Genshin Impact (1080P / High) - 60 FPS Average
  • Spiderman Miles Morales (1080P / Low) - 74 FPS Average
  • Dirt 5 (1080P / Low) - 71 FPS Average
  • Ghostwire Tokyo (1080P / Low) - 58 FPS Average
  • Borderlands 3 (1080P / Med) - 73 FPS Average
  • God of War (1080P / Original / FSR Balanced) - 68 FPS Average
  • God of War (1080P / Original / Native Res) - 58 FPS Average
  • Mortal Kombat 11 (1080P / High) - 60 FPS Average
  • Red Dead Redemption 2 (1080P / Low / FSR Performance) - 71 FPS Average
The previous gaming benchmark results from ETA Prime's last video:
  • CSGO (1080P / High) - 138 FPS (Average)
  • GTA V (1080P / Very High) - 81 FPS (Average)
  • Forza Horizon 5 (1080P / High) - 86 FPS (Average)
  • Fortnite (1080P / Medium) - 78 FPS (Average)
  • Doom Eternal (1080P / Medium) - 83 FPS (Average)
  • Horizon Zero Dawn (1080P / Perf) - 69 FPS (Average)
  • COD: MW2 (1080P / Recommended / FSR 1) - 106 FPS (Average)
  • Cyberpunk 2077 (1080P / Med+Low) - 65 FPS (Average)

View: https://www.youtube.com/watch?v=KKaoxe5dd3M
 

Deleted member 2731765

Guest
---------OFF TOPIC----------------

It appears preliminary or placeholder prices have been spotted for the upcoming Ryzen 9000-series Granite Ridge Zen 5 CPUs. Whether or not these are just placeholders, the flagship processor seems to have a lower price tag.

Canada Computers has listed the AMD Ryzen 9 9950X chip for CAD 839.00, which converts to about $610 US.

That's $90 US lower than the MSRP of the Ryzen 9 7950X, which cost $699 US at launch and was later replaced by the Ryzen 9 7950X3D at the same price. The listing has since been taken down, though.
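A quick sanity check of that conversion (the ~0.727 USD/CAD exchange rate is an assumption on my part, not a figure from the listing):

```python
# Hypothetical sanity check; the ~0.727 USD/CAD exchange rate is an
# assumption on my part, not a figure from the listing.
cad_price = 839.00
usd_per_cad = 0.727
usd_price = cad_price * usd_per_cad
print(f"~${usd_price:.0f} US")                                   # ~$610 US
print(f"${699 - round(usd_price)} below the 7950X launch MSRP")  # ~$89
```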

AMD-Ryzen-9-9950X-CPU-Price.png



Philippines retailer Bermorzone has also listed the preliminary prices:

Ryzen-9-9950X.png
 

usertests

Distinguished
Mar 8, 2013
929
839
19,760
I wonder if using only Zen 4c cores would reduce the heat output and power draw of these CPUs? Handhelds would be fine with 6 cores; they don't need the 8 + 4 config for the CPU.

Might be a good choice for the Z1 Extreme successor and some lightweight laptops.
Zen 5 looks much more efficient than Zen 4, so there's no use sticking with the old cores.

For handhelds, Kraken sounds good. That's 4x Zen 5 + 4x Zen 5c, similar to Lunar Lake.
So, roughly 250 for the 9600X, 350 for the 9700X, 550 for the 9900X, and 600 for the 9950X?

The Ryzen 9 pricing seems skewed. The R5 and R7 seem to follow previous launch prices...
7600X was $299, 7700X was $399, 7900X was $549, 7950X was $699. I think your 9900X price needs to go to $500 and then it's fine.