News AMD Ryzen 4000 iGPU Almost Catches Nvidia's GeForce MX250

InvalidError

Titan
Moderator
Will be interesting to see what it can do with LPDDR4X-4266. That would give the APUs another 33% increase in RAM bandwidth.
The price premium of 4266 vs 3200 would pay for most of the MX250, so I doubt many manufacturers (if any) will bother exploring that option.

Unless AMD decides to introduce IGPs with HBM first, the next major IGP performance bump will likely come from DDR5 bringing 4000+ MT/s down to the entry-level.
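The ~33% figure from the quoted post falls straight out of the transfer rates. A quick sketch of the arithmetic, assuming the usual dual-channel 128-bit (16-byte) memory bus on these APUs (actual laptop configurations can differ):

```python
# Peak theoretical bandwidth for an assumed dual-channel, 128-bit (16-byte) bus.
BUS_WIDTH_BYTES = 16

def peak_bandwidth_gbs(transfer_rate_mts: int) -> float:
    """Transfer rate in MT/s times bytes per transfer, in decimal GB/s."""
    return transfer_rate_mts * BUS_WIDTH_BYTES / 1000

bw_3200 = peak_bandwidth_gbs(3200)  # 51.2 GB/s
bw_4266 = peak_bandwidth_gbs(4266)  # ~68.3 GB/s
uplift = bw_4266 / bw_3200 - 1      # ~0.33, i.e. the ~33% mentioned above
```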
 

alextheblue

Distinguished
Apr 3, 2001
Will be interesting to see what it can do with LPDDR4X-4266. That would give the APUs another 33% increase in RAM bandwidth.
That was my immediate thought as well. My second thought was that I wish they had a 10+ CU design, but I understand their desire to spend their transistor (and power) budget heavily on the CPU side this time, since they have a competitive and efficient core. That being said, the 8 CU models are performing quite well.
The price premium of 4266 vs 3200 would pay for most of the MX250, so I doubt many manufacturers(if any) will bother exploring that option.
Not that I think anyone here really knows exactly how much more an OEM would pay for LPDDR4-4266 vs 3200, nor what the same OEM would pay for an MX250 (preferably a 4GB version, as 2GB can hinder some games even at lower settings), but that aside:

A high-end ultrathin design might very well favor the space, thermal, and power savings. There are already Intel-powered designs with LPDDR4, it's not exactly breaking new ground... but the high-end Renoir chips can make better use of the extra bandwidth.
 
Apr 18, 2020
That was my immediate thought as well. My second thought was that I wish they had a 10+ CU design, but I understand their desire to spend their transistor (and power) budget heavily on the CPU side this time, since they have a competitive and efficient core. That being said, the 8 CU models are performing quite well.
Not that I think anyone here really knows exactly how much more an OEM would pay for LPDDR4-4266 vs 3200, nor what the same OEM would pay for an MX250 (preferably a 4GB version, as 2GB can hinder some games even at lower settings), but that aside:

A high-end ultrathin design might very well favor the space, thermal, and power savings. There are already Intel-powered designs with LPDDR4, it's not exactly breaking new ground... but the high-end Renoir chips can make better use of the extra bandwidth.
I really don't think the extra bandwidth would help it much, nor would extra CUs. It looks to me to be all dependent on the power target. Nothing was written about this in the review, but it seems to be set to 15 W, versus what I believe is a 15 W + 15 W Intel + Nvidia combo. (Keep in mind that this is TDP and not representative of actual power usage, but it gives us a hint.)

Several people have tested the iGPU on the 4900HS and 4800HS, which is only a 7 CU Vega chip, on 3200 MHz RAM, and those are running away from the MX250 and even the MX350, prompting Nvidia to rush out the MX450 (Turing-based).

Findings from lowering the TDP on the 4800HS, looking at the iGPU clocks (keep in mind that this also depends on how stressed the CPU is and how resources are allocated):
At 15 W it runs at about 950-1000 MHz
At 20 W it runs at about 1250-1300 MHz
At 25 W it runs at about 1400-1500 MHz
At 35 W it runs at a locked 1600 MHz

Based on this, the chip should see roughly a 50% increase in clock speed when using the full 25 W configurable TDP on the 4800U (assuming it currently runs at 15 W).

This power-scaling data is based on this YouTube channel:
https://www.youtube.com/channel/UCV_FbbkkWz4KHNzMlmYO04A
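Taking the midpoints of the clock ranges listed above, the jump from 15 W to 25 W works out to just under 50%, consistent with the estimate in the post. A small sketch of that arithmetic (the wattage/clock pairs are the rough readings quoted above, not official figures):

```python
# Reported iGPU clock ranges (MHz) per power limit, from the post above.
clocks_mhz = {
    15: (950, 1000),
    20: (1250, 1300),
    25: (1400, 1500),
    35: (1600, 1600),  # locked at 1600 MHz
}

def midpoint(lo_hi):
    """Midpoint of a (low, high) clock range."""
    lo, hi = lo_hi
    return (lo + hi) / 2

# Relative clock gain going from the 15 W limit to the 25 W limit.
gain_15_to_25 = midpoint(clocks_mhz[25]) / midpoint(clocks_mhz[15]) - 1
# ~0.49, i.e. "roughly 50%"
```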
 

ron baker

Distinguished
Mar 13, 2013
Would love to get something like that in a NUC-type system, either Intel or AMD... basically a laptop without a screen or keyboard. But cost is a killer; Intel's Ghost Canyon is too rich. ASRock DeskMini?
 

alextheblue

Distinguished
Apr 3, 2001
I really don't think the extra bandwidth would help it much, or even much by extra cu. It looks to me to be all dependant on the power target.
Even their older APUs (with slower graphics) continued to scale with memory speed past 3200. The tuned CUs in Renoir will also scale, even at 15W. The question is how much. The actual frequencies will also vary by cooling solution, load, and the load on the CPU cores. I would agree that if you're looking at a top-line "U" series chip as an alternative to a 15W+dGPU combo, 25W sounds pretty reasonable. OEMs don't usually look at it that way, but I think it would be fantastic to see these with LPDDR4 and a 25W TDP.
 
Apr 18, 2020
Even their older APUs (with slower graphics) continued to scale with memory speed past 3200. The tuned CUs in Renoir will also scale, even at 15W. The question is how much. The actual frequencies will also vary by cooling solution, load, and the load on the CPU cores. I would agree that if you're looking at a top-line "U" series chip as an alternative to a 15W+dGPU combo, 25W sounds pretty reasonable. OEMs don't usually look at it that way, but I think it would be fantastic to see these with LPDDR4 and a 25W TDP.
Don't get me wrong, I in no way intended to make it seem like the memory was redundant. But if it's running at only 1000 MHz, it's much less of an issue. My point was that the only reason the chip did not reach the MX250 and above is the power limit.

One of the main reasons this generation of Vega is faster than the last is that it is much better at utilizing the available bandwidth, making it less of a bottleneck. And if you scale up beyond 8 CUs (25 W or more), more bandwidth would definitely be needed.

Of course, this depends on what sort of textures you are loading and at what resolution. In some cases it might not mean anything, in others a lot.
 
