News: Meet The Ryzen 9 4900H & Ryzen 7 4800H: The Rumored 8C/16T APUs

bit_user

Polypheme
Ambassador
It would be cool if it had an integrated Vega 56 or 64. I'm dreaming, but it's not a crime, lol
The thing to keep in mind is cooling. Those GPUs use 200-300 W by themselves. Add the CPU cores and you're well into water-cooling territory, which would limit the market considerably. People who could afford such a setup could also afford to just use a dGPU, which would still probably perform better. So, there's basically no market left.

The other issue is potentially memory bandwidth (unless they use HBM2). DDR4 or even DDR5 wouldn't have nearly enough bandwidth for graphics at that level. But HBM2 would probably push the chip into a much bigger socket (if the mere presence of the GPU didn't, already). So, it'd be even further out of the mainstream.

Nothing wrong with dreaming, but it's also good to be realistic about the prospects of it coming true.
 

PCMDDOCTORS

Distinguished
bit_user said:
The thing to keep in mind is cooling. [...]
Well, what GPU do you think they could cram into a CPU without it being a problem, going beyond Vega 11?
 
On the integrated side, those would be Vega 5.6 or 6.4, but still...
An 8C/16T APU is the dream of any SFF build... with a full 300 W Vega integrated, it would make a perfect hovercraft!

As mentioned:
I expect a 6C/12T part with 10-11 CUs @ 45 W. (A 15 W, 8 CU part would be perfect, but let's not be greedy.) I want it.
 

bit_user

Polypheme
Ambassador
Well, what GPU do you think they could cram into a CPU without it being a problem, going beyond Vega 11?
Let's look at Intel, for a moment. On their 10 nm process (which is pretty equivalent to the 7 nm TSMC process that AMD is using), the Gen11 iGPUs max out at 64 EUs. Each EU has 8 of what AMD terms "shaders". So, that works out to roughly what AMD would call a 512-shader iGPU.

A Vega CU (which I guess they call an NCU, because it was "new", at one point) has 64 shaders. So, an 11-CU Vega iGPU has 704 shaders. That suggests Intel's Gen11 is mostly a catch-up exercise. Of course, it's a bit simplistic to compare GPUs on shaders alone. Anyone following the AMD vs. Nvidia race could tell you that raw compute isn't the whole story. Even within AMD's products, Navi shows that Vega didn't use its compute very efficiently. But, I digress.
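If you want to check that arithmetic, here's a quick Python sketch (just the per-EU and per-CU shader counts quoted above):

Code:
# Rough "shader" count comparison: Intel Gen11 vs. an 11-CU Vega iGPU
SHADERS_PER_INTEL_EU = 8    # ALU lanes per Gen11 EU
SHADERS_PER_VEGA_CU = 64    # stream processors per Vega CU (or NCU)

gen11_shaders = 64 * SHADERS_PER_INTEL_EU   # 64 EUs -> 512
vega11_shaders = 11 * SHADERS_PER_VEGA_CU   # 11 CUs -> 704

print(gen11_shaders, vega11_shaders)        # 512 704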

So, we see a rough convergence around 8-11 CUs. Maybe that suggests that memory is a bottleneck, if you go much higher. That shouldn't be surprising, if you look at the compute vs. bandwidth ratio of their dGPUs. Below are numbers derived from the base clocks of the respective reference card specs.

Polaris (GDDR5):
Model     Bus Width (bits)   GFLOPS   GB/sec   FLO/B
RX 550    128                1126     112      10.1
RX 560    128                2406     112      21.5
RX 570    256                4784     224      21.4
RX 580    256                5792     256      22.6
RX 590    256                6769     256      26.4

Vega (HBM2):
Model        Bus Width (bits)   GFLOPS   GB/sec   FLO/B
RX Vega 56   2048               8286     410      20.2
RX Vega 64   2048               10215    484      21.1
Radeon VII   4096               11136    1028     10.8

Navi (GDDR6):
Model        Bus Width (bits)   GFLOPS   GB/sec   FLO/B
RX 5500 XT   128                4703     224      21.0
RX 5700      256                6751     448      15.1
RX 5700 XT   256                8218     448      18.3

Now, what patterns do we see? Well, first, we should probably ignore the RX 550 (indeed, there are supposedly 64-bit-bus versions of that GPU, whose FLO/B is right in line with its siblings'). Also, we should ignore the Radeon VII, since it wasn't originally intended to be a gaming GPU. As for the rest, FLO/B ranges from 15.1 to 26.4, with a definite clustering around 21.
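For anyone who wants to reproduce the ratio column, it's just GFLOPS divided by GB/sec. A quick Python sketch, using the numbers from the tables above:

Code:
# FLO/B = GFLOPS / (GB/sec), from the reference-spec numbers tabulated above
cards = {
    "RX 550": (1126, 112),      "RX 560": (2406, 112),
    "RX 570": (4784, 224),      "RX 580": (5792, 256),
    "RX 590": (6769, 256),      "RX Vega 56": (8286, 410),
    "RX Vega 64": (10215, 484), "Radeon VII": (11136, 1028),
    "RX 5500 XT": (4703, 224),  "RX 5700": (6751, 448),
    "RX 5700 XT": (8218, 448),
}
for model, (gflops, gb_per_sec) in cards.items():
    print(f"{model:<11} {gflops / gb_per_sec:5.1f} FLO/B")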

So, what does this tell us about GPU bandwidth requirements? Well, at least for Vega, I think we can say that you probably don't want much more than 21 GFLOPS per GB/sec. Given that the Ryzen 5 3400G supports up to dual-channel DDR4-2933, which I think is good for somewhere in the neighborhood of 47 GB/sec, that would suggest about 987 GFLOPS of compute. Dividing that by 2 (each shader retires 2 FLOPs per clock, via FMA) gives us a target shader * GHz product. With typical AMD iGPU clocks ranging from 1.1 to 1.4 GHz, that yields 449 shaders at the low end of the clock range, down to 353 at the high end. So, that's actually between about 5.5 and 7 CUs. This suggests their APUs are already compute-heavy.
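Here's that back-of-the-envelope math as a sketch, assuming the usual 2 FLOPs per shader per clock (FMA) and my ~47 GB/sec estimate for dual-channel DDR4-2933:

Code:
# How much GPU compute can ~47 GB/sec of dual-channel DDR4-2933 keep fed?
TARGET_FLO_PER_BYTE = 21     # the clustering seen in the dGPU tables above
FLOPS_PER_SHADER_CLOCK = 2   # one FMA = 2 FLOPs per shader per clock
SHADERS_PER_CU = 64

bandwidth_gb_per_sec = 47
target_gflops = bandwidth_gb_per_sec * TARGET_FLO_PER_BYTE   # ~987 GFLOPS

for clock_ghz in (1.1, 1.4):
    shaders = target_gflops / (FLOPS_PER_SHADER_CLOCK * clock_ghz)
    print(f"{clock_ghz} GHz -> {shaders:.1f} shaders (~{shaders / SHADERS_PER_CU:.1f} CUs)")
# 1.1 GHz -> 448.6 shaders (~7.0 CUs)
# 1.4 GHz -> 352.5 shaders (~5.5 CUs)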

For a sanity check, consider that the Ryzen 5 3400G's GPU reportedly delivers 1971 GFLOPS (though I'm not sure if the 1.4 GHz figure is technically base or boost clocks). Divided by 47 GB/sec, we get about 42 FLO/B. So, it's a real outlier: very compute-heavy for the amount of bandwidth it has. And it has to share that bandwidth with the CPU! You can further validate this by checking out the various benchmarks people have run on it while varying the memory frequency.
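Same arithmetic, in code form:

Code:
# Ryzen 5 3400G sanity check: reported compute vs. shared memory bandwidth
gflops_3400g = 1971         # 704 shaders * 1.4 GHz * 2 FLOPs/clock (reported figure)
bandwidth_gb_per_sec = 47   # dual-channel DDR4-2933, shared with the CPU cores

print(f"{gflops_3400g / bandwidth_gb_per_sec:.0f} FLO/B")   # ~42, double the ~21 dGPU norm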

In conclusion, I'd say that iGPUs are already rather maxed out, unless/until you do something to alleviate memory bottlenecks. Maybe put some HBM2 in-package. However, that would require a bigger package and significantly increase the price. Anyway, once you can scale up the bandwidth, your bottlenecks become power and thermals.

Where I would expect such a product to possibly make sense is in the laptop sector. There, you could potentially reap some cost savings from the higher level of integration. What's tricky about that proposition is that high-end laptops are in the 16-32 GB range. Certainly, you want at least 8 GB for the GPU, so 16 GB would be a minimum. However, HBM2 isn't cheap, and the CPU doesn't really need so much bandwidth. So, at that point, it's tempting to just move the GPU into its own package, with its own memory. Furthermore, the thinnest laptops, which place the highest premium on integration, are also the most power- and thermally-restricted, so they're not great candidates for an extremely high-powered iGPU. And in anything bigger, it doesn't seem like such an issue to have a dGPU.

As a bonus, let's try the same analysis on some recent consoles (both Polaris + GDDR5):
Model        Bus Width (bits)   GFLOPS   GB/sec   FLO/B
PS4 Pro      256                4198     218      19.3
Xbox One X   384                6001     326      18.4

This seems to confirm that you really want somewhere in the realm of 1 GB/sec of bandwidth for every 20 GFLOPS of GPU horsepower.
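If you want that rule of thumb as a reusable sketch (the ~20 FLO/B constant is just the pattern from the tables, nothing official):

Code:
# Rule of thumb: ~20 GFLOPS of GPU compute per GB/sec of memory bandwidth
def required_bandwidth_gb_per_sec(gflops, flo_per_byte=20):
    """Estimate the bandwidth needed to keep a GPU of the given compute fed."""
    return gflops / flo_per_byte

# e.g. an iGPU with Vega 56-class compute (~8286 GFLOPS) would want ~414 GB/sec,
# roughly an order of magnitude beyond dual-channel DDR4
print(required_bandwidth_gb_per_sec(8286))   # 414.3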
 

bit_user

Polypheme
Ambassador
Extra cores are OK, but Intel integrates WiFi 6, Thunderbolt 3, AVX-512, and Optane support, and the new Tiger Lake laptop chips will have Xe graphics.
Granted, WiFi 6 and Thunderbolt 3 have obvious value (especially in a laptop, or a case without a slot for a graphics card).

Optane... is something Intel keeps talking about. Let's see it gain some real traction before counting that as a win. AVX-512... again, I'm skeptical it's going to see much use in consumer apps. You're better off just using the GPU for most things that would benefit from it. As I outlined above, I think Intel is still in the catch-up phase with their iGPUs. I'll want to see some benchmarks before I count a Xe iGPU as a win (vs. simply achieving parity with AMD).

That makes this 28 W NUC box more interesting. It also has a slot for a discrete GPU. Might be late this year for the NUC.

https://www.tomshardware.com/news/i...uc-tiger-lake-xe-graphics-pcie-4.0,40140.html
That's not a NUC, in any meaningful sense. NUCs have a 4" x 4" motherboard. Intel invented that term to describe computers that were far smaller than traditional PCs. However, mini PCs of that size even predate NUCs.

That thing is basically (if not actually) a mini-ITX PC that Intel's brain-dead marketing department just decided to brand as a NUC. That they didn't think they could sell it simply as an Intel PC says a lot about how little regard they have for that product's target customers. Once you can plug in a 160 W PCIe graphics card, it's definitely no longer a NUC. I guess the only remaining difference would be its non-socketed CPU.
 
