News AMD Ryzen 4000 Renoir Desktop Benchmarks Show Up To 90 Percent Uplift Over Last-Gen Flagship

May 28, 2020
1
1
15
0
I'm a bit confused by this article. I know the mobile Ryzen 4000 series chips are still Zen 2, but I thought the upcoming 4000 series desktop chips were all Zen 3?
 
Reactions: Rdslw

caqde

Distinguished
May 31, 2007
1,281
0
19,960
296
A bit disappointing - I was hoping for a graphics increase, maybe 20 CUs for a tiny game machine.
That 3400G is overclocked: the stock GPU clock speed is 1400 MHz, and that one was running at 1700 MHz (21% higher). The 4400G/4700G are likely running at stock. It should also be noted that 2933 is the official stock memory speed for the 3400G.

Given Zen 2's improved memory controller, it should be able to use higher-clocked memory at or above 3800 MHz, which would improve performance. And it might be possible to clock the GPU above the 4700G's 2100 MHz.

It should be noted that a stock 3400G with 2800 MHz memory would score around ~3800 in the Graphics test, and around ~4000 with 3200 MHz memory. Although it is disappointing that the leaked 4x00 series results are that close to the 3x00 series results, hopefully the results will be higher when it is released.
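For what it's worth, the scaling in those numbers works out like this (a quick sketch using the rough scores quoted above, not new measurements):

```python
# GPU clock uplift of the overclocked 3400G in the leak vs. stock
gpu_uplift = 1700 / 1400 - 1
print(f"GPU clock: +{gpu_uplift:.1%}")  # ~21.4% higher

# Graphics-score scaling of a stock 3400G with faster memory
score_2800, score_3200 = 3800, 4000   # rough scores quoted above
mem_uplift = 3200 / 2800 - 1          # ~14.3% higher memory clock
score_uplift = score_3200 / score_2800 - 1  # ~5.3% higher score
print(f"Memory: +{mem_uplift:.1%} clock -> +{score_uplift:.1%} score")
```

So memory clock gains only translate to graphics-score gains at roughly a 1:3 ratio on this part, which fits the bandwidth-bound picture.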
 
These will be awesome chips for use in computers like the HP EliteDesk 705 G4 Mini. My office is slowly moving to these, since you can mount them behind your monitor on a VESA mount. The ones we have include the 2400G, and that is already decent; getting a 4400G or better would be nice for the VirtualBox VMs.
 
Reactions: Mandark

st379

Distinguished
BANNED
Aug 24, 2013
169
66
18,660
0
That 3400G is overclocked: the stock GPU clock speed is 1400 MHz, and that one was running at 1700 MHz (21% higher). The 4400G/4700G are likely running at stock. It should also be noted that 2933 is the official stock memory speed for the 3400G.

Given Zen 2's improved memory controller, it should be able to use higher-clocked memory at or above 3800 MHz, which would improve performance. And it might be possible to clock the GPU above the 4700G's 2100 MHz.

It should be noted that a stock 3400G with 2800 MHz memory would score around ~3800 in the Graphics test, and around ~4000 with 3200 MHz memory. Although it is disappointing that the leaked 4x00 series results are that close to the 3x00 series results, hopefully the results will be higher when it is released.
The results won't be higher. The bottleneck is DDR4, not the GPU. Many benchmarks show more than a 10% increase in performance with faster DDR4.
AMD needs to put HBM on the APU itself if you want to see PS5/Xbox Series X levels of performance.
No number of CUs will change the slow DDR4, and the GPU will be bottlenecked because of it.
 
Reactions: Mandark

sleepyskies

Prominent
Jan 12, 2019
6
0
510
0
It is really disappointing to see the iGPU not making any marked improvements. I've been really eager to build a second APU system alongside my 2400G, and/or upgrade it, but the GPU is really one of the most important things for me as far as these chips are concerned. On the CPU side of things they perform so well that I wasn't really hoping for a big CPU improvement.

Good on AMD for the gains made, but I'll probably be passing up this release unless reviews show promising benchmarks.
 

JarredWaltonGPU

Senior GPU Editor
Editor
Feb 21, 2020
511
395
760
0
It is really disappointing to see the iGPU not making any marked improvements. I've been really eager to build a second APU system alongside my 2400G, and/or upgrade it, but the GPU is really one of the most important things for me as far as these chips are concerned. On the CPU side of things they perform so well that I wasn't really hoping for a big CPU improvement.

Good on AMD for the gains made, but I'll probably be passing up this release unless reviews show promising benchmarks.
Integrated graphics has really hit the point of diminishing returns. You need a lot more memory bandwidth to feed more shader cores, and with the mainstream desktops all on dual-channel DDR4, the best you can reasonably hope for is official DDR4-3200 support. That's 51.2 GBps of theoretical bandwidth, shared between GPU and CPU use. For comparison, even a relatively low-end dedicated GPU like an RX 560 has more than double that bandwidth (112 GBps), all for the GPU.

So on a theoretical performance level, the RX 560 has twice the bandwidth, with CUs running at 1175 MHz. That's 2.1 TFLOPS of compute. Performance is almost double that of the Vega 11 in the Ryzen 5 3400G. The 3400G is 11 CUs at 1400 MHz, or 1.97 TFLOPS, so the majority of the performance difference is thanks to memory bandwidth.
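The arithmetic behind those figures can be checked quickly. A back-of-the-envelope sketch, assuming the 14-CU variant of the RX 560 and the standard shaders × 2 ops (FMA) × clock formula for peak FP32 throughput:

```python
def ddr4_bandwidth_gbps(transfer_rate_mts, channels=2, bus_width_bits=64):
    """Theoretical DDR bandwidth in GBps: transfers/s * channels * bytes/transfer."""
    return transfer_rate_mts * 1e6 * channels * (bus_width_bits / 8) / 1e9

def peak_tflops(cus, clock_mhz, shaders_per_cu=64):
    """Peak FP32 TFLOPS: shaders * 2 ops per clock (fused multiply-add) * clock."""
    return cus * shaders_per_cu * 2 * clock_mhz * 1e6 / 1e12

print(f"Dual-channel DDR4-3200:   {ddr4_bandwidth_gbps(3200):.1f} GBps")  # 51.2
print(f"RX 560 (14 CU, 1175 MHz): {peak_tflops(14, 1175):.2f} TFLOPS")    # ~2.11
print(f"3400G Vega 11 (1400 MHz): {peak_tflops(11, 1400):.2f} TFLOPS")    # ~1.97
```

With compute nearly equal but the discrete card holding double the bandwidth, the performance gap really does point at memory.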

How do you improve integrated graphics, then? You need more cores and more bandwidth. The first is easier than the second. Could you do an HBM2 stack? Sure -- Intel teamed up with AMD for such a processor with Kaby Lake G and Vega M Graphics. Performance was relatively close to an RX 570 4GB ... and the cost for the processor was more than double the cost of a 3400G.

The next-gen consoles are doing massive GPUs with CPUs, plus 16GB of GDDR6 memory. But that's a special case, as console hardware has a static spec and sells tens of millions of units, so economies of scale come into play. No CPU or GPU upgrades, static platform, etc. But no company has even attempted a PC "APU" with anything close to console levels of GPU performance and memory bandwidth.
 

sleepyskies

Prominent
Jan 12, 2019
6
0
510
0
Integrated graphics has really hit the point of diminishing returns. You need a lot more memory bandwidth to feed more shader cores, and with the mainstream desktops all on dual-channel DDR4, the best you can reasonably hope for is official DDR4-3200 support. That's 51.2 GBps of theoretical bandwidth, shared between GPU and CPU use. For comparison, even a relatively low-end dedicated GPU like an RX 560 has more than double that bandwidth (112 GBps), all for the GPU.

So on a theoretical performance level, the RX 560 has twice the bandwidth, with CUs running at 1175 MHz. That's 2.1 TFLOPS of compute. Performance is almost double that of the Vega 11 in the Ryzen 5 3400G. The 3400G is 11 CUs at 1400 MHz, or 1.97 TFLOPS, so the majority of the performance difference is thanks to memory bandwidth.

How do you improve integrated graphics, then? You need more cores and more bandwidth. The first is easier than the second. Could you do an HBM2 stack? Sure -- Intel teamed up with AMD for such a processor with Kaby Lake G and Vega M Graphics. Performance was relatively close to an RX 570 4GB ... and the cost for the processor was more than double the cost of a 3400G.

The next-gen consoles are doing massive GPUs with CPUs, plus 16GB of GDDR6 memory. But that's a special case, as console hardware has a static spec and sells tens of millions of units, so economies of scale come into play. No CPU or GPU upgrades, static platform, etc. But no company has even attempted a PC "APU" with anything close to console levels of GPU performance and memory bandwidth.
Fair points sadly.
 
Very well stated. Thank you
 
Reactions: JarredWaltonGPU

Schlachtwolf

Notable
Jun 22, 2019
491
111
1,040
47
These, in my opinion, are still not really hard facts as to what we will get to buy at the end of the day; until the official launch it is all peek-a-boo guesswork and shadow dancing. I don't want or need an APU chip, but am bursting to see what a 4900X, for example, will bring to the table..... that is what I am waiting for, along with a 3080 Ti or whatever comes from Big Navi to go with it... 1500€ already put away (don't tell the wife...) LOL
 

gg83

Honorable
Jul 10, 2015
130
18
10,585
0
Huh, how come the Ryzen 7 4700G scores better than the Ryzen 9 4900HS? I am even more confused by AMD's mobile numbering now.
 

gg83

Honorable
Jul 10, 2015
130
18
10,585
0
The 4700G is a 65W desktop part and the 4900HS is a 35W laptop part. The added TDP allows for higher clocks held for longer.
Yeah, but why is a Ryzen 9 below a Ryzen 7? Is it a scale similar to golf, where the lower the number, the higher the score? Lol. Thanks for the answer. I wouldn't have made that connection. So maybe a Ryzen 9 4900G would have 65W?
 

JarredWaltonGPU

Senior GPU Editor
Editor
Feb 21, 2020
511
395
760
0
Yeah, but why is a Ryzen 9 below a Ryzen 7? Is it a scale similar to golf, where the lower the number, the higher the score? Lol. Thanks for the answer. I wouldn't have made that connection. So maybe a Ryzen 9 4900G would have 65W?
Ryzen 9 4900HS is a mobile 8-core/16-thread part that's basically the same as the Ryzen 7 4700G desktop, but with a lower TDP and different clocks. The 4900HS is a special chip for Asus that's 8-core/16-thread and a 35W TDP. Ryzen 7 4700U meanwhile is an 8-core/8-thread part with a 15W TDP, while the Ryzen 7 4800H is an 8-core/16-thread part with a 45W TDP.

There will not be a Ryzen 9 desktop APU, as far as I can tell -- 8 cores is the highest AMD can go with the current design. Unlike the desktop CPUs, where there's a cIOD (client I/O die) combined with one or two CCDs (core complex dies), Renoir is a monolithic design where the package contains a single die under the heatspreader.
 
Reactions: gg83

MasterMadBones

Distinguished
Dec 26, 2012
413
61
19,090
53
So on a theoretical performance level, the RX 560 has twice the bandwidth, with CUs running at 1175 MHz. That's 2.1 TFLOPS of compute. Performance is almost double that of the Vega 11 in the Ryzen 5 3400G. The 3400G is 11 CUs at 1400 MHz, or 1.97 TFLOPS, so the majority of the performance difference is thanks to memory bandwidth.
This is actually interesting, because we know that Navi needs half the memory bandwidth for the same performance, compared to Vega. Renoir is stuck on Vega, but it sounds like Van Gogh, which seems closer than we initially expected, could be able to put more CUs to work effectively within the limitations of DDR4.
 

JarredWaltonGPU

Senior GPU Editor
Editor
Feb 21, 2020
511
395
760
0
This is actually interesting, because we know that Navi needs half the memory bandwidth for the same performance, compared to Vega. Renoir is stuck on Vega, but it sounds like Van Gogh, which seems closer than we initially expected, could be able to put more CUs to work effectively within the limitations of DDR4.
I don't think that's quite accurate. I think Vega was wasteful of bandwidth on the desktop HBM cards, and Navi improves some things. The 5700 XT certainly does fine with (less than) half the bandwidth of the Radeon VII. But I suspect Navi with only 112 GBps would have some issues. Well, depending on the number of CUs. The 5500 XT's Navi 14 still has 224 GBps (probably quite a bit more than it needs), while the 5600 XT's Navi 10 is at 288 GBps (12 Gbps GDDR6), and bumping that to 336 GBps (14 Gbps) boosted performance about 10%.

Better delta color compression means Navi might get 30-50% more effective bandwidth from its memory (maaaybe -- probably 15-30% is more likely in the real world). That would still only equate to at most 76.8 GBps effective bandwidth for dual-channel DDR4-3200. So performance could be up to 50% higher than current integrated graphics on the bandwidth side, which would probably be the bottleneck if AMD doubled to 20 CUs.
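As a rough sketch of that math (the compression gains are the speculative ranges from the paragraph above, not measured numbers):

```python
# Dual-channel DDR4-3200 raw bandwidth, scaled by hypothetical
# delta-color-compression gains to get "effective" bandwidth.
raw_gbps = 3200e6 * 2 * 8 / 1e9  # 51.2 GBps raw

for gain in (1.15, 1.30, 1.50):  # speculative range of DCC benefit
    print(f"+{gain - 1:.0%} compression -> {raw_gbps * gain:.1f} GBps effective")
```

Even the optimistic 50% case tops out at 76.8 GBps effective, still well short of a low-end discrete card.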
 

shady28

Distinguished
Jan 29, 2007
56
18
18,535
0
The headline on this article is extremely misleading.

This is not over the 'last generation flagship' from AMD which most would consider the 3800X/3900X or one of the threadripper variants.

This is over the fastest of the APU lineup which itself is a low end line in the first place.

As a car enthusiast: if this article were about the fastest Ford subcompact on the road, but had a headline saying it was '30% faster than their flagship sports car from last year', you would be getting ripped apart.
 

JarredWaltonGPU

Senior GPU Editor
Editor
Feb 21, 2020
511
395
760
0
The headline on this article is extremely misleading.

This is not over the 'last generation flagship' from AMD which most would consider the 3800X/3900X or one of the threadripper variants.

This is over the fastest of the APU lineup which itself is a low end line in the first place.

As a car enthusiast: if this article were about the fastest Ford subcompact on the road, but had a headline saying it was '30% faster than their flagship sports car from last year', you would be getting ripped apart.
Context is everything, as usual. If you know what Renoir is, you know it's the new APU and thus the former flagship would be Picasso. If you don't know that, the article clearly explains the comparison point. If Honda said its new Odyssey minivan was up to twice as efficient as its previous flagship, obviously the comparison point would be the previous Odyssey, not the Accord or Civic or Pilot.
 

shady28

Distinguished
Jan 29, 2007
56
18
18,535
0
Context is everything, as usual. If you know what Renoir is, you know it's the new APU and thus the former flagship would be Picasso. If you don't know that, the article clearly explains the comparison point. If Honda said its new Odyssey minivan was up to twice as efficient as its previous flagship, obviously the comparison point would be the previous Odyssey, not the Accord or Civic or Pilot.

Companies do not have five flagships, just as navies don't (which is where the word comes from). There is one flagship. The Odyssey is not Honda's flagship; their flagship is the Honda Legend, which is not sold in the USA. So your analogy is factually incorrect.

All you have to do is read this thread to see how many people got confused over the title. All you would need to do to clear this up is change the article to read "...Last-Gen APU ".


"AMD Ryzen 4000 Renoir Desktop Benchmarks Show Up To 90 Percent Uplift Over Last-Gen Flagship : "
 

JarredWaltonGPU

Senior GPU Editor
Editor
Feb 21, 2020
511
395
760
0
Companies do not have five flagships, just as navies don't (which is where the word comes from). There is one flagship. The Odyssey is not Honda's flagship; their flagship is the Honda Legend, which is not sold in the USA. So your analogy is factually incorrect.

All you have to do is read this thread to see how many people got confused over the title. All you would need to do to clear this up is change the article to read "...Last-Gen APU ".

"AMD Ryzen 4000 Renoir Desktop Benchmarks Show Up To 90 Percent Uplift Over Last-Gen Flagship : "
I actually have read the entire thread, and responded to several people. You're the only one complaining about the title. The confusion is people wondering why it's faster than a Ryzen 9 4900HS, or wondering why this isn't Zen 3 -- both questions which have been answered. Anyway, I didn't write the title, or the article, and I disagree with your semantics on what "flagship" is supposed to mean. Sorry.
 
