News AMD Ryzen 7 5700G Review: Fastest Integrated Graphics Ever

It may be the fastest iGPU, but what I want to know is: how does it compare to discrete GPUs?

For example, I want to build a new PC and I have an Nvidia 970. If this is slower than that, I would be better off getting a normal CPU and using my old card rather than getting this CPU. This article, while full of information, does not tell me; all it tells me is that it is the fastest iGPU.

If you have a GPU you can use and it is not too old, then with the limits placed on this CPU I do not think this is worth getting over other CPUs; you are just limiting what you can do later on.
 
It may be the fastest iGPU, but what I want to know is: how does it compare to discrete GPUs?

For example, I want to build a new PC and I have an Nvidia 970. If this is slower than that, I would be better off getting a normal CPU and using my old card rather than getting this CPU. This article, while full of information, does not tell me; all it tells me is that it is the fastest iGPU.

If you have a GPU you can use and it is not too old, then with the limits placed on this CPU I do not think this is worth getting over other CPUs; you are just limiting what you can do later on.
Exactly. The most important part is missing. Why compare this to an RTX 3090? Compare it to a GT 1030, GTX 1050, or GTX 1650. I want to know how fast this iGPU really is.
 
Not only frequencies, timings, and memory channels matter; memory ranks also have a significant impact on performance in the case of multithreading and iGPU use.
Ceteris paribus, dual-rank modules always outperform single-rank ones.
 
According to these tests, the iGPU performance is about 7-8% faster than its 4xxx-series predecessor's, while the RAM speed has increased from 2900 MHz to 3200 MHz.

Assuming that RAM speed may be the single most relevant factor influencing iGPU performance, I wonder if there's a possibility to use higher-clocked RAM sticks, e.g. DDR4-3600, and overclock just the RAM. Shouldn't this improve iGPU performance even further?
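The headroom is easy to estimate on the back of an envelope, since iGPU performance tends to scale with memory bandwidth. A minimal sketch, assuming standard 64-bit DDR4 channels; the 3200 and 3600 MT/s figures are the ones discussed above:

```python
# Theoretical peak DDR4 bandwidth: transfers/s x 8 bytes per 64-bit channel.
def ddr4_bandwidth_gbs(mt_per_s: int, channels: int = 2) -> float:
    """Peak bandwidth in GB/s for the given transfer rate and channel count."""
    return mt_per_s * 1e6 * 8 * channels / 1e9

print(ddr4_bandwidth_gbs(3200))  # ~51.2 GB/s, the stock dual-channel config
print(ddr4_bandwidth_gbs(3600))  # ~57.6 GB/s, roughly 12.5% more headroom
```

So DDR4-3600 gives about 12.5% more theoretical bandwidth than DDR4-3200; actual iGPU gains are usually smaller, since shader throughput also caps performance.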
 
Assuming that RAM speed may be the single most relevant factor influencing iGPU performance, I wonder if there's a possibility to use higher-clocked RAM sticks, e.g. DDR4-3600, and overclock just the RAM. Shouldn't this improve iGPU performance even further?

I remember reading these APUs can hit high FCLK, so I would be curious to see a follow-up with DDR4 memory in the 4000+ MT/s range.

I would still probably stop at 3800 MT/s (1900 MHz FCLK) and go for the lowest stable timings after that.

There's not really much reason to go past 3800, due to the 1:1 vs 1:2 Infinity Fabric to memory controller clock penalties.
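A small sketch of the coupling point being described. On AM4, the memory clock (MEMCLK, in MHz) is half the DDR4 transfer rate, and staying "1:1" means running FCLK at the same frequency; the ~1900 MHz ceiling used below is a typical figure for desktop Zen 3 rather than a guaranteed limit:

```python
# 1:1 vs 2:1 Infinity Fabric coupling check for AM4 (illustrative sketch).
# MEMCLK (MHz) = DDR4 transfer rate / 2; FCLK = MEMCLK avoids the extra
# synchronization latency penalty of decoupled (2:1) operation.
def fabric_mode(ddr_mt_s: int, max_stable_fclk_mhz: int = 1900) -> str:
    memclk = ddr_mt_s // 2
    if memclk <= max_stable_fclk_mhz:
        return "1:1 (coupled)"
    return "2:1 (decoupled)"

print(fabric_mode(3800))  # 1:1 (coupled): MEMCLK 1900 MHz matches FCLK
print(fabric_mode(4400))  # 2:1 (decoupled): MEMCLK 2200 MHz exceeds FCLK
```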
 
It may be the fastest iGPU, but what I want to know is: how does it compare to discrete GPUs?

For example, I want to build a new PC and I have an Nvidia 970. If this is slower than that, I would be better off getting a normal CPU and using my old card rather than getting this CPU. This article, while full of information, does not tell me; all it tells me is that it is the fastest iGPU.

If you have a GPU you can use and it is not too old, then with the limits placed on this CPU I do not think this is worth getting over other CPUs; you are just limiting what you can do later on.
I think Gamers Nexus and Hardware Unboxed have some videos on how these do compared to low-end discrete GPUs like the GT 1030. It performs slower on average, but in certain games it can actually match a GT 1030, which is pretty impressive imo. It's still well behind GPUs like the GTX 970 or R9 390, though.
 
While it is the fastest iGPU, it's still slower in gaming than a decent $200 chip (i5-10600K) plus a GT 1030; productivity, that's a different story. The GT 1030, which can be had for $130, and an i5-10600K ($214) are a better option for gaming. The cost of the 5700G is just a little too steep for a gamer to use the iGPU.
 
It may be the fastest iGPU, but what I want to know is: how does it compare to discrete GPUs?

For example, I want to build a new PC and I have an Nvidia 970. If this is slower than that, I would be better off getting a normal CPU and using my old card rather than getting this CPU. This article, while full of information, does not tell me; all it tells me is that it is the fastest iGPU.

If you have a GPU you can use and it is not too old, then with the limits placed on this CPU I do not think this is worth getting over other CPUs; you are just limiting what you can do later on.

The iGPU is, well... still slow. 1080p shows results slightly below 40 fps.

So, don't expect too much from it.

https://www.tomshardware.com/reviews/nvidia-geforce-gt-1030-2gb,5110-5.html

Review of the GT 1030... Looks like it's pretty much on par, or maybe just a tiny bit faster.
 
While it is the fastest iGPU, it's still slower in gaming than a decent $200 chip (i5-10600K) plus a GT 1030; productivity, that's a different story. The GT 1030, which can be had for $130, and an i5-10600K ($214) are a better option for gaming. The cost of the 5700G is just a little too steep for a gamer to use the iGPU.
The GT 1030 is slower than Vega 8, and you also get 6 old Skylake cores vs. 8 Zen 3 cores. How do you get better productivity, with Cinebench, for example, at 3629 for the 10600K vs. 5465 for the 5700G? When Intel's fanboys speak, the facts are silent.
 
It may be the fastest iGPU, but what I want to know is: how does it compare to discrete GPUs?

The article tells us that when run in dual-channel, it's about twice as fast as an 11700K's iGPU. You can make some reasonably-informed guesses based on that. Intel's site says the 11700K uses UHD 750 graphics. userbenchmark.com (I know) rates the 1650S as being about 650% faster than the UHD 750, so spitball the 1650S as being ~300% faster as a SWAG. It turns out, though, that userbenchmark has an entry for "rx vega 8 5000 iGPU", and if we assume that's approximately the same as what's in the 5700G, the site shows the discrete GPU as ~200% faster than the Vega 8.

Further, you can compare the vega 8 5000 against other entries in the UB database: it's 161% slower than the 970 and 32% faster than the 1030. The 1050 ti is 60% faster than the vega.

Take all this with whatever size grain of salt you feel appropriate for userbenchmark's scores.
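Ratio-chaining like this is easy to get wrong ("X% faster" is not the inverse of "X% slower"), so here is the arithmetic spelled out. The percentages are the UserBenchmark figures quoted above, not independent measurements:

```python
# Chain relative-performance ratios from the post above.
# "A is p% faster than B" means score(A) = score(B) * (1 + p/100).
uhd750 = 1.0                    # baseline: 11700K's UHD 750 iGPU
vega8 = uhd750 * 2.0            # article: ~2x the 11700K iGPU in dual-channel
gtx1650s = uhd750 * (1 + 6.50)  # UB: 1650S rated ~650% faster than UHD 750

print(gtx1650s / vega8)  # 3.75x, i.e. ~275% faster than the Vega 8
```

That lands in the same ballpark as the ~300% SWAG above; the gap between this chained estimate and UserBenchmark's direct ~200% entry shows how rough these scores are.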
 
I pulled one from a prebuilt and built a VFIO workstation around the Asus Pro WS X570-ACE motherboard. It's an amazing chip; with the power limits tweaked and curve optimizer (mostly) dialed in it pulls down about 14,700 in R23 multi. The iGPU is more than enough for Wayland. I don't have the greatest kit of RAM in there but I managed to get 32GB—at two ranks per channel—running at 3933 CL20 and 1:1 FCLK.

I wish it was 12+ cores, I wish it was RDNA graphics, I wish it was PCIe Gen 4.0 (at least to the chipset), but an APU with those specs doesn't exist. Short of jumping to HEDT, this is a great solution for the price.
 
It may be the fastest iGPU, but what I want to know is: how does it compare to discrete GPUs?

For example, I want to build a new PC and I have an Nvidia 970. If this is slower than that, I would be better off getting a normal CPU and using my old card rather than getting this CPU. This article, while full of information, does not tell me; all it tells me is that it is the fastest iGPU.

If you have a GPU you can use and it is not too old, then with the limits placed on this CPU I do not think this is worth getting over other CPUs; you are just limiting what you can do later on.

Normally it performs around GT 1030 level. With heavy tweaking, like using faster RAM and overclocking the GPU to the limit, it might perform near GTX 1050 level. While many people are very interested to see how far AMD can push APU graphics performance, if we look at what AMD has been doing for the last few years, they did not improve the graphics portion that much; most of the improvement went toward the CPU instead. Hence, to this day, the latest APU is still using a Vega-based GPU.
 
According to these tests, the iGPU performance is about 7-8% faster than its 4xxx-series predecessor's, while the RAM speed has increased from 2900 MHz to 3200 MHz.

Assuming that RAM speed may be the single most relevant factor influencing iGPU performance, I wonder if there's a possibility to use higher-clocked RAM sticks, e.g. DDR4-3600, and overclock just the RAM. Shouldn't this improve iGPU performance even further?
I wonder how much iGPU performance would gain from a quad-channel design, especially since the gain from single to dual channel is tremendous (74%). If a doubling gave another 30%, that'd be huge.
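Compounding those two figures gives the hypothetical total. To be clear about the assumptions: the 74% single-to-dual gain is from the review, while the 30% dual-to-quad gain is pure speculation, not a measurement:

```python
# Hypothetical channel-scaling sketch using the figures above.
single = 1.00
dual = single * 1.74  # measured single -> dual channel gain (review)
quad = dual * 1.30    # speculative dual -> quad channel gain

print(f"quad vs single: {quad / single:.2f}x")  # ~2.26x overall
```

Diminishing returns are expected, since past a certain bandwidth the bottleneck shifts back to the Vega CUs themselves.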
 
The fastest integrated graphics ever is the iGPU in the M1. There's the whole question of games that have to be emulated, but that doesn't change the fact that it's the most powerful.
 
The fastest integrated graphics ever is the iGPU in the M1. There's the whole question of games that have to be emulated, but that doesn't change the fact that it's the most powerful.
It's barely faster than a GT 1030 and loses to an overclocked Vega 8, not to mention the question of which games are actually available.
 
24 lanes of PCIe 3.0... is at least better than the 20 lanes of PCIe 3.0 forced into an x8/x4/x4/x4 GPU/M.2/M.2/chipset configuration on the mobile platform. Did those counts include the chipset lanes on the desktop, as well?

But yes, from my experience with mostly the same thing in a different package, a Ryzen 7 5800H, it is an amazing CPU otherwise. Pair it with a good kit of dual-rank memory (yes, dual rank, meaning 32GB/64GB these days, which gives a few more percent of performance on memory-bandwidth-constrained workloads) and you are good to go. Disable the iGPU for even better performance, more thermal headroom, and less reserved RAM once you do have a good graphics card at last.
 
In raw power, the iGPU of the M1 is superior: 2.6 vs 1.5 TFLOPS.
If you're planning to play synthetic benchmarks, yes; but if you plan to play games, the raw performance doesn't really matter. Of course, it could help for productivity, but the M1 is not for gaming.
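For context, raw-throughput figures like those come from a standard formula: peak FP32 = shader ALUs × 2 (an FMA counts as 2 FLOPs) × clock. A sketch with illustrative ALU counts and clocks (not official specs; the M1 GPU is commonly described as 1024 ALUs, and Vega 8 has 512 shaders):

```python
# Peak FP32 throughput estimate: ALUs x 2 FLOPs (FMA) x clock in GHz.
def peak_tflops(alus: int, clock_ghz: float) -> float:
    return alus * 2 * clock_ghz / 1000

print(round(peak_tflops(1024, 1.278), 2))  # M1 8-core GPU estimate: ~2.62
print(round(peak_tflops(512, 1.5), 2))     # Vega 8 near 1.5 GHz: ~1.54
```

Which roughly reproduces the 2.6 vs 1.5 TFLOPS quoted above; note peak FLOPS says nothing about driver quality or real game performance, which is the reply's point.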
 
Reread my post.
Ah, sorry, I misread; you are not an Intel troll, you just give advice when you don't know anything about the topic. Maybe it would be good to watch some 4750G vs. GT 1030 benchmarks. The GT 1030 always loses, and that's the 4750G, not the 5700G.
 
Exactly. The most important part is missing. Why compare this to an RTX 3090? Compare it to a GT 1030, GTX 1050, or GTX 1650. I want to know how fast this iGPU really is.

I read this 'review' yesterday and decided I should take a deep breath and come back to it before commenting. You're exactly right. How the author and the THG editorial staff couldn't see fit to provide some... ANY... sort of meaningful baseline comparison between previous-gen discrete graphics and 5700G integrated graphics is so... just so... insert expletive-riddled screed here...

The idea that anyone who could possibly justify shelling out enough cash for an RTX-3090 (at list price or otherwise) would consider integrated graphics as a viable alternative is absolutely laughable. If you're willing to cough up $1,500 for an RTX-3090, then you do have options, whether new or used, that are going to be a much better alternative to 5700G integrated graphics (not that this 'review' even tries to answer that question).

What are the 5700G integrated graphics comparable to, exactly?... an RX-480?... something older?... something newer? That being said, I haven't had time to look, but I do expect other review sites to at least try to answer this meaningful and relevant question, rather than just saying (more or less) 'it ain't no RTX-3090, but hey! it's sumthin'.
 
I would still probably stop at 3800 MT/s (1900 MHz FCLK) and go for the lowest stable timings after that.

There's not really much reason to go past 3800, due to the 1:1 vs 1:2 Infinity Fabric to memory controller clock penalties.

There are reports of these APUs doing 1:1 FCLK well over 4000 MT/s; the limitations of the desktop chips are not the same for the APUs, as it's a monolithic design, not chiplet-based. That extra bandwidth should help a lot with the iGPU, and lower general latency, if you can get it running 1:1 with some higher-clocked DDR4 sticks.
 