AMD Ryzen 7 8700G Review — 1080p-Capable Gaming Comes to Integrated Graphics

AMD's new $329 Ryzen 7 8700G and $229 Ryzen 5 8600G "Phoenix" chips

If you are going to buy a console CPU, why not just buy a console?
I guess console games can be more expensive, but you can also rent games from just about anywhere or use a Game Pass or PlayStation Network subscription.

Switch $150-200
Xbox Series X $400
PS5 $400-450
 
Reactions: Order 66
"Competent" and "Reasonable" are perfect ways to describe an iGPU that can't do 60fps on anything but minimum detail levels (if then) while costing more than a setup with even an entry level dGPU with far more performance (as pointed out at the end of the article, and in most others as well) and with a total system cost that exceeds a console.
Exactly. Competent and capable are synonyms. That's my point. Nowhere does the word "capable" suggest that it's going to provide significantly greater than the minimum required effort to deliver acceptable gaming performance. It's why we used the word. Sorry to pull out the dictionary, but...

[Attached image: dictionary definition of "capable"]
 

abufrejoval

Reputable
Jun 19, 2020
400
276
5,060
I like flexibility.

I've built nearly all my PCs since 1984, and early on, building piecemeal was the only choice, because a complete PC was a big box full of parts and more expensive than a new car or a used Mercedes.

Those times are long past, but much of the stuff I build for one purpose gets reused for other things or handed down to family later on, perhaps several times over. Perhaps it's the Lego passion from when I was really young, but it's also how we did software later...

So APUs are a bit daft, because they tie tons of stuff that used to be a box full of add-on cards into a single chip where you can't swap out the parts.

And APUs are genius, because they combine all of that into a single chip and you don't need to add the parts... although you could, if you really wanted to.

Of course I don't like to pay twice nor do I love to compromise...

When APUs are new, they feel like paying twice the price.
And then they have smaller caches, no V-Cache, and/or fewer lanes at lower speeds, because a limited transistor budget demands compromise.

At this point the 8700G APU looks pretty OK, except for the price. But that will change. So let's just hope that its successor isn't too much better by the time the price has become attractive.

Yesteryear's Cezannes are a good price now, but their iGPU is quite officially passé.

I'd first use APUs to build compact µ-servers that can do double duty as competent desktops, the equivalent of the basic 8-stud Lego brick.

Mini-ITX was my form factor of choice for a long time, because with a case not much taller than an original CD-ROM drive, it made for very compact systems that are good for many things.

When NUCs made Mini-ITX hard to get, I found they cost notebook prices yet lacked all of a notebook's extra value: color me not interested.

But eventually NUCs got older and hit the surplus market. Sometimes Intel would use them to sell parts that Apple no longer wanted (the NUC8 with a bigger iGPU). In other words: they became interesting.

And that brings me to the Enthusiast NUCs, specifically the G11 Phantom Canyon and the G12 Serpent Canyon, quite ridiculous if not outrageous at their original prices.

The G11 Phantom Canyon sports a Tiger Lake i7-1165G7 with an Nvidia RTX 2060m (6GB VRAM), yet was sold at around €400 including taxes, or very nearly the same price as the NUC without that dGPU when I got mine.

The box is very compact, extremely well built, and mostly quieter than the equivalent standard NUC. At only €30 more than a bare Ryzen 8700G, it makes the APU hard to recommend, though the quad-core NUC is obviously not quite as fast when your workload manages to fill the APU's twice-as-many cores.

The G12 Serpent Canyon has the 6+8 core i7-12700H, adds an Arc A770M with 16GB of VRAM, and sold at €700 when I got mine a year or two ago. That's probably rather close to the 8700G in terms of CPU performance, but the dGPU with its 16GB of VRAM should easily beat the iGPU part. Again, that's a full system with lots of ports and interfaces, somewhat thicker than its predecessor and not quite as quiet at peak loads, but perhaps difficult to beat with something built on the 8700G APU when €350 needs to buy all the other parts.

In my book, I got these Enthusiast NUCs very close to barebone prices, but with a competent dGPU thrown in for free. Of course, that's not how Intel originally tried to design or sell them.

So if you're looking for something small and economical to build today with such an APU, you may get better value elsewhere.

If you come back in a year or 18 months, there is a good chance someone will be putting the same chip, perhaps together with a mobile dGPU, into a small box at a price where it looks much more attractive.

If you just want to build a base system that works out of the box today, but can be expanded with a beefy dGPU without being held back by the CPU a year or 18 months down the road, this APU may be just the base to give you that flexibility.

It's clearly not 'the best' at everything. But I also like having choices, and this chip will fit into my mental cabinet of options, even if I happen to never buy one.
 
Reactions: bit_user and Flayed

Cooe

Prominent
Mar 5, 2023
19
20
515
🤦...😑 This review COMPLETELY ignores the MASSIVE long-term benefits of future in-socket CPU/APU upgrades that are made possible by going with the still quite new (only one generation old) AM5 platform! 🤷

Those LGA-1700 & AM4 recommendations might be a bit faster or cheaper right now, but they're completely dead-end systems on dead-end platforms, and that's a SIGNIFICANT buying consideration for a new PC builder/owner!

Considering the platform you are buying into, and when you're hopping on, this really is a 4/5★ product IMO. 🤷
 
Reactions: bit_user
Those LGA-1700 & AM4 recommendations might be a bit faster or cheaper right now, but they're completely dead-end systems on dead-end platforms, and that's a SIGNIFICANT buying consideration for a new PC builder/owner!
This has been talked about so many times: they are only dead-end platforms if you intend to upgrade within 2-3 years and are willing to stay on what will by then be an old platform.
If you make the right selection of CPU now, you can keep the whole system, except for the GPU, for at least 5-6 years, at which point it would be a very hard sell to stay on a 5-6 year old platform for another few years.
 

bit_user

Polypheme
Ambassador
Thanks for the review, @PaulAlcorn. Here's a bit of late feedback about something I noticed upon referring back to the section on power efficiency.

AMD's chiplet-based processors not only have one large central die on a slightly more inefficient node, but they also have to spend quite a bit of their power budget shuffling data between the dies. Moving data across the Infinity Fabric consumes about 1.5 picojoules per bit (pJ/b), whereas on-die data transfers require roughly 0.1 pJ/b. That extra power consumption adds up for the chiplet-based designs.
Such numerical claims tend to attract scrutiny, since they're easy to sanity-check and doing so can sometimes turn up results that defy such intuitive explanations.

In the case of CPUs with a single compute die, the traffic over the Infinity Fabric link will consist almost exclusively of DRAM reads & writes. If we use DDR5-5600 as the memory speed, two 64-bit channels at 5600 MT/s work out to a nominal 0.7168 * 10^12 b/s. Since a pJ is 1 * 10^-12 W*s, at 1.5 pJ/b such a data rate should consume only 1.0752 W. Let's go ahead and round that up to 1.1 Watts, but remember that this is a worst-case scenario, where the CPU is hammering main memory, which isn't usually the case (otherwise, we'd see CPU performance scale almost linearly with memory speed, and we definitely don't!).
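For anyone who wants to check the arithmetic, here it is as a quick Python sketch (the dual-channel configuration and the pJ/b figures from the article are the assumptions):

```python
# Worst-case Infinity Fabric power for DRAM traffic, per the figures above:
# dual-channel DDR5-5600 (2 x 64-bit channels) fully saturated, with the
# article's 1.5 pJ/bit off-die vs. 0.1 pJ/bit on-die transfer costs.

CHANNELS = 2
BUS_WIDTH_BITS = 64
TRANSFERS_PER_SEC = 5600e6          # DDR5-5600 = 5.6 GT/s per channel

bits_per_sec = CHANNELS * BUS_WIDTH_BITS * TRANSFERS_PER_SEC
print(f"Peak DRAM bandwidth: {bits_per_sec:.4e} b/s")   # 7.1680e+11 b/s

PJ = 1e-12                          # 1 picojoule in joules
watts_fabric = bits_per_sec * 1.5 * PJ   # pJ/s -> W at full saturation
watts_ondie = bits_per_sec * 0.1 * PJ

print(f"Infinity Fabric: {watts_fabric:.4f} W")   # 1.0752 W
print(f"On-die transfer: {watts_ondie:.4f} W")    # ~0.0717 W
```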

Granted, the Infinity Fabric data rate will be higher for multiple compute dies, since there's sometimes communication between them, but I think we can safely say that power savings isn't the primary reason why their APUs are monolithic. I think the main reason is one of cost savings. I'm pretty sure I've read as much, but I can't cite a source on that.


BTW, on the topic of energy efficiency, I'd also like to say that while the H.264 and H.265 encodes per day per Watt were nice, it would've been even better to have additional per-Watt graphs, like for Y-Cruncher or Blender.


Anyway, thanks & keep up the good work!
 

bit_user

Polypheme
Ambassador
If you are going to buy a ~~console~~ laptop CPU, why not just buy a ~~console~~ laptop?
Fixed that for you!

If you make the right selection of CPU now, you can keep the whole system, except for the GPU, for at least 5-6 years
What about someone who doesn't have money for a faster CPU or nice dGPU, right now? Getting one of these APUs gets their foot in the door, but leaves an upgrade path where they could later add a dGPU and eventually swap in a faster CPU.
 

bit_user

Polypheme
Ambassador
16 TOPS is weak. It has an efficiency advantage in laptops when it can be used. I don't think there's any chance that it can make upscaling or raytracing better.
I mostly agree, except about upscaling. Consider that you could run DLSS on a lowly RTX 2060, which had a nominal performance of about 42 TOPS (base freq.). However, if you look at the way Tensor "cores" were implemented by Turing, they're just tensor product pipelines fed by the same SIMD registers used to feed the CUDA cores and controlled by instructions from the same warps that drive the CUDA cores.

Therefore, if 85% of the GPU's time is spent on primary rendering, that leaves only about 15% for DLSS. 15% of 42 TOPS is just 6.3 TOPS, or less than half what the XDNA engine in the 8700G packs. The key difference is that the XDNA engine probably isn't doing anything else, so you can devote most or all of its 16 TOPS to upscaling.

While I don't want to minimize the difficulty of upscaling, I'd point out that it only needs to beat the quality of conventional scaling algorithms, and I think that's an achievable target for such an amount of compute.
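For anyone who wants to plug in their own numbers, here's the arithmetic above as a quick Python sketch (the 85/15 split is just my assumed figure):

```python
# How much tensor throughput is actually available for DLSS, if the tensor
# pipes share registers and issue slots with primary rendering (as above).

RTX2060_TENSOR_TOPS = 42.0   # nominal, at base clock
XDNA_TOPS = 16.0             # the 8700G's NPU

render_share = 0.85          # assumed fraction of GPU time on primary rendering
dlss_budget = RTX2060_TENSOR_TOPS * (1.0 - render_share)
print(f"Tensor budget for DLSS on the 2060: {dlss_budget:.1f} TOPS")  # 6.3

# Per the reasoning above, the XDNA engine isn't contended, so nearly all
# of its throughput could go to upscaling:
print(f"XDNA vs. contended 2060 budget: {XDNA_TOPS / dlss_budget:.1f}x")  # ~2.5x
```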
 
Fixed that for you!
Gaming is not an integral part of a laptop; you can do laptop work with an iGPU that barely provides any display. But gaming is integral to consoles and gaming handhelds.
What about someone who doesn't have money for a faster CPU or nice dGPU, right now? Getting one of these APUs gets their foot in the door, but leaves an upgrade path where they could later add a dGPU and eventually swap in a faster CPU.
Yes, people who don't have money will waste a lot of money on something (the iGPU) that they will stop using as soon as they upgrade.
Getting an APU with the intent of getting a GPU later is the most wasteful thing ever.
 

usertests

Distinguished
Mar 8, 2013
573
543
19,760
While I don't want to minimize the difficulty of upscaling, I'd point out that it only needs to beat the quality of conventional scaling algorithms, and I think that's an achievable target for such an amount of compute.
Well, this story wasn't out when I wrote that comment:

https://www.theverge.com/2024/2/12/24070426/microsoft-windows-11-dlss-ai-super-resolution-feature


Not that we have enough information to know how useful these NPUs will be with this unannounced feature, but someone will figure it out eventually.

I still don't think that AMD or Nvidia are going to try to harness an NPU with their discrete graphics, which is what the comment I was replying to was talking about.
 
Reactions: bit_user
I mostly agree, except about upscaling. Consider that you could run DLSS on a lowly RTX 2060, which had a nominal performance of about 42 TOPS (base freq.). However, if you look at the way Tensor "cores" were implemented by Turing, they're just tensor product pipelines fed by the same SIMD registers used to feed the CUDA cores and controlled by instructions from the same warps that drive the CUDA cores.

Therefore, if 85% of the GPU's time is spent on primary rendering, that leaves only about 15% for DLSS. 15% of 42 TOPS is just 6.3 TOPS, or less than half what the XDNA engine in the 8700G packs. The key difference is that the XDNA engine probably isn't doing anything else, so you can devote most or all of its 16 TOPS to upscaling.

While I don't want to minimize the difficulty of upscaling, I'd point out that it only needs to beat the quality of conventional scaling algorithms, and I think that's an achievable target for such an amount of compute.
But this is why DLSS doesn't offer anything close to linear scaling in performance. Realistically, on something like an RTX 2060, the shader cores and execution units are likely at closer to 99% utilization for most workloads. Cut the resolution in half, then dedicate a chunk of the processing to DLSS, and you still come out ahead.

In practice, outside of RT games running at higher resolutions, DLSS Quality mode generally gives you about a 30~40 percent boost in performance compared to native, in GPU-limited situations. So, on a 2060 that has a theoretical 51 TFLOPS of FP16 compute (because the boost clock is absolutely more useful than the base clock), cut the shader ops in half by dropping the resolution and you have more like 25 TFLOPS to do DLSS upscaling.

This is also why DLSS upscaling on things like RTX 3050/3050 Ti laptops can be more problematic. Well, that and their limited VRAM for sure! But you can still cut the shader calcs in half and then use the remainder to upscale to get an overall boost in FPS. Fundamentally, if you can reduce the total amount of compute being done (shaders plus tensor) by 25%, you can get a 33% boost in FPS.
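Put differently, frame rate scales inversely with total compute per frame, all else being equal. Here's that relationship as a quick sketch (the 0.5x shader and 0.25x tensor figures are illustrative, not measured):

```python
# If frame time is proportional to total compute per frame (shader + tensor),
# then FPS scales as 1 / compute. Illustrative numbers, not measurements.

def fps_boost(relative_compute: float) -> float:
    """Relative FPS gain when per-frame compute drops to `relative_compute` of native."""
    return 1.0 / relative_compute - 1.0

# DLSS Quality sketch: roughly half the shader work after dropping the render
# resolution, plus an assumed 0.25x of native compute spent on upscaling.
shader = 0.5
tensor = 0.25
total = shader + tensor        # 0.75x native compute, i.e. a 25% reduction

print(f"FPS boost: {fps_boost(total):.0%}")  # 33%
```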
 
Reactions: bit_user

bit_user

Polypheme
Ambassador
cut the shader ops in half by dropping the resolution and you have more like 25 TFLOPS to do DLSS upscaling.
Good points, but at least it sounds like we're in agreement that you don't need all of a GPU's tensor performance to do AI upscaling. Perhaps 16 TOPS would be enough to upscale from 720p to 1080p? That sounds like a plausible setup, for an iGPU-based laptop.
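As a very rough sanity check on that guess, here's a back-of-the-envelope budget (all numbers are my own illustrative assumptions, not published figures):

```python
# Very rough ops-per-pixel budget for 720p -> 1080p upscaling at 60 fps.
# All numbers here are illustrative assumptions, not published figures.

NPU_TOPS = 16.0
OUT_PIXELS = 1920 * 1080
FPS = 60

ops_per_pixel = (NPU_TOPS * 1e12) / (OUT_PIXELS * FPS)
print(f"{ops_per_pixel:,.0f} ops per output pixel")   # ~128,600

# For scale: a small 3x3 conv layer with 32 input and 32 output channels
# costs about 3*3*32*32*2 = 18,432 ops per pixel, so a handful of such
# layers would fit in the budget, at least on paper.
layer_ops = 3 * 3 * 32 * 32 * 2
print(f"~{ops_per_pixel / layer_ops:.0f} such conv layers fit per frame")  # ~7
```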
 
Good points, but at least it sounds like we're in agreement that you don't need all of a GPU's tensor performance to do AI upscaling. Perhaps 16 TOPS would be enough to upscale from 720p to 1080p? That sounds like a plausible setup, for an iGPU-based laptop.
I've asked Nvidia about this in various ways over the years, and it never really says precisely how much compute is needed. Which, on the one hand, is understandable. It uses as much as it needs, based on the framerate. But at some level, there's a maximum throughput in FPS from the GPU for a given resolution upscale. I don't know if it's different for various games, though.

I suspect ultimately, faster GPUs can do higher FPS with upscaling. A faster NPU would likewise be able to hit a higher max throughput.
 
Mar 3, 2024
1
0
10
And those savings come before we add the extra platform costs associated with the AM5 platform. As recently as this month, AMD has clearly stated that it will continue to bring new value processors to the AM4 platform due to the continued high pricing for DDR5, hinting that this condition will persist for some time. While DDR5 pricing has fallen from the stratospheric heights we saw at launch, it remains significantly more expensive than DDR4, and all signs point to it jumping in price again due to market conditions.

This is false information at best and completely biased against AMD at worst. Sure, the AMD APU platform is kind of questionable for the price versus the performance you get compared to even an Intel i3-12100 plus an Intel Arc A750 GPU or Nvidia GeForce RTX 4060, but to claim in 2024 that it's due to AM5 and DDR5 RAM is not true. I have found only a $40 difference between comparable DDR4 (G.Skill) and DDR5 (Corsair) RAM kits. This isn't the first time the people at TH have been biased against AMD in favor of Intel's power-hungry and inefficient options (like the AMD Ryzen 9 7950X vs. the Intel Core i9-13900K).

At the time, sure, DDR5 was more expensive. But as I have just shown, DDR5 has come down so far in price that it's next to impossible to recommend AM4. It is also basically a stagnant platform now, with almost all of AMD's focus on AM5, and Intel will almost certainly drop DDR4 support with Arrow Lake later this year, as it already did with Meteor Lake earlier this year. Besides, on desktop PCs, Intel still lacks a compelling high-end option to counter even the Ryzen 7 7800X3D in gaming, and the CPUs that come close use much more power to hit that performance. The i7-14700K and i9-14900K both use way too much power for what you get and still fall short of even the Ryzen 7 7800X3D in gaming performance.

But back to the Ryzen 8000G APUs: I concur with the opinion of not recommending them to most people, but my reasons are completely different from TH's. And you guys really need to stop criticizing what you're clearly wrong about (the DDR4 vs. DDR5 pricing, especially when DDR4 won't even be an option very soon).
 

rsquared

Distinguished
Jul 10, 2005
36
4
18,535
"though the A620 is also attractive"

Not sure about that. I put an 8700G on an A620 motherboard, with the intention of making an HTPC that could also do some basic gaming. It's totally unusable—the iGPU crashes over and over and over. ("Display driver amduw23g stopped responding and has successfully recovered", which seems like a not-uncommon problem on AMD graphics hardware.)

Maybe it would work properly on a B650 mobo, but I don't really want to spend US$200 minimum for something that might possibly work, maybe.
 
Reactions: Order 66