AMD vs Intel Integrated Graphics: Can't We Go Any Faster?

Are we forgetting who the APU market is? ... The most demanding games that they play is the variations of solitaire, or an updated version of minesweep...
That's not always true.

Perhaps nobody buys Intel APUs intending to game, but plenty of people buy Ryzen APUs to play games on the cheap, since the integrated graphics are easily capable of running many popular but non-demanding games like Fortnite or CS:GO.
Intel HD really cannot run those titles nearly as well.
 
That's not always true.

Perhaps nobody buys Intel APUs intending to game, but plenty of people buy Ryzen APUs to play games on the cheap, since the integrated graphics are easily capable of running many popular but non-demanding games like Fortnite or CS:GO.
Intel HD really cannot run those titles nearly as well.
AMD APUs would be good for Fortnite, CS:GO, LoL, and older games.

But that's about it.

There is a demand for that level of performance on a big screen TV.
 
Just an FYI, DDR5 provides more effective bandwidth than DDR4 even at the same frequency/data rate. So DDR5-6400 ought to provide more than double the effective bandwidth of DDR4-3200. I haven't read enough yet to understand why that is though 😛
https://www.micron.com/-/media/clie...r/ddr5_more_than_a_generational_update_wp.pdf
Prefetch buffer will be up to double the size, which can translate into 25-36% performance improvement at the same transfer rate (depending on where you get your information). I'm not sure about the impact on latency (or how the buffer affects latency).
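As a rough sketch of the raw numbers (my own back-of-the-envelope math with assumed bus-efficiency figures, not anything from the whitepaper): the theoretical peak per DIMM only scales with data rate, so any extra gain at the same rate has to come from efficiency improvements like the split subchannels and longer bursts.

```python
# Back-of-the-envelope DRAM bandwidth math; the efficiency factors are
# illustrative assumptions, not measured or vendor-published numbers.

def peak_bandwidth_gbs(transfer_rate_mtps, bus_width_bits=64):
    """Theoretical peak bandwidth of one DIMM in GB/s."""
    return transfer_rate_mtps * 1e6 * (bus_width_bits / 8) / 1e9

ddr4_3200 = peak_bandwidth_gbs(3200)  # 25.6 GB/s
ddr5_6400 = peak_bandwidth_gbs(6400)  # 51.2 GB/s

# DDR5 splits the DIMM into two independent 32-bit subchannels with longer
# bursts, which lets it sustain a larger fraction of that peak. The 0.70 and
# 0.85 efficiency figures below are hypothetical, just to show the idea.
print(f"DDR4-3200: peak {ddr4_3200:.1f} GB/s, effective ~{ddr4_3200 * 0.70:.1f} GB/s")
print(f"DDR5-6400: peak {ddr5_6400:.1f} GB/s, effective ~{ddr5_6400 * 0.85:.1f} GB/s")
```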

If you build an AMD APU, or any AMD/Intel dedicated GPU, system right now, DDR5 (USB4, PCI-e 5, etc.) will be the party crasher 1-2 years from now. That is ok, you will still be able to use today's system for more than just watching (as another here put it) kitten videos on YouTube. By then Nvidia should have newer AI that 'is' more than just Pacman (per recent news), so we can watch those videos instead. 😉
 
I'm surprised that AMD doesn't sell an APU similar to what they put into the Xbox or PS. They should make a 100-150 W TDP APU. In the process they would cannibalize their low-end market, but they would also dominate the PC gaming market.
Likely they couldn't find evidence of a market niche worth pursuing. Keep in mind such a PC would be using much slower PC-type RAM and thus would perform substantially worse than the consoles. You could perhaps have a dedicated slot for higher-performance video memory, but that gets way too close to the price of a discrete video card. If Microsoft thought there was a market for this, they could have a product out fairly quickly, but over the last seven years they haven't seen a reason to do so.
 
These APUs are last-gen. Why even write about them, unless Intel wants them downplayed?

APUs are AWESOME, and not only because you can troubleshoot problems quickly by running two graphics outputs from the secondary graphics.

You can use the extra cores to run what's effectively a second computer with VMware or IBIK Aster.
That's right: grab a Raspberry Pi 4 and stream Steam from your PC without sharing your screen.

Most games from when the A10 released were OK to play, which I can say as a current user of one, Mr. I-can't-test-everything-and-write-a-biased-article.
 
It's all about money for sure -- but not the way you're suggesting. No one has successfully made a fast iGPU solution for PCs. You're saying Microsoft paid off not just AMD, but every other company that might think about doing such a thing. Not a chance. And it's equally unlikely MS was able to pay AMD off.

Easiest way to disprove that assertion: If MS could pay AMD to not make a faster iGPU for PC ... wouldn't it make far more sense to pay AMD to not make such a thing for Sony? Microsoft isn't worried about competition from PCs killing the market for Xbox at all. It hasn't been a problem since the original Xbox; why would things change now?

And $50 for 4GB of HBM2 is extremely expensive on a component level. I gave math earlier, but basically a chip like Picasso costs AMD around $35 to make, and Renoir is maybe $50-$60. So if AMD put a $60 chip with $100 of HBM2, it would need to either sell tens of millions of the chips at around $200 each, or else price would have to be much, much higher -- like $500+.
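To put rough numbers on that argument, here is a sketch using the cost figures above plus assumed markup multipliers (the multipliers are illustrative assumptions, not anything AMD has disclosed):

```python
# Rough chip-pricing arithmetic; the cost figures come from the post above and
# the markup multipliers are assumptions for illustration only.
die_cost = 60     # assumed manufacturing cost of a Renoir-class APU die, $
hbm2_cost = 100   # assumed cost of the HBM2 stack plus interposer/packaging, $
bom = die_cost + hbm2_cost

# Chip vendors need a multiple over manufacturing cost to cover R&D, packaging,
# validation, support, and margin; the exact multiple is an assumption here.
for markup in (1.25, 2.0, 3.0):
    print(f"{markup:.2f}x markup -> selling price ~${bom * markup:.0f}")
# Only a thin ~1.25x markup (viable only at huge volume) lands near $200;
# a more typical multiple pushes the chip toward $500.
```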

There simply isn't enough demand or profit potential in making an extreme-performance integrated graphics solution right now. Only custom designs that are basically guaranteed to sell tens of millions of units over time (Xbox, PlayStation, or Apple) can justify the cost.
Not to mention, Microsoft is a PC vendor too these days. (That still feels strange.) If there were a perceived market for such a PC, Microsoft would be in an ideal position to produce it. But equipping it with the same kind of RAM used in the Xbox (or else taking a major performance hit) would drive up the cost to the point of competing with small systems that use laptop parts in non-mobile designs. It would be too narrow a niche to bother with.
 
This article really needs a followup, as I find running current high end games on IGA systems to be missing the point. A person who can only afford an inexpensive APU box isn't likely to spend $30-60 on recent releases when bundles of older games abound for just a few dollars with much more modest hardware requirements.

Some of these games were demanding when they were new, serving as benchmarks for the most powerful GPUs of the time. Remember when a GeForce 4 was a costly high-end product? I have a Ryzen APU laptop that came in around $600 once I was done upgrading the RAM and storage. (I kind of went overboard and could have spent at least $150 less and still had an effective machine for work and entertainment on the road. The base unit was only $300.) It's at least on par with an Xbox 360 or PS3 (if those had PCIe SSDs) for running games from my Steam account. I avoid stuff I know is going to be too demanding, but that leaves a vast number of possible items to play.

So, an article comparing current IGA with the top GPU choices of ten years ago, running the games from that era that can now be had for cheap, would be of interest to me and those supporting users on modest budgets. I'm not interested in whether the cheap machine runs Just Cause 4 well but rather how it does on Just Cause 1 or 2. Is it on par with the high-end hardware of yore, or possibly better?
 
I have tried Ori and the Blind Forest on an A10-6800B (I believe it was an HP EliteDesk 750 G1) with an HD display, and the game ran fine. I only had to turn the settings down to medium, but there was no stutter or frame drops that I can remember.

I'm guessing that for a really low-cost, entry-level gaming/working/studying PC, AMD APUs do make some sense.
 
I totally disagree with the article about the reasons behind the lack of faster integrated GPUs.

It is all about the money. I think Microsoft made a deal with AMD not to release similar hardware for desktops so their console won't take a huge hit in market share, and to get AMD to agree they just gave them bigger volume orders than the desktop market will ever provide. That's all there is to it.

GDDR5 expensive? Not THAT expensive. 8GB of GDDR5 costs around $50, $100 for 16GB of GDDR5, and I don't see this as a problem at all.

HBM2 is double the price of GDDR5, so 4GB of HBM2 would be around $50, and I think that's not a bad price at all. It would find a market in APUs.

Way too simplistic to make it all about money. The memory bottleneck is one thing, as was mentioned, but another thing people often overlook is power consumption. Dedicated GPUs have slowly but steadily kept increasing their power target. When you buy a high-end GPU today, it comes with a kilogram of cooling solution, three fans, and 200 W or more of peak power consumption. When GPUs started becoming popular in the early 2000s, they were fanless solutions and later single-fan cards with a small heatsink, with maximum power consumption not exceeding 100 W.

As long as dedicated GPUs keep getting bigger and heavier with every release, there is no way for an integrated solution to ever catch up; it's a simple matter of physics. Even if they theoretically made a 200 W integrated GPU, it would run into even more cooling issues due to the heat from the CPU side. As the article briefly mentioned, perhaps a more useful way to measure integrated solutions' performance would be performance per watt.
 
I'd rather see the glass as half full. Vega 8/11 makes a lot of sense in budget gaming laptops, where options for a discrete GPU are much more limited and expensive. For a few hundred dollars you can easily find a Ryzen laptop with Vega 8 that will play most games just fine.
Even if enthusiasts won't like it, the truth is that most games are quite enjoyable at a solid 30 FPS, particularly if you're just a casual gamer playing for fun. Not everyone is playing competitive 3D shooters 24/7. Or, put another way, people with high-end needs already have plenty of options. Vega 8/11 targets other people.
 
Another completely crap comparison. Where do I begin...

Price: (From Newegg.com)
Intel 9700k - $379.00
AMD 3400g - $159.00
GTX 1050 - $159.99 + Intel 9700k($379.00) = $539
Why wouldn't this comparison have the Intel 9400 @ $165.99, with its UHD 630?

Conclusion: If you're going to spend a lot more, you should get more performance. A Pentium G5400 ($61 + 1050 = $220) would have been much closer in dollars spent.

Methodology:
Basically, running these tests at only low settings and such low resolutions is how CPUs are tested. By putting the bottleneck on the CPU and not the GPU, it artificially makes the 1050 look better than it would under conditions more realistic to what a consumer would actually purchase.

What Tom's isn't saying:
If you do anything else with your computer, the AMD solution completely destroys every other CPU comparison there is when looking at dollars spent.

What others are saying:
Look at videos on YT:
Ryzen 5 3400G & RX Vega 11 vs Pentium G5400 & GTX 1050 (CS:GO, Fortnite, LoL, GTA 5...)
https://www.youtube.com/watch?v=Y8y4aJfpF_I

(Shows about a 25% to 50% improvement for the 1050, depending on the game, not a 100% improvement like Tom's is showing)

Ryzen 5 2400G vs. GTX 1050 vs. RX 560 (DDR4-3400) Gaming Benchmark Test
https://www.youtube.com/watch?v=QQRO0KfUoAE

(Shows about a 20% to 40% improvement, depending on the game, not a 100% improvement like Tom's is showing)

I'm not sure why Tom's would want to cast the AMD solution in a bad light, or why it's trying to make the Intel/Nvidia solution look better than it would or should be for the average consumer.

Losing credibility again Tom's... I guess we should just buy it!
 
Methodology:
Basically, running these tests at only low settings and such low resolutions is how CPUs are tested. By putting the bottleneck on the CPU and not the GPU, it artificially makes the 1050 look better than it would under conditions more realistic to what a consumer would actually purchase.
Nobody who buys an APU uses it to run games at 4K high detail, as these APUs are not capable of running at such settings. Why would you test settings that you know most of these GPUs cannot handle?
What Tom's isn't saying:
If you do anything else with your computer, the AMD solution completely destroys every other CPU comparison there is when looking at dollars spent.
This isn't a CPU comparison; it's an iGPU comparison with a dGPU for reference. If you want to know how good the CPU side is, you look at a CPU review, not this one. I would not expect someone to talk a ton about CPU performance and value when the whole test is about iGPUs.

Price: (From Newegg.com)
Intel 9700k - $379.00
AMD 3400g - $159.00
GTX 1050 - $159.99 + Intel 9700k($379.00) = $539
Why wouldn't this comparison have the Intel 9400 @ $165.99, with its UHD 630?
Likely whatever CPUs Tom's had on hand.

Also, nobody buys the i5 9400 except prebuilts, since it is not often in stock at a reasonable price. Most people buy the 9400F.

Does it matter? No.

Intel's i7 iGPU loses to basically every other GPU tested, so it's not like the i5's iGPU is going to change anything.
 
Prefetch buffer will be up to double the size, which can translate into 25-36% performance improvement at the same transfer rate

Prefetch buffers in DDR memory have little to do with caches. They're there to enable the increased transfer rates: a doubled prefetch allows for double the maximum transfer rate (6400 MT/s versus 3200 MT/s).
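A quick worked example of that relationship (the 400 MHz core clock below is just an assumed round number to keep the arithmetic clean):

```python
# How prefetch length relates to transfer rate: each DRAM array access fetches
# `prefetch` bits per data pin, which are then streamed out at the I/O rate.
# The core/array clock is assumed to be 400 MHz purely for illustration.
core_clock_mhz = 400

for name, prefetch in [("DDR4", 8), ("DDR5", 16)]:
    transfer_rate = core_clock_mhz * prefetch  # MT/s per pin
    print(f"{name}: {prefetch}n prefetch @ {core_clock_mhz} MHz core -> {transfer_rate} MT/s")
# Same core clock, doubled prefetch -> DDR4-3200 vs DDR5-6400.
```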
 
This article really needs a followup, as I find running current high end games on IGA systems to be missing the point. A person who can only afford an inexpensive APU box isn't likely to spend $30-60 on recent releases when bundles of older games abound for just a few dollars with much more modest hardware requirements.

Some of these games were demanding when they were new, serving as benchmarks for the most powerful GPUs of the time. Remember when a GeForce 4 was a costly high-end product? I have a Ryzen APU laptop that came in around $600 once I was done upgrading the RAM and storage. (I kind of went overboard and could have spent at least $150 less and still had an effective machine for work and entertainment on the road. The base unit was only $300.) It's at least on par with an Xbox 360 or PS3 (if those had PCIe SSDs) for running games from my Steam account. I avoid stuff I know is going to be too demanding, but that leaves a vast number of possible items to play.

So, an article comparing current IGA with the top GPU choices of ten years ago, running the games from that era that can now be had for cheap, would be of interest to me and those supporting users on modest budgets. I'm not interested in whether the cheap machine runs Just Cause 4 well but rather how it does on Just Cause 1 or 2. Is it on par with the high-end hardware of yore, or possibly better?

Not only is running high-end games missing the point; another missed point is that there are many applications that use the GPU to improve or optimize performance. I don't use it for gaming of any sort, but the only real complaint I have with my '18 Mac Mini is the lack of iGPU horsepower. A USB-C/TB3 eGPU is a satisfactory solution for a lot of folks, but it's certainly not an elegant or cost-effective one.
 
Nobody who buys an APU uses it to run games at 4K high detail, as these APUs are not capable of running at such settings. Why would you test settings that you know most of these GPUs cannot handle?

This isn't a CPU comparison; it's an iGPU comparison with a dGPU for reference. If you want to know how good the CPU side is, you look at a CPU review, not this one. I would not expect someone to talk a ton about CPU performance and value when the whole test is about iGPUs.

At no time did I ever say 4K gaming; I'm not sure where that is coming from. Making an outrageous statement and implying that's what I meant is not accurate. 1080p gaming at low or medium settings, as found in the links I originally provided, would be more representative of what actual consumers would use. You should check them out, as they better represent what these chips are capable of.

Likely whatever CPUs Tom's had on hand.
That is just a cop-out statement... If Tom's can't make a proper comparison, maybe they shouldn't misrepresent their testing results.

Also, nobody buys the i5 9400 except prebuilts, since it is not often in stock at a reasonable price. Most people buy the 9400F.
It's again back to price: if you overpower the CPU, you can get more out of your GPU, but that's not how the average consumer would see it. They would not pair a $379 CPU with a $159 GPU.

Does it matter? No.
Well, yeah, it does. This unequal pairing makes it look like the Intel/Nvidia solution offers double the performance, while the average consumer would actually only see about a 20-50% performance increase.

Intel's i7 iGPU loses to basically every other GPU tested, so it's not like the i5's iGPU is going to change anything.
Unless you match it up with a dGPU like a 1050; that's the part that makes it a poor apples-to-apples comparison. The fact that they test below 1080p is another problem that exacerbates the mismatched comparison and misrepresents the Nvidia 1050's performance relative to the AMD APU.
 
At no time did I ever say 4K gaming; I'm not sure where that is coming from. Making an outrageous statement and implying that's what I meant is not accurate. 1080p gaming at low or medium settings, as found in the links I originally provided, would be more representative of what actual consumers would use. You should check them out, as they better represent what these chips are capable of.

That is just a cop-out statement... If Tom's can't make a proper comparison, maybe they shouldn't misrepresent their testing results.

It's again back to price: if you overpower the CPU, you can get more out of your GPU, but that's not how the average consumer would see it. They would not pair a $379 CPU with a $159 GPU.

Well, yeah, it does. This unequal pairing makes it look like the Intel/Nvidia solution offers double the performance, while the average consumer would actually only see about a 20-50% performance increase.

Unless you match it up with a dGPU like a 1050; that's the part that makes it a poor apples-to-apples comparison. The fact that they test below 1080p is another problem that exacerbates the mismatched comparison and misrepresents the Nvidia 1050's performance relative to the AMD APU.
This isn't a review, and it's not meant to show every possible variation of performance for a specific amount of money. There is no "misrepresentation" of performance or other data, as I explicitly list every piece of hardware used for the testing. It is purely a look at integrated graphics solutions and how they compare with a basic budget GPU. No recommendation is made as to whether you should buy the 9700K over the 3400G, or vice versa -- and no testing of CPU performance was conducted.

The main motivation behind this was to illustrate, with current games, just how big the gap is between even budget GPUs and 'fast' integrated graphics. That also led into a discussion on why high performance integrated graphics makes sense on consoles, but has never been attempted on PCs. It's an interesting topic, I think, and so I conducted some tests and discussed what everything means.

The 9700K was used in order to have the highest performance UHD 630 configuration, and because I had it (and it was installed in a PC already). I do not have every other Intel CPU, but I can guarantee that if I were to use something like the Pentium Gold G5600, Intel's GPU performance wouldn't change much. (I've tested this in the past.) It has one less EU and is clocked 50MHz slower, so it might be 5-10% slower. However, memory bandwidth is still a bottleneck, and the CPU certainly isn't holding back the GPU.

Also, that goes for the GTX 1050. The CPU might make a very small difference in performance at 720p minimum, but in practice the GTX 1050 is so slow that just about any CPU will easily hit maximum performance on the GPU. Here's an example where I tested GTX 1080 performance with a Pentium Gold G4560 three years ago: https://www.pcgamer.com/intel-pentium-g4560-review/ With a GTX 1080 at 1080p, a high-end CPU was 35% faster than the G4560. But with a GTX 1060, the high-end CPU was only 11% faster. I'm using a GTX 1050, and the GTX 1060 6GB is on average 60% faster at 1080p, which means that 11% boost from the CPU performance is going to be gone.
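As a toy illustration of that scaling argument (the framerates in this sketch are made up purely to show the shape of the effect, not measured data): delivered framerate is roughly capped by whichever of the CPU or GPU hits its limit first, so a slow GPU hides most of the difference between CPUs.

```python
# Toy bottleneck model: delivered fps is roughly min(CPU-limited, GPU-limited).
# All framerates below are hypothetical, chosen only to show the shape of the
# effect described above, not measured results.

def delivered_fps(cpu_limit, gpu_limit):
    return min(cpu_limit, gpu_limit)

budget_cpu, fast_cpu = 90.0, 160.0  # assumed CPU-limited framerates
gpus = [("GTX 1080-class", 150.0), ("GTX 1060-class", 95.0), ("GTX 1050-class", 55.0)]

for name, gpu_limit in gpus:
    slow = delivered_fps(budget_cpu, gpu_limit)
    fast = delivered_fps(fast_cpu, gpu_limit)
    print(f"{name}: fast CPU advantage = {fast / slow - 1:.0%}")
# A big CPU advantage shows up with the fastest GPU, a small one with a midrange
# GPU, and essentially none once the GTX 1050-class card is the hard ceiling.
```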

You've apparently completely missed the point of this article. It wasn't to show what you can do with integrated graphics, or where Intel (or AMD) integrated graphics is 'good enough.' That has been covered plenty of times elsewhere. Here, I'm showing what you cannot do (with some integrated graphics solutions). You can play a lot of games from 2010 (and earlier) just fine on Intel UHD 630. But what about modern games?

I guess no one should buy a high-end graphics card, or a 4K display, or a 144Hz display because games from 10 years ago run just fine. Meanwhile, AMD's Vega 11 is two to three times as fast, and a GTX 1050 is anywhere from 35% to 150% faster than Vega 11 (depending on the game and settings used).

If you only want to play lightweight games, this article was never meant to cover that testing. If you want to play older games, likewise: not for you. But Intel's Xe Graphics is supposed to bring a big boost in performance when it launches, and I need to have a baseline measurement to see whether that's actually the case or not. Likewise, I need something to compare it against from AMD, and right now on desktops that means Vega 11.

I'm planning to test GPU performance of Ice Lake and Renoir shortly as a follow up. It will be done in a similar fashion. Those will both be in laptops, and I'll be running at 25W cTDP Up to ensure performance is as high as possible.

As for your assertion that the 720p low testing somehow makes the 1050 look better, you're absolutely wrong. I tested the AMD 3400G and Vega 11 at 1080p medium, and I also tested the GTX 1050 at those settings. I just didn't report those results.

In overall performance at 720p and minimum quality, Vega 11 was 42% slower than the GTX 1050 -- or if you prefer, the GTX 1050 was 72% faster. The raw numbers are 116.2 fps for GTX 1050 vs. 66.3 fps for Vega 11 (and that's running several games in DX12 mode, even though DX11 mode would have performed better for Nvidia).

At 1080p and medium quality, Vega 11 was also 42% slower -- or the 1050 was 73% faster. As in, even at 1080p, the scaling of performance from Vega 11 to GTX 1050 was virtually identical! What's more, only three of the games broke 30 fps on Vega 11, whereas every game hit 30 fps or more on the 1050. The overall performance: 27 fps average on Vega 11, 46.6 fps on GTX 1050.
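For anyone converting between those two framings, "X% slower" and "Y% faster" are reciprocal views of the same ratio; a quick sketch using the raw averages quoted above:

```python
# Converting between "A is X% slower than B" and "B is Y% faster than A",
# using the overall 720p averages quoted above.

def pct_slower(a, b):
    return 1 - a / b

def pct_faster(a, b):
    return b / a - 1

vega11, gtx1050 = 66.3, 116.2
print(f"Vega 11 is {pct_slower(vega11, gtx1050):.0%} slower")   # ~43% slower
print(f"GTX 1050 is {pct_faster(vega11, gtx1050):.0%} faster")  # ~75% faster
# The two framings are reciprocal: 42% slower corresponds to 1/(1-0.42)-1 ~ 72%
# faster. Small differences from the rounded figures above come from how the
# overall average across games is computed.
```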

TL;DR: Just because an article isn't about what you want it to be about doesn't make the article wrong, or mean we shouldn't write the article. This isn't a buying guide, or a best integrated graphics guide, or why AMD APUs are the greatest and Intel iGPUs are the worst. It's just a look at how things currently stand, and proof that there's a huge gap in performance between integrated and discrete GPUs.

Edit: fixed a typo where I said "GTX 1050" but meant "GTX 1080."
 
It's again back to price: if you overpower the CPU, you can get more out of your GPU, but that's not how the average consumer would see it. They would not pair a $379 CPU with a $159 GPU.
As was pointed out, these demanding games would be primarily graphics-performance limited even at low resolutions on this hardware.

But even if one were building a low-end system with a low-end graphics card, there's no reason they couldn't pair it with something like a Ryzen 3 3300X, which costs less than the 3400G tested here, trading integrated graphics for a single-CCX Zen 2 design. Even in primarily CPU-limited gaming tests (like being paired with a 2080 Ti at 1080p), the 3300X typically performs within around 10% of an i7-9700K in today's games. And even Zen+ processors like the 3400G are not all that much further behind. There's not exactly a huge rift in gaming performance between the current highest-end CPUs and much lower-end models.
 
The 9700K was used in order to have the highest performance UHD 630 configuration, and because I had it (and it was installed in a PC already). I do not have every other Intel CPU, but I can guarantee that if I were to use something like the Pentium Gold G5600, Intel's GPU performance wouldn't change much. (I've tested this in the past.) It has one less EU and is clocked 50MHz slower, so it might be 5-10% slower. However, memory bandwidth is still a bottleneck, and the CPU certainly isn't holding back the GPU.

Next time, you don't have to go that low. You can go with Core i3 chips.

I've seen benchmarks of the lower end chips before on low end Intel iGPUs. It's 15-20% slower.

The thing is, the older games that can run on the HD 630 iGPU will run at high enough frame rates to be impacted by the CPU.

Yes, not many are going to be getting a 9700K for that, but there are Core i3s.

HD 630 isn't really bottlenecked by memory bandwidth. Yes, faster memory will help but DDR4-3200 is pretty much at the peak.

Looking forward to the Ice Lake/Renoir reviews. I hope you use HWiNFO64 too, because it shows PL1/PL2 for Intel chips (unfortunately there's no equivalent for AMD), so you can be sure it's set at the TDP it's claimed to be at.
 
A fairer comparison would've been to use the Intel Iris GPUs, since the UHD 630 is a midrange Intel iGPU. The Iris Pro 580 from 6th-gen Skylake is still Intel's most powerful iGPU to date, still a bit more powerful than even the latest Iris Plus iGPU in Coffee Lake. But either way, even the Iris Pro 580 is still way behind the GTX 1050 in performance by all measures.
 
This isn't a review, and it's not meant to show every possible variation of performance for a specific amount of money. There is no "misrepresentation" of performance or other data, as I explicitly list every piece of hardware used for the testing. It is purely a look at integrated graphics solutions and how they compare with a basic budget GPU. No recommendation is made as to whether you should buy the 9700K over the 3400G, or vice versa -- and no testing of CPU performance was conducted.

The 9700K was used in order to have the highest performance UHD 630 configuration, and because I had it (and it was installed in a PC already). I do not have every other Intel CPU, but I can guarantee that if I were to use something like the Pentium Gold G5600, Intel's GPU performance wouldn't change much. (I've tested this in the past.) It has one less EU and is clocked 50MHz slower, so it might be 5-10% slower. However, memory bandwidth is still a bottleneck, and the CPU certainly isn't holding back the GPU.

Also, that goes for the GTX 1050. The CPU might make a very small difference in performance at 720p minimum, but in practice the GTX 1050 is so slow that just about any CPU will easily hit maximum performance on the GPU. Here's an example where I tested GTX 1080 performance with a Pentium Gold G4560 three years ago: https://www.pcgamer.com/intel-pentium-g4560-review/ With a GTX 1080 at 1080p, a high-end CPU was 35% faster than the G4560. But with a GTX 1060, the high-end CPU was only 11% faster. I'm using a GTX 1050, and the GTX 1060 6GB is on average 60% faster at 1080p, which means that 11% boost from the CPU performance is going to be gone.

You've apparently completely missed the point of this article. It wasn't to show what you can do with integrated graphics, or where Intel (or AMD) integrated graphics is 'good enough.' That has been covered plenty of times elsewhere. Here, I'm showing what you cannot do (with some integrated graphics solutions). You can play a lot of games from 2010 (and earlier) just fine on Intel UHD 630. But what about modern games?

I guess no one should buy a high-end graphics card, or a 4K display, or a 144Hz display because games from 10 years ago run just fine. Meanwhile, AMD's Vega 11 is two to three times as fast, and a GTX 1050 is anywhere from 35% to 150% faster than Vega 11 (depending on the game and settings used).

If you only want to play lightweight games, this article was never meant to cover that testing. If you want to play older games, likewise: not for you. But Intel's Xe Graphics is supposed to bring a big boost in performance when it launches, and I need to have a baseline measurement to see whether that's actually the case or not. Likewise, I need something to compare it against from AMD, and right now on desktops that means Vega 11.

I'm planning to test GPU performance of Ice Lake and Renoir shortly as a follow up. It will be done in a similar fashion. Those will both be in laptops, and I'll be running at 25W cTDP Up to ensure performance is as high as possible.

As for your assertion that the 720p low testing somehow makes the 1050 look better, you're absolutely wrong. I tested the AMD 3400G and Vega 11 at 1080p medium, and I also tested the GTX 1050 at those settings. I just didn't report those results.

In overall performance at 720p and minimum quality, Vega 11 was 42% slower than the GTX 1050 -- or if you prefer, the GTX 1050 was 72% faster. The raw numbers are 116.2 fps for GTX 1050 vs. 66.3 fps for Vega 11 (and that's running several games in DX12 mode, even though DX11 mode would have performed better for Nvidia).

At 1080p and medium quality, Vega 11 was also 42% slower -- or the 1050 was 73% faster. As in, even at 1080p, the scaling of performance from Vega 11 to GTX 1050 was virtually identical! What's more, only three of the games broke 30 fps on Vega 11, whereas every game hit 30 fps or more on the 1050. The overall performance: 27 fps average on Vega 11, 46.6 fps on GTX 1050.

TL;DR: Just because an article isn't about what you want it to be about doesn't make the article wrong, or mean we shouldn't write the article. This isn't a buying guide, or a best integrated graphics guide, or why AMD APUs are the greatest and Intel iGPUs are the worst. It's just a look at how things currently stand, and proof that there's a huge gap in performance between integrated and discrete GPUs.

Jarred thank you for responding directly to my post.

Again, using one of the best gaming CPUs in existence, the 9700K ($379), with a GTX 1050 2GB vs. the AMD APUs isn't a fair comparison for showing the difference between the iGPU and dGPU. I'm sure you can see how that is not a fair comparison.

I don't think it would really matter which CPU you test the UHD 630 with: i5-9400, Pentium G5400, i7-9700K, etc. The difference would be negligible. I think we can agree that 20 FPS vs. 18 FPS is not an appreciable difference.

I propose a solution: post testing of the 3400G with the GTX 1050 2GB at the same settings. If it doesn't show a 20-30% difference vs. the 9700K + GTX 1050 2GB configuration, I will concede that I was mistaken, but I think it will.

If it doesn't make a difference, post the results of using the 3400G and the dGPU, the GTX 1050 2GB. It wouldn't be that hard to add in, and since you have all the boards set up, it would be easy to do for a more apples-to-apples comparison; it probably should have been included from the beginning.

Just an aside but not really what we are discussing:
When it comes to the API, they can be different; since it's a simple click of the mouse, pick the one that shows the best performance for that hardware configuration.
AMD seems better with Vulkan/DX12; Nvidia is better with DX11 and getting better with DX12.

Another question based on your last post: if you tested at 1080p, why didn't you include that in your article?

Again, thank you for directly trying to address my concerns. I haven't seen anything that would change my position on this topic, and I do hope you take me up on testing the 3400G with the GTX 1050 2GB and adding it to the article.
 
Jarred thank you for responding directly to my post.

Again, using one of the best gaming CPUs in existence, the 9700K ($379), with a GTX 1050 2GB vs. the AMD APUs isn't a fair comparison for showing the difference between the iGPU and dGPU. I'm sure you can see how that is not a fair comparison.

I don't think it would really matter which CPU you test the UHD 630 with: i5-9400, Pentium G5400, i7-9700K, etc. The difference would be negligible. I think we can agree that 20 FPS vs. 18 FPS is not an appreciable difference.

I propose a solution: post testing of the 3400G with the GTX 1050 2GB at the same settings. If it doesn't show a 20-30% difference vs. the 9700K + GTX 1050 2GB configuration, I will concede that I was mistaken, but I think it will.

If it doesn't make a difference, post the results of using the 3400G and the dGPU, the GTX 1050 2GB. It wouldn't be that hard to add in, and since you have all the boards set up, it would be easy to do for a more apples-to-apples comparison; it probably should have been included from the beginning.
The Ryzen 5 3400G has other factors, namely that it runs PCIe at an x8 link width, but I still don't think it will make much if any difference. I could test with a Pentium Gold G5400, or even a Ryzen 3 3200G, and I'm sure the GTX 1050 will be the bottleneck in most games. Which do you want, then: G5400, 3200G, or just stick with the 3400G? (I've swapped the CPU out already, so regardless of which I test, I'll need to change the CPU.)

Just an aside but not really what we are discussing:
When it comes to the API, they can be different; since it's a simple click of the mouse, pick the one that shows the best performance for that hardware configuration.
AMD seems better with Vulkan/DX12; Nvidia is better with DX11 and getting better with DX12.
Yes, generally speaking -- and the GTX 1050 is definitely hindered any time I run it in DX12 mode if DX11 is an option. Metro Exodus, Shadow of the Tomb Raider, Borderlands 3, The Division 2 all run better in DX11. So if you really want to see GTX 1050 performance in the best light, I'll test at 720p using the 'best' API as a comparison point. I haven't done that lately because often it doubles the amount of testing -- I need to check both APIs to see which is faster, and sometimes it's resolution/setting dependent as well. (That mostly only applied to ultra-fast GPUs, however, like RTX 2070 Super and above -- the GTX 1050 likely never benefits from DX12 vs. DX11.)

Another question based on your last post: if you tested at 1080p, why didn't you include that in your article?

Again, thank you for directly trying to address my concerns. I haven't seen anything that would change my position on this topic, and I do hope you take me up on testing the 3400G with the GTX 1050 2GB and adding it to the article.
I didn't include 1080p testing because I only ran it on the Vega 11 and GTX 1050. Since this article was about integrated graphics, and since Intel's UHD 630 clearly can't handle 720p minimum settings in most games, trying to run it at 1080p medium settings either results in crashes, failure to run due to framerate requirements, or performance that's so slow as to be meaningless. Many games also choke if they fall below 10 fps -- everything slows down, physics gets wonky, etc. So a benchmark that takes 60-90 seconds normally might take five minutes or more at 5 fps. That's a lot of time wasted just to prove what is already known. So, including 1080p medium, I'd only have two results in the charts: GTX 1050 and Vega 11 3400G, which isn't very useful.