News Intel Arc Graphics Performance Revisited: DX9 Steps Forward, DX12 Steps Back


Glock24

"Dedicated Graphics, Round Two" and they talk something about a product (or prototype) from 2020.

It seems everybody already forgot about the Intel i740.
 

Glock24

The reason I stick with Nvidia is that Nvidia has native DX9.0 support. AMD and Intel do not.

It has nothing to do with performance for me.

There are so many older DX9.0 titles you can pick up for $1-$5. They always work great on Nvidia; on AMD it is hit or miss. It has nothing to do with performance; many are locked at 60 fps anyway, and the ones that aren't run at 100+ fps on potato hardware. It is about compatibility.

The Nvidia control panel allows you to set V-sync per game, per application, and it works wonderfully on older DX9.0 titles too.

And it doesn't really matter to me that Intel is getting better at DX9.0 performance. It is not about that; it is still not native support. Why would I pick a translation layer over the native support Nvidia offers? There's no reason to pick Intel.

This whole "drivers are getting better, just wait" situation is the exact same reason I switched from AMD to Nvidia back in the day and never went back. I really loathed AMD driver issues and I now base my GPU decisions on driver quality first, and performance second. And Nvidia wins on this every single time. And I am not alone in this, AMD and Intel tend to offer more FPS per $, but Nvidia wins out on stability, compatibility, NVENC, CUDA, etc, and Nvidia dominates as a result.
A Ryzen 5 5600U plays all the old games I've thrown at it under Linux using Steam's Proton or Wine; most of them work better with "emulation" under Linux than under Windows 10.
 
I want a GTX 980 to use in Windows XP. Nostalgic reasons.

https://youtu.be/KM59UYGoZls
I prefer Win 9x over Win XP, much better driver performance... but the 980 is too new for the old Detonator drivers, which had better performance than what they did with the XP drivers and later with the 9x drivers as well :/ (well, on one hand they were improving geometry, but on the other hand, who cared if performance was lost).
Unofficial Z-buffered drivers somewhat returned performance to the old days.
 
On the performance bar charts, the colors for average and 99th percentile frame rates are switched on most graphs.
Oops, you're right. It's because my chart template normally has AMD and Nvidia GPUs in it, and for color blind reasons I try to avoid doing green and red. So Nvidia is blue with black, AMD is red with black, and on Intel I reversed the colors and made it black with blue.
 

MrStillwater

The reason I stick with Nvidia is that Nvidia has native DX9.0 support. AMD and Intel do not.

It has nothing to do with performance for me.

There are so many older DX9.0 titles you can pick up for $1-$5. They always work great on Nvidia; on AMD it is hit or miss.

News to me. I switched nearly two years ago from Nvidia to AMD and haven't had any problems with any DX9 games. I get 400 fps at 1440p in CS:GO. Haven't found any games that haven't run absolutely fine and I play plenty of older ones.
 

abufrejoval

One of my main systems is a "desktop PC" (ATX mid-sized tower) running Windows Server 2022. Designed to be quiet and relatively low power, it needs to do a little bit of everything. It packs lots of storage (HDD RAID6, SSD RAID0 as a Steam cache, and NVMe for the OS and VMs, two dozen terabytes in total), but it also sports a decent 5800X3D CPU and 64GB of ECC RAM.

The main screen is a 42" 4K "work optimized" IPS LG monitor; just like everything else here, gaming plays second fiddle. I have my machine learning workstations for serious gaming double duty, if I ever find the time. The main screen goes through a 4-port DisplayPort 1.2-capable KVM, as do the two 1080p secondaries, which are then switched by another set of KVMs. That setup works fine with pretty much every Intel and AMD iGPU as well as the various Nvidia dGPUs, by the way.

The Arc target system currently runs a GTX 1080 Ti, which has traveled downward from the workstations as those were upgraded to beefier RTX brethren. While that was a perfectly capable GPU at 1080p, at 4K you have to dial down settings or resolution, by which point I might as well just fire up one of the big machines.

Hardware support for AV1 encoding got the ball rolling, and with an A770 selling below €400 (with taxes) I decided to give it a try. The result was a decidedly mixed bag and a return of the card within the legal refund window prescribed for e-commerce in Europe.

Replacing hardware on Windows has become wonderfully easy compared to NT times, so I just swapped out the cards and rebooted Windows Server 2022... but got a black screen. The system was visibly booting, and via RDP I saw it had configured a basic display device. I installed the latest Intel Arc drivers and restarted: still a blank display.

To make a long story short: while I managed to coerce the Arc into running via a DP port (with the normal KVM setup), a reboot typically broke that again. The only reliable way to get a display output was to use the HDMI port on both the Arc and the monitor.

There are plenty of messages on the internet that hint at massive problems with multi-monitor setups or increased problems with DP monitors, so I stopped searching for my mistake and concentrated on benching the card a bit.

AV1 encoding was easy with Handbrake 1.6.1, but the results were similar to most hardware encoders: their primary design goal is the best encoding performance at better than real time, while mine is really the smallest size at near-perfect quality, at much better than software encoding speeds (and energy requirements). Still, I got 25% of the original size at good-enough quality and 500 fps, so that was encouraging.
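For reference, this kind of hardware AV1 encode can also be scripted. The sketch below is not the poster's exact workflow: it shells out to ffmpeg's QSV AV1 encoder rather than the HandBrake GUI, and the file names, bitrate, and the assumption of an ffmpeg build with Intel QSV/oneVPL support are all illustrative.

```python
# Minimal sketch, not the poster's exact HandBrake workflow: drive the Arc's
# hardware AV1 encoder through ffmpeg and report the resulting size ratio.
# Assumes an ffmpeg build with Intel QSV/oneVPL support; the file names and
# bitrate are placeholders.
import subprocess
from pathlib import Path

src = Path("input_1080p.mkv")       # hypothetical source file
dst = Path("output_av1_qsv.mkv")

subprocess.run(
    [
        "ffmpeg", "-y",
        "-i", str(src),
        "-c:v", "av1_qsv",          # Arc's hardware AV1 encoder (ffmpeg 5.1+)
        "-b:v", "3500k",            # simple fixed-bitrate mode
        "-c:a", "copy",             # leave the audio track untouched
        str(dst),
    ],
    check=True,
)

print(f"Output is {dst.stat().st_size / src.stat().st_size:.0%} of the source size")
```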

While I have a huge gaming library for the kids, I rarely get to play and mostly just run the benchmarks to ensure my "clients" will be happy. And those benchmarks were rather encouraging, too; pretty much everything worked reasonably well at 4K, most importantly ARK: Survival Evolved, the only game I've spent serious time in with my daughter, trying to squeeze in a bit of "basic husbandry training" before bedtime. That's definitely not a 4K setup on the GTX 1080 Ti; honestly, even the RTX 3090 struggles to deliver the monitor's 60Hz constantly at epic settings on the Ryzen 9 5950X it sports below. The Arc won't quite do 60Hz in ARK, but it's much better than 30 and a clear knockout against the GTX 1080 Ti.

The various Tomb Raiders, Doom, Control, even Cyberpunk are all quite playable at 4K. Only Microsoft's Flight Simulator is a total dud again: after 190GB of downloads (quite a few options) it simply fails to launch because it insists on logging into Xbox, which isn't supported on Windows Server 2022 and only arrived with one update or another. I am quite sure I ran it on this machine previously, as well as about a dozen other flight simulators, which don't have such a silly restriction. I bought the game on Steam to run it via Steam, and Microsoft should either be kicked out or conform!

Well, AMD drivers, both for GPUs and mainboards, are also quite obnoxious about Windows Server editions, one of the few things that company is doing seriously wrong.

Generally I was quite happy with the performance, especially given the price. But with the DisplayPort outputs not working properly, it's a no-go, especially since there is no guarantee this can actually be fixed, e.g. via a driver or firmware update. Actually, since the card has been on the market for months now and this issue seems frequent enough to be noticed by QA engineers, if it isn't fixed by now I gauge the chances of it ever getting fixed as rather low.

So back it goes!

P.S. Thankfully the LED light show won't shine through my chassis to add to the general light pollution of my home lab. But I'd really rather have the default be off, instead of having to install a cable and software just to shut it up.
 

msroadkill612

An OT query from a newb, if I may...

As a preamble: it is a big ask for all those decades of games and rig configurations to behave.

Of course the big seller has had more problems reported and acted upon.

Hypothetically though, what if a configuration were so bog-standard that it enjoyed similar advantages to the popular Nvidia cards? I.e., what about AMD APUs?

They really are a poor man's 1080p gamer if set up right, rather than just a fill-in until I get a dGPU.

My guess is you have few driver problems gaming on an AMD APU.
 
AMD's best integrated GPU solution right now generally won't beat even an Arc A380. They were competitive with the DG1 card, last I checked, and were certainly a better experience at the time (2021), but A380 is a big jump in performance compared to DG1. A Ryzen 7 5700G (fastest AMD integrated solution, other than the Ryzen 6000 mobile chips) delivered just 29 fps in Borderlands 3 at 1080p medium. The Arc A380 by contrast delivers 69 fps in that same test.

Other results: 5700G got 129 fps in CSGO, A380 got 277 fps with the latest drivers. Flight Simulator 14 fps on 5700G vs. 39 fps on A380. Horizon Zero Dawn 24 fps on 5700G, 58 fps on A380. Red Dead Redemption 2 at 27 fps compared to 63 fps on A380. Last but not least, Watch Dogs Legion at 26 fps versus 60 fps with the A380.

So in effect an A380 delivers over twice the performance of the fastest Vega 8 integrated solution. AMD says the 6000-series APU graphics was up to 2X the 5000-series graphics, and it's more power efficient for sure. But sharing system bandwidth and various other factors means it's probably still slower than an A380. For a laptop, the Radeon 680M is a good option. Unfortunately, it's just not on any desktop solutions.
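To put the "sharing system bandwidth" point into rough numbers, here is a quick back-of-envelope comparison. The memory configurations below are assumptions chosen for illustration (dual-channel DDR4-3200 for the 5700G, LPDDR5-6400 for a 6000-series laptop APU, and the A380's 96-bit GDDR6), not measurements from the systems discussed above.

```python
# Back-of-envelope peak DRAM bandwidth comparison. The configurations are
# illustrative assumptions, and an APU additionally shares this bandwidth
# with the CPU cores, which a discrete card does not.
def peak_gb_per_s(mt_per_s: float, bus_bits: int) -> float:
    """Peak bandwidth in GB/s: transfers per second times bus width in bytes."""
    return mt_per_s * 1e6 * (bus_bits / 8) / 1e9

configs = {
    "Ryzen 7 5700G, dual-channel DDR4-3200": peak_gb_per_s(3200, 128),
    "Ryzen 6000 APU, LPDDR5-6400":           peak_gb_per_s(6400, 128),
    "Arc A380, 96-bit GDDR6 at 15.5 Gbps":   peak_gb_per_s(15500, 96),
}

for name, bandwidth in configs.items():
    print(f"{name}: ~{bandwidth:.0f} GB/s")
```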
 

msroadkill612

I am flattered by a reply from such an august personage :) ... I have enjoyed your stuff over the years. Thanks.

Sorry, I gave the wrong impression. I know a dGPU has unbeatable inherent strengths, but... two things.

Walmart wages would seem a fortune to probably over 5 of the 7 billion people on the planet (and they love gaming, from what I see). An APU that shares system memory and serves the whole range of other cost-justifying apps gives these people roughly their only chance of affording the hardware and power costs of home gaming.

Second, the key hurdle is slow system memory. Only AMD has a decent iGPU, but desktop Ryzen uses chiplets and has a crappy 12nm I/O die and memory controller, right?
Yes, but 5700G-type APUs are different, and this is little noted: they are monolithic 7nm, and so is the memory controller.

Instead of 3600 RAM (and an 1800 MHz Fabric clock), 4000 and 2000 or above are very doable, and the RAM isn't much dearer.

I found this very interesting:

sadly no link

(BTW, I see the new 4700G (a 5700G with less L3 cache, AFAICT) for ~$130 US on AliExpress.)

"You can push the ram to 4400 and FCLK to 2200. GPU clock to 2400.

Set the RAM to 1.49v

depending on whether you go all the way to the limit or not, you may have to manually set your CPU, SoC and GPU voltage to 1.29 and just lock it there.

You'll notice a small problem at the top end of the overclock using CPU and SoC voltage offsets and maintaining both sane voltages and stability. You can still use offsets barely at or just below these settings if you want to maintain automatic voltage reduction under low loads.

It's not such a big deal to just lock them compared to any other CPU I've overclocked... the CPU still throttles speed and is very low power draw anyway.

Then just reduce the CAS latency as far as it can go and still be stable.

End result is approx 30% +/- (depends greatly on which benchmark or game) higher iGPU performance than stock BIOS defaults (this is massive), and you're still only pulling around 100w on the CPU in total and only under very heavy loads.

The performance is always above a GT 1030 and sometimes approaches a GTX 1050.

Superposition benchmark at 1080p medium will show a GT 1030 at about 2500, the 5700G at 3400 and GTX 1050 OC at 4500.

Superposition at 720p low is approx 9700.

Since a GT 1030 card is minimum $120 and a 1050 used is $200+... you're getting a pretty darn good value in graphics performance built into that 8 core CPU.

It has a good chance of being a very long lived set of hardware that will be useful for something for a decade or so.

Rocket League was actually playable at 1440p... and on low settings 1080p was playable competitively.
"
 

abufrejoval

One issue I found with AMD APUs is that their iGPU part tends to suffer from far shorter support periods than their CPU part. I had Trinity and Richland APUs whose iGPUs were basically out of driver support while they were still selling these APUs as new hardware.

But yeah, I've been eying the 5700G for a long time as a low-power server part, but got a bit put off by the fact that it doesn't support ECC unless you buy the 5750G (which costs extra and lacks a fan) and that it only offers 8 PCIe 3.0 lanes. I eventually went with a 5800X3D for less than what the 5750G would have been and got 20 PCIe 4.0 lanes and free ECC at affordable DDR4-3200 prices.

I have a 5800U in a laptop, and I believe the iGPU is pretty near the same performance, because the main bottleneck is always DRAM bandwidth, not wattage. Yeah, it's pretty OK for an iGPU, much better than my Kaveri A10 was with similar DRAM bandwidth. But Intel's Xe in a Tiger Lake NUC with an i7-1165G7 actually does better.

I'd love to test Zen 4 APUs, but unless they pull something like HBM or extra on-die RAM channels like Apple, iGPU performance just won't ever be where I start enjoying myself.
 
Thinking about getting the Arc 770 for my wife's machine to satisfy my curiosity...
Biggest reason to get an Arc GPU is for the video encoding performance and quality, and AV1 support in particular. Nvidia may come out ahead in a few cases, like AV1 quality, but it's not by much and you need at least an RTX 4070 Ti (for now) to get the AV1 support. AMD trails video encoding quality in all codecs, even with the 7900 series, plus they also cost $900 or more. So for $350, you can get the A770 16GB and have performance roughly on par with the RTX 3060 but with better video codec support.
 

abufrejoval

That was my thinking precisely when I went ahead and bought an A770 to replace a GTX 1080 Ti Zotac Mini in my 24x7 server...

Before that I had also considered using a free M.2 slot to wire an A380 to it, given that I was really only interested in the AV1 codec part of it; M.2-to-PCIe x4 adapters can be had for very little, and bandwidth isn't a concern for media stuff.

Anyhow, the A770 had to go back over insurmountable issues with proper DisplayPort support, but the codec story remains.

These days reviewers mostly focus on the quality of real-time encoding, ideally even faster and with multiple streams, because that's what they need. For me it's always been about archiving my media in the densest possible format that retains both visual quality and the ability to play back on present or future devices. And there I had been expecting something like half the size without quality loss for each codec generation, perhaps even at the same encoding power budget.

Yeah, intellectually I know it's unrealistic, but that keeps being the message I read from press headlines! So you may be partially at fault...

I didn't have that much time to play around with AV1 before returning the card, but it seemed a bit similar to the H.265 vs. H.264 experience, where H.265 NVENC hardware encoding couldn't really make significant headway against H.264 software encoding, neither in quality nor in density.

It was similar with AV1: using a relatively low quality setting of 30 in Handbrake on a THD source resulted in files that seemed rather acceptable in visual quality but were actually a bit bigger than quality 21 with H.264 software encoding, while the speed difference wasn't outlandish, at least with a Ryzen 9 5950X running the software codec. Using the "magic" 3500 kbit/s setting for AV1 instead achieved a similar file size, while the quality setting within AV1 only seemed to change encoding speed, not the quality of the result.
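For anyone who wants to reproduce that kind of density comparison outside of a GUI, here is a rough sketch. It uses ffmpeg instead of HandBrake, and the encoder names and options (av1_qsv with a global quality value, a fixed-bitrate AV1 run, and libx264 at CRF 21) are assumptions about what a QSV-enabled ffmpeg build exposes, not the exact settings used in the post above.

```python
# Rough comparison harness: hardware AV1 at a quality setting, hardware AV1
# at a fixed bitrate, and software x264 at CRF 21, reporting size and time.
# Assumes an ffmpeg build with Intel QSV support; file names are placeholders.
import subprocess
import time
from pathlib import Path

SRC = Path("sample_thd_source.mkv")   # hypothetical test clip

JOBS = {
    "av1_qsv_q30":   ["-c:v", "av1_qsv", "-global_quality", "30"],
    "av1_qsv_3500k": ["-c:v", "av1_qsv", "-b:v", "3500k"],
    "x264_crf21":    ["-c:v", "libx264", "-preset", "slow", "-crf", "21"],
}

for name, video_args in JOBS.items():
    out = SRC.with_name(f"{SRC.stem}_{name}.mkv")
    start = time.time()
    subprocess.run(
        ["ffmpeg", "-y", "-i", str(SRC), *video_args, "-c:a", "copy", str(out)],
        check=True,
    )
    elapsed = time.time() - start
    ratio = out.stat().st_size / SRC.stat().st_size
    print(f"{name}: {ratio:.0%} of source size in {elapsed:.0f} s")
```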

When comparing hardware encoding generation by generation, the quality differences for a given bandwidth may be obvious.

When you add software encoding to the mixture, things get a little more complicated as software encoding tends to match hardware encoding of the next generation in quality and density. H.264 on a powerful modern multi-core CPU yields acceptable FPS and excellent quality, matching H.265 hardware. Likewise H.265 software encoding can achieve AV1 hardware encoding quality and density, but at time and energy expenses few may want to suffer. AV1 in software is realistically only a validation tool.

It would seem that hardware encoding pipelines are too fixed-function to allow significant trading of speed for density. In a way it stands to reason; you probably can't fit things like the really deep look-ahead of dual-pass encoding into a pipeline that's designed for real time and to support live streaming, live video splicing, and multi-resolution transcoding.

It doesn't mean such hardware could not be built, I guess, but the primarily density-focused approach only pays off for canned material with a large enough viewership to amortize the super-optimized software encoding cost. And with live-generated or individually mixed content pushing out pre-produced content, that's no longer cost-efficient.

AV1 may be a significant gain for real-time quality at lower bandwidth, but I can't quite see that as the average consumer benefit that you seem to suggest, as long as not everyone is live-streaming in THD or 4k.

When it comes to video editing and production, it would be interesting to know how well that works with AV1, whether it implies a full update of the toolset, and whether the casual open-source user base has to wait it out.

Perhaps that could be an article?
 
I'm actually working on a video encoding article. Your thoughts regarding AV1 and HEVC pretty much line up exactly with what I've seen. There are lots of ways of doing encoding, some sacrificing quality for speed, others opting for quality over speed and doing multiple passes. But generally speaking, I see very little difference between HEVC, AV1, and even VP9. The biggest benefit of AV1 is that it's royalty free, which is also the biggest drawback to HEVC. VP9 may also be royalty free, but it hasn't seen as much uptake in general, and AV1 seems to be the way forward.

Multi-pass always improves the quality but it's not really possible with real-time streaming. I mean, you could theoretically have a setup where the stream gets delayed by 60 seconds and the hardware blocks off something like 30 second chunks to analyze for where higher vs lower bitrate would be most beneficial and then do the final encode that way and send it out to the stream. But then people watching would always be a minute behind or whatever, and there are probably other difficulties in such an approach.
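Just to illustrate the shape of that idea, here is a toy sketch of the delayed, chunked two-pass pipeline described above. Everything in it (the chunk length, the fake complexity scores, the bitrate ladder) is made up for illustration; it is not how any real encoder or streaming service works.

```python
# Toy illustration of the delayed two-pass idea: buffer the live stream in
# 30-second chunks, run a cheap analysis pass to pick a per-chunk bitrate,
# then encode and emit roughly a minute behind real time. The complexity
# scores and bitrate ladder are made up purely to show the pipeline shape.
from collections import deque
from dataclasses import dataclass

CHUNK_SECONDS = 30          # analysis window per chunk
DELAY_CHUNKS = 2            # two buffered chunks ~= 60 s of added latency

@dataclass
class Chunk:
    index: int
    complexity: float       # stand-in for motion/detail found by the first pass

def pick_bitrate(complexity: float) -> int:
    """First-pass analysis result -> target bitrate (kbps) for the real encode."""
    if complexity < 0.3:
        return 2500         # static scenes need less
    if complexity < 0.7:
        return 4500
    return 8000             # fast motion gets the headroom

def encode_and_emit(chunk: Chunk) -> None:
    kbps = pick_bitrate(chunk.complexity)
    print(f"chunk {chunk.index}: encode at {kbps} kbps, "
          f"emitted ~{DELAY_CHUNKS * CHUNK_SECONDS} s behind live")

def run(chunks) -> None:
    buffer = deque()
    for chunk in chunks:            # chunks arrive in real time
        buffer.append(chunk)
        if len(buffer) > DELAY_CHUNKS:
            encode_and_emit(buffer.popleft())
    while buffer:                   # flush what is left when the stream ends
        encode_and_emit(buffer.popleft())

if __name__ == "__main__":
    run(Chunk(i, c) for i, c in enumerate([0.2, 0.8, 0.5, 0.9, 0.1]))
```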
 

abufrejoval

"Dedicated Graphics, Round Two" and they talk something about a product (or prototype) from 2020.

It seems everybody already forgot about the Intel i740.
That wasn't Intel's first graphics adapter, either. I seem to remember an 80286-class device that was basically designed as a windowed frame buffer assembly device: instead of using optimized software BitBlt loops on 32-bit CPUs to assemble windows into a screen layout on the physical frame buffer, it would allow every application to act as if it had a frame buffer of its own and then do the tile assembly on the fly, much like an MMU, but scan line by scan line.

In a way it felt a lot like the 8086 to 80286 transition where numerically consecutive 64k blocks could suddenly be anywhere in the vast 24-bit address space.

As far as I remember it didn't really like lots of windows that were overlapping with say 1 pixel of each layer showing, because it would obviously try to read-ahead at least full words (16-bit?), which could result in 16x16 bits being read for 16 pixels showing, so certain window layouts weren't possible in hardware.

I don't know if this silicon became even less popular than the iAPX 432; even first-generation SPARC CPUs did all of that in software with plenty of power to spare.

There is a lot of cringy hardware in Intel's closets!
 

abufrejoval

...
Multi-pass always improves the quality but it's not really possible with real-time streaming. I mean, you could theoretically have a setup where the stream gets delayed by 60 seconds and the hardware blocks off something like 30 second chunks to analyze for where higher vs lower bitrate would be most beneficial and then do the final encode that way and send it out to the stream.
...
That would be the sort of pipeline I'd have in mind for optimal-quality batch encoding, but I'm pretty sure you can't fit that into a single ASIC block that isn't completely custom. And there again, the trend towards real-time stitching and live-generated content kills any economy of scale that might justify it.

Well, the good news is I don't really need to buy new hardware, because AV1 hasn't suddenly become essential ;)

Looking forward to your article, especially if it has something on video editing with AV1, too.
 
Looking forward to your article, especially if it has something on video editing with AV1, too.
I hadn't really thought of doing that. But yeah, getting support for AV1 into some applications can be a royal pain. I haven't tried with Premiere Pro lately, but at one point the easiest solution was to just use ffmpeg to recode a video to a higher bitrate H.264 or HEVC format in order to get it to import.
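A minimal sketch of that workaround follows, assuming a stock ffmpeg build with libx265 and hypothetical file names; the exact container, codec, and quality target a given editor accepts will vary.

```python
# Sketch of the workaround described above: recode AV1 footage to a
# high-quality HEVC intermediate so an editor that can't import AV1 will
# accept it. File names are hypothetical; assumes ffmpeg with libx265.
import subprocess

src = "clip_av1.mkv"                  # AV1 source the editor refuses to import
dst = "clip_hevc_intermediate.mp4"    # friendlier high-quality intermediate

subprocess.run(
    [
        "ffmpeg", "-y", "-i", src,
        "-c:v", "libx265",
        "-crf", "16",                 # generous quality so the edit has headroom
        "-preset", "medium",
        "-c:a", "aac",                # re-encode audio to something MP4-friendly
        dst,
    ],
    check=True,
)
```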
 

xauberer

If DX9 compatibility is such a desirable feature, why not go back to benchmarking oldies like Fallout 3/New Vegas or the old Skyrim Legendary Edition? Those have to have some following among truly hardcore gamers of old. It would be interesting to see them run for at least an hour on Intel Arc without crashing, if we're talking about a good challenge.
 
