A Ryzen 5 5600U plays all the old games I've thrown at it under Linux using Steam's Proton or Wine; most of them work better with "emulation" under Linux than under Windows 10.

The reason I stick with Nvidia is because Nvidia has native DX9.0 support. AMD and Intel do not.
It has nothing to do with performance for me.
There are so many older DX9.0 titles you can pick up for $1-$5. They always work great on Nvidia; on AMD it is hit or miss. It has nothing to do with performance: many are locked at 60fps anyway, and the ones that aren't run at 100+ fps on potato hardware. It is about compatibility.
The Nvidia control panel allows you to set V-sync per game, per application; it works wonderfully on older DX9.0 titles as well.
And it doesn't really matter to me that Intel gets better at DX9.0 performance. It is not about that; it is still not native support. Why would I pick a translation layer over the actual native support Nvidia offers? There's no reason to pick Intel.
This whole "drivers are getting better, just wait" situation is the exact same reason I switched from AMD to Nvidia back in the day and never went back. I really loathed AMD driver issues and I now base my GPU decisions on driver quality first, and performance second. And Nvidia wins on this every single time. And I am not alone in this, AMD and Intel tend to offer more FPS per $, but Nvidia wins out on stability, compatibility, NVENC, CUDA, etc, and Nvidia dominates as a result.
I prefer Win 9x over Win XP, much better driver performance... but the 980 is too new for the old Detonators, which had better performance than what they did with the XP drivers and later with the 9x drivers as well :/ (well, on one hand they were improving geometry, but on the other hand who cared if performance was lost)
Oops, you're right. It's because my chart template normally has AMD and Nvidia GPUs in it, and for color blind reasons I try to avoid doing green and red. So Nvidia is blue with black, AMD is red with black, and on Intel I reversed the colors and made it black with blue.

On the performance bar charts the colors for average and 99% frame rates are switched on most graphs.
An OT query from a newb if i may...
As a preamble - it is a big ask for all those decades of games & rig configurations to behave.
Of course the big seller has had more problems reported & acted upon.
Hypothetically though, what if a configuration was so bog standard that it enjoyed similar advantages to popular Nvidia - i.e., what about AMD APUs?
They really are a poor man's 1080p gamer if set up right, instead of just a fill-in till I get a dGPU.
My guess is you have few driver problems gaming on an AMD APU.
I am flattered by a reply from such an august personage... I have enjoyed your stuff over the years. Thanks.

AMD's best integrated GPU solution right now generally won't beat even an Arc A380. They were competitive with the DG1 card, last I checked, and were certainly a better experience at the time (2021), but the A380 is a big jump in performance compared to DG1. A Ryzen 7 5700G (the fastest AMD integrated solution, other than the Ryzen 6000 mobile chips) delivered just 29 fps in Borderlands 3 at 1080p medium. The Arc A380 by contrast delivers 69 fps in that same test.
Other results: the 5700G got 129 fps in CS:GO, while the A380 got 277 fps with the latest drivers. Flight Simulator: 14 fps on the 5700G vs. 39 fps on the A380. Horizon Zero Dawn: 24 fps on the 5700G, 58 fps on the A380. Red Dead Redemption 2: 27 fps compared to 63 fps on the A380. Last but not least, Watch Dogs Legion: 26 fps versus 60 fps with the A380.
So in effect an A380 delivers over twice the performance of the fastest Vega 8 integrated solution. AMD says the 6000-series APU graphics was up to 2X the 5000-series graphics, and it's more power efficient for sure. But sharing system bandwidth and various other factors means it's probably still slower than an A380. For a laptop, the Radeon 680M is a good option. Unfortunately, it's just not on any desktop solutions.
Let us know if the ARC GPU puts your marriage at risk, LOL.

Thinking about getting the Arc 770 for my wife's machine to satisfy my curiosity...
"Dedicated Graphics, Round Two" and they talk something about a product (or prototype) from 2020.
It seems everybody already forgot about the Intel i740.
Biggest reason to get an Arc GPU is for the video encoding performance and quality, and AV1 support in particular. Nvidia may come out ahead in a few cases, like AV1 quality, but it's not by much and you need at least an RTX 4070 Ti (for now) to get the AV1 support. AMD trails video encoding quality in all codecs, even with the 7900 series, plus they also cost $900 or more. So for $350, you can get the A770 16GB and have performance roughly on par with the RTX 3060 but with better video codec support.
That was my thinking precisely when I went ahead and bought an A770 to replace a GTX1080ti Zotac mini on my 24x7 server...
I'm actually working on a video encoding article. Your thoughts regarding AV1 and HEVC pretty much line up exactly with what I've seen. There are lots of ways of doing encoding, some sacrificing quality for speed, others opting for quality over speed and doing multiple passes. But generally speaking, I see very little difference between HEVC, AV1, and even VP9. The biggest benefit of AV1 is that it's royalty free, which is also the biggest drawback to HEVC. VP9 may also be royalty free, but it hasn't seen as much uptake in general, and AV1 seems to be the way forward.
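For anyone who wants to run a rough comparison like that at home, a minimal sketch in Python driving ffmpeg might look like the following. It assumes an ffmpeg build with libx264, libx265, libsvtav1, and libvpx on the PATH; the input file name and CRF numbers are placeholders, and equal CRF values do not mean equal quality across codecs.

# Rough codec size comparison: encode one clip with several software encoders
# and print the resulting file sizes. Encoder and option availability depends
# on the ffmpeg build; "input.mp4" and the CRF values are placeholders.
import subprocess
from pathlib import Path

SOURCE = "input.mp4"  # hypothetical test clip

ENCODES = {
    "x264":   ["-c:v", "libx264",    "-crf", "21", "-preset", "medium"],
    "x265":   ["-c:v", "libx265",    "-crf", "23", "-preset", "medium"],
    "svtav1": ["-c:v", "libsvtav1",  "-crf", "30", "-preset", "8"],
    "vp9":    ["-c:v", "libvpx-vp9", "-crf", "31", "-b:v", "0"],
}

for name, args in ENCODES.items():
    out = f"out_{name}.mkv"
    subprocess.run(["ffmpeg", "-y", "-i", SOURCE, *args, "-an", out], check=True)
    print(f"{name}: {Path(out).stat().st_size / 1e6:.1f} MB")

File size alone obviously isn't a quality metric; something like VMAF or a plain visual check is still needed on top.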
Before that I had also considered using a free M.2 slot to wire an A380 to it, given that I was only really interested in the AV1 codec part of it; M.2 to PCIe x4 adapters can be had for very little, and bandwidth isn't a concern for media stuff.
Anyhow, the A770 had to go back due to insurmountable issues with proper DisplayPort support, but the codec story remains.
These days reviewers mostly focus on the quality of real-time encoding, ideally even faster and multiple streams, because that's what they need. For me it's always been about archiving my media in the densest possible format that retains both visual quality and the ability to play back on present or future devices. And there I had been expecting something like half the size without quality loss for each codec generation, perhaps even at the same encoding power budget.
Yeah, intellectually I know it's unrealistic, but that keeps being the message I read from press headlines! So you may be partially at fault...
I didn't have that much time to play around with AV1 before returning the card, but it seemed a bit similar to the H.265 vs. H.264 experience, where H.265 NVENC hardware encoding couldn't really make any significant headway against H.264 software encoding, neither in quality nor in density.
It was similar with AV1: using a relatively low quality setting of 30 in Handbrake on a THD source resulted in files that seemed rather acceptable in visual quality but were actually a bit bigger than quality 21 with H.264 software encoding, while the speed difference wasn't outlandish, at least with a Ryzen 9 5950X running the software codec. Using the "magic" 3500 mbit/s setting for AV1 instead achieved a similar file size, while the quality setting within AV1 only seemed to change encoding speed, not the quality of the result.
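If anyone wants to reproduce that sort of test outside Handbrake, here is a rough ffmpeg sketch using the software SVT-AV1 encoder for portability. The file name and the numbers are placeholders; on an Arc card with a QSV-enabled ffmpeg build, av1_qsv would be the hardware counterpart, though its rate-control options differ by build.

# Quality-targeted vs. bitrate-targeted AV1 encodes of the same source.
# "input.mp4" and the settings below are placeholders, not recommendations.
import subprocess

SOURCE = "input.mp4"

# Constant-quality mode: the encoder spends bits where it thinks they matter.
subprocess.run(["ffmpeg", "-y", "-i", SOURCE,
                "-c:v", "libsvtav1", "-crf", "30", "-preset", "8",
                "-an", "av1_crf30.mkv"], check=True)

# Bitrate-targeted mode: file size becomes predictable, quality floats instead.
subprocess.run(["ffmpeg", "-y", "-i", SOURCE,
                "-c:v", "libsvtav1", "-b:v", "3500k", "-preset", "8",
                "-an", "av1_3500k.mkv"], check=True)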
When comparing hardware encoding generation by generation, the quality differences for a given bandwidth may be obvious.
When you add software encoding to the mixture, things get a little more complicated, as software encoding tends to match the hardware encoding of the next generation in quality and density. H.264 on a powerful modern multi-core CPU yields acceptable FPS and excellent quality, matching H.265 hardware. Likewise, H.265 software encoding can achieve AV1 hardware encoding quality and density, but at a cost in time and energy that few may want to pay. AV1 in software is realistically only a validation tool.
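A quick way to see that trade-off on your own material is to time a software encode against a hardware one, as in the sketch below. It assumes an ffmpeg build with libx265 and NVENC plus an Nvidia card; swap hevc_nvenc for hevc_qsv or hevc_amf on other hardware, and note that the settings are illustrative, not quality-matched.

# Software vs. hardware HEVC on the same clip, reporting wall-clock time and
# output size. Encoder names and options depend on the ffmpeg build and GPU;
# "input.mp4" is a placeholder.
import subprocess, time
from pathlib import Path

SOURCE = "input.mp4"

RUNS = {
    "x265_sw":    ["-c:v", "libx265", "-crf", "23", "-preset", "slow"],
    "hevc_nvenc": ["-c:v", "hevc_nvenc", "-preset", "p5",
                   "-rc", "vbr", "-cq", "23", "-b:v", "0"],
}

for name, args in RUNS.items():
    out = f"{name}.mkv"
    start = time.perf_counter()
    subprocess.run(["ffmpeg", "-y", "-i", SOURCE, *args, "-an", out], check=True)
    elapsed = time.perf_counter() - start
    print(f"{name}: {elapsed:.0f} s, {Path(out).stat().st_size / 1e6:.1f} MB")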
It would seem that hardware encoding pipelines are too fixed-function to allow significant trading of speed for density. In a way it stands to reason; you probably can't fit things like the really deep look-ahead of two-pass encoding into a pipeline that's designed for real-time use and to support live streaming, live video splicing, and multi-resolution transcoding.
It doesn't mean such hardware could not be built, I guess, but the primarily density-focused approach only pays off for canned material that has a large viewership to amortize the super-optimized software encoding cost. And with live-generated or individually mixed content pushing out pre-produced content, that's no longer cost efficient.
AV1 may be a significant gain for real-time quality at lower bandwidth, but I can't quite see that as the average consumer benefit that you seem to suggest, as long as not everyone is live-streaming in THD or 4k.
When it comes to video editing and production, it would be interesting to know how well that works with AV1, whether it implies a full update of the toolset, and whether the casual open-source user base will have to wait it out.
Perhaps that could be an article?
That wasn't Intel's first graphics adapter, either. I do seem to remember an 80286-class device, which was basically designed as a windowed frame buffer assembly device: instead of using optimized software BitBlt loops on 32-bit CPUs to assemble windows into a screen layout on the physical frame buffer, this would allow every application to act as if it had a frame buffer of its own and then do the tile assembly on the fly, much like an MMU, but scan line by scan line.
That would be the sort of pipeline I'd have in mind for optimal-quality batch encoding, but I'm pretty sure you can't fit that into a single ASIC block that isn't completely custom. And there again, the trend towards real-time stitching and live-generated content kills any economy of scale that such hardware might get...
Multi-pass always improves the quality, but it's not really possible with real-time streaming. I mean, you could theoretically have a setup where the stream gets delayed by 60 seconds and the hardware blocks off something like 30-second chunks to analyze where higher vs. lower bitrate would be most beneficial, then does the final encode that way and sends it out to the stream.
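Purely as a sketch of that idea, with hypothetical stand-ins for the analysis pass and the encoder (nothing here calls a real codec), it could look roughly like this:

# Toy sketch of the "delay the stream, analyze a chunk, then encode it" idea.
# analyze_complexity() and encode_chunk() are hypothetical stand-ins; frames
# are assumed to be dicts with "motion" and "fps" keys for illustration only.
from collections import deque

CHUNK_SECONDS = 30       # analyze/encode in ~30 second chunks
BASE_BITRATE = 6_000     # kbps, arbitrary baseline
MIN_BITRATE = 3_000
MAX_BITRATE = 10_000

def analyze_complexity(frames):
    """Stand-in for a cheap first pass: 0.0 = static scene, 1.0 = high motion."""
    return sum(f["motion"] for f in frames) / len(frames)

def encode_chunk(frames, bitrate_kbps):
    """Stand-in for handing the chunk to a real (hardware) encoder session."""
    print(f"encoding {len(frames)} frames at {bitrate_kbps} kbps")

def run(stream):
    buffer = deque()
    for frame in stream:                      # frames arrive in real time
        buffer.append(frame)
        if len(buffer) >= CHUNK_SECONDS * frame["fps"]:
            chunk = [buffer.popleft() for _ in range(len(buffer))]
            c = analyze_complexity(chunk)                  # cheap analysis pass
            bitrate = int(BASE_BITRATE * (0.6 + 0.8 * c))  # arbitrary mapping
            bitrate = max(MIN_BITRATE, min(MAX_BITRATE, bitrate))
            encode_chunk(chunk, bitrate)      # final encode, ~one chunk late

# e.g. run({"motion": 0.4, "fps": 60} for _ in range(60 * 90))

The point is just the shape of it: buffer roughly a chunk's worth of frames, do a cheap complexity pass, pick a bitrate, then hand the chunk to the real encoder about one chunk behind real time.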
...
I hadn't really thought of doing that. But yeah, getting support for AV1 into some applications can be a royal pain. I haven't tried with Premiere Pro lately, but at one point the easiest solution was to just use ffmpeg to recode a video to a higher bitrate H.264 or HEVC format in order to get it to import (a rough example of that recode is below).

Looking forward to your article, especially if it has something on video editing with AV1, too.
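For reference, a minimal sketch of that recode, assuming ffmpeg with libx264 on the PATH; the file names are placeholders and CRF 16 / preset slow are just a high-quality starting point:

# Recode an AV1 file into a high-bitrate H.264 intermediate that older
# editing tools will import; audio is copied through untouched.
import subprocess

subprocess.run(["ffmpeg", "-i", "clip_av1.mkv",
                "-c:v", "libx264", "-crf", "16", "-preset", "slow",
                "-pix_fmt", "yuv420p",   # keeps finicky importers happy
                "-c:a", "copy", "clip_h264.mp4"], check=True)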
LoLz, well she's using the integrated graphics on her 10900k, so she's already using Intel graphics. ; )
Regards.