News Arc A770 Beats RTX 4090 In Early 8K AV1 Decoding Benchmarks

The small difference between Nvidia and Intel is not the story.

The real story is AMD: a $640 GPU that can't properly decode a 4K YouTube video, even though AMD claims it can decode AV1.
You don't think Nvidia's premier card getting beat by one that costs over a thousand dollars less is a story? I'd say you're unnecessarily trying to take only one piece of information from this; both can be a big deal.
 

I don't think it's a big deal. The 4090 is marketed as a gaming card, not a workstation card or a dedicated video encode/decode card.

The 6900 XT still has some driver issues. Its average framerate is good, but its 1% lows are bad.
 
  • Like
Reactions: artk2219
Last time I checked, neither was the A770. Yet here we are. Back to square one.

That said, Intel's cards do seem to be well suited for productivity tasks while still being decent enough for gaming, especially at 1440p and 4K. So if that is your aim, it looks like Intel might be the way to go, especially once you factor in cost.
 
  • Like
Reactions: artk2219
Being able to watch high-quality video from YouTube and Netflix on my GPU to save my battery or my power bill isn't a "workstation" use case, and it should be prioritised on all new GPUs!

Please do Linux too! It's not quite there yet when it comes to hardware video support, especially in browsers.
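
For anyone on Linux who wants to check whether their setup can actually do hardware AV1 decode, here's a minimal Python sketch. It is not from the article; it assumes `vainfo` (from the libva-utils package) is installed and that the GPU driver exposes VA-API, and it just looks for an AV1 decode entrypoint in vainfo's output:

```python
# Minimal sketch: check whether the Linux VA-API driver exposes hardware AV1
# decode. Assumes `vainfo` (from libva-utils) is installed and the GPU driver
# (e.g. Intel's media driver or Mesa for AMD) is set up.
import subprocess

def has_av1_decode() -> bool:
    try:
        out = subprocess.run(
            ["vainfo"], capture_output=True, text=True, check=True
        ).stdout
    except (FileNotFoundError, subprocess.CalledProcessError):
        return False  # vainfo missing or VA-API not working at all
    # vainfo lists supported profile/entrypoint pairs, e.g.
    # "VAProfileAV1Profile0 : VAEntrypointVLD" (VLD = hardware decode)
    return any(
        "AV1" in line and "VAEntrypointVLD" in line
        for line in out.splitlines()
    )

if __name__ == "__main__":
    print("Hardware AV1 decode via VA-API:", "yes" if has_av1_decode() else "no")
```

Whether the browser then actually uses it is a separate fight (flags, VA-API support in the browser build, etc.), but at least this tells you the driver side is there.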
 
  • Like
Reactions: artk2219
I have to say I'm a little surprised by these results, but I wouldn't get too upset about them. Unless you're in that 1% of people out there (probably less than 1%) who actually have an 8K monitor, I wouldn't be too concerned.
Nvidia probably just needs to update its software, and hopefully AMD has fixed its hardware/software for this new generation.
At least Intel has something to be proud of now 😅😅
I still wouldn't buy one but for some people I'm sure it would work just fine.
 
  • Like
Reactions: artk2219
AV1 decoding has been present since Ampere for Nvidia, though I'm happy to see Intel ensuring it works well.

What I'm really curious about is encoding. The 4090 and 4080 16GB have dual encoders on the die; I wonder how they do vs Intel's encoder.
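
The crude way I'd compare them is just timing the same clip through each hardware encoder via ffmpeg. A rough, hypothetical sketch, assuming an ffmpeg build that includes both av1_nvenc (Ada) and av1_qsv (Arc) and a local test file called input.mp4 (both assumptions, adjust to your setup):

```python
# Rough sketch for comparing hardware AV1 encoders with ffmpeg. Assumes an
# ffmpeg build with av1_nvenc and av1_qsv available and a test clip input.mp4.
import subprocess, time

ENCODERS = ["av1_nvenc", "av1_qsv"]  # NVENC on RTX 40-series, QSV on Arc

def time_encode(encoder: str, src: str = "input.mp4") -> float:
    cmd = [
        "ffmpeg", "-y", "-i", src,
        "-c:v", encoder, "-b:v", "8M",   # fixed bitrate so outputs are comparable
        "-an", f"out_{encoder}.mkv",
    ]
    start = time.perf_counter()
    subprocess.run(cmd, check=True, capture_output=True)
    return time.perf_counter() - start

if __name__ == "__main__":
    for enc in ENCODERS:
        try:
            print(f"{enc}: {time_encode(enc):.1f} s")
        except (FileNotFoundError, subprocess.CalledProcessError):
            print(f"{enc}: not available in this ffmpeg build / on this GPU")
```

Raw speed isn't the whole story either; quality per bitrate matters just as much for judging the encoders.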
 
  • Like
Reactions: cyrusfox
I wonder if maybe an OpenCL version of an AV1 decoder/encoder might be a better idea.
My understanding is that OpenCL is broken; very few use it compared to CUDA (CUDA is much faster and far more widely used). CUDA is so dominant that Intel is opting to emulate it in its own GPU endeavors (ZLUDA) just to stay relevant to current GPU workloads. But a fixed-function hardware accelerator will always beat software encoding. An OpenCL implementation would be great, but right now I think it's more likely that A310 cards will flood the market and remove the need, as even the fixed-function encoder in those cards is, from what I've heard, pretty decent.
 
Exactly, CPU encoding takes forever on AV1. I'm really looking to see how they all stack up on encoding (AMD can't do this and has no plans to add it???)
I've tested software encoding with the CPU using both my laptop (5900HX) and my HTPC (5800X3D). I have to say AV1 does put the 8c/16t to good use for sure. It does cause stutters in my case, but I can dial the settings to a point where it's less noticeable and the encoding doesn't look like garbage. I have samples if you want to have a look. Hardware AV1 is at the same point now as when H.264 wasn't fully HW "accelerated" and we still had 4c/8t CPUs at the high end (of mainstream). Using H.264 via the CPU, even with fast/normal presets, the CPU is fine and doesn't drop frames (in my experience), but you'd always want the GPU/iGPU/dedicated hardware to handle most of it because of the danger of stutters anyway.
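
For reference, "dialing the settings" in my case just means picking a faster SVT-AV1 preset and a sane CRF. A minimal sketch of that via ffmpeg (assuming it's built with libsvtav1; the file names and the preset/CRF values are only illustrative starting points):

```python
# Minimal sketch of software (CPU) AV1 encoding with SVT-AV1 via ffmpeg.
# Assumes ffmpeg is built with libsvtav1; file names are made up.
import subprocess

def encode_av1_cpu(src: str, dst: str, preset: int = 8, crf: int = 32) -> None:
    # Higher preset = faster and lighter on the CPU (less chance of stutter
    # while recording); lower preset = slower but better compression efficiency.
    subprocess.run(
        [
            "ffmpeg", "-y", "-i", src,
            "-c:v", "libsvtav1",
            "-preset", str(preset),
            "-crf", str(crf),
            "-c:a", "copy",
            dst,
        ],
        check=True,
    )

if __name__ == "__main__":
    encode_av1_cpu("gameplay.mkv", "gameplay_av1.mkv")  # hypothetical files
```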

EDIT: OpenCL is not broken... It's just not as widely used as CUDA in, well, everything.

Regards.
 
  • Like
Reactions: cyrusfox