Regarding 8K AV1 decoding, Intel's Arc A770 may be better than Nvidia's GeForce RTX 4090.
Arc A770 Beats RTX 4090 In Early 8K AV1 Decoding Benchmarks : Read more
> The small difference between Nvidia and Intel is not the story. The real story is AMD with a $640 GPU being unable to properly decode a 4K YouTube video, even though AMD claims it can decode AV1.

You don't think Nvidia's premier card getting beat by one that costs over a thousand dollars less is a story? I'd say you're unnecessarily trying to take only one piece of information from this; both can be a big deal.
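Whether a card's AV1 decode is actually being used (rather than a CPU fallback) can be sanity-checked outside the browser. This is a minimal sketch assuming ffmpeg is installed; `clip.mkv` is a hypothetical local AV1 sample, and `vaapi` stands in for whatever acceleration backend your platform uses:

```shell
# List the hardware acceleration backends this ffmpeg build supports
# (e.g. vaapi, d3d11va, cuda). No relevant entry means software decode only.
ffmpeg -hide_banner -hwaccels

# Try to decode a local AV1 sample through VA-API, discarding the output.
# If the GPU's AV1 decoder works, this completes without decode errors.
ffmpeg -hide_banner -hwaccel vaapi -i clip.mkv -f null -
```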
> Don't think it's a big deal. The 4090 is marketed as a gaming card, not a workstation card nor a dedicated video encoder/decoder card. The 6900 XT still has some driver issues: its average is good, but its 1% lows are bad.

Last time I checked, neither was the A770. Yet here we are. Back to square one.

Since when is watching YouTube a workstation task?
> What I'm really curious about is encoding.

Exactly, CPU encoding takes forever on AV1; really looking forward to seeing how they all stack up on encoding (AMD can't do this and has no plans to add it???).
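For a concrete sense of the CPU-versus-hardware encoding gap being discussed, here is a command-line sketch. It assumes an ffmpeg build with libsvtav1 and a QSV-capable Intel Arc card; `input.mp4` and the quality values are hypothetical:

```shell
# Software AV1 encode on the CPU via SVT-AV1. Slow: lower presets
# look better but take much longer.
ffmpeg -i input.mp4 -c:v libsvtav1 -preset 8 -crf 35 out_sw.mkv

# Hardware AV1 encode on an Intel Arc GPU via Quick Sync (av1_qsv),
# typically far faster than the CPU path at comparable settings.
ffmpeg -i input.mp4 -c:v av1_qsv -preset medium -global_quality 35 out_hw.mkv
```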
> Exactly, CPU encoding takes forever on AV1...

I wonder if maybe an OpenCL version of an AV1 decoder/encoder might be a better idea.
> I wonder if maybe an OpenCL version of an AV1 decoder/encoder might be a better idea.

My understanding is OpenCL is broken; very few use it compared to CUDA (CUDA is much faster and widely used). CUDA is so dominant that Intel is opting to emulate it in their own GPU endeavors (ZLUDA) to stay relevant to current GPU workloads. But a fixed hardware accelerator will always beat software encoding. An OpenCL implementation would be great, but currently I think it's more likely that A310 cards will flood the market and remove the need, as even the fixed-function encoder in those cards is, from what I've heard, pretty decent.
> Exactly, CPU encoding takes forever on AV1...

I've tested software encoding on the CPU using both my laptop (5900HX) and HTPC (5800X3D). I have to say AV1 does put the 8c/16t to good use for sure. It does cause stutters in my case, but I can dial in the settings to a point where they're less noticeable and the encode doesn't look like garbage. I have samples if you want to have a look. Hardware AV1 is at the same point in time as when H264 wasn't fully HW "accelerated" and we still had 4c/8t CPUs on the high end (of mainstream). Using H264 via the CPU, even with fast/normal presets, the CPU is OK and doesn't drop frames (in my experience), but you'd always want the GPU/iGPU/dedicated HW to manage most of it because of the danger of stutters anyway.
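The "dial in the settings" trade-off described above maps onto SVT-AV1's preset scale. A sketch, assuming ffmpeg with libsvtav1; `capture.mkv` and the preset/CRF values are illustrative, not a recommendation:

```shell
# Higher presets encode much faster at some quality cost; useful when
# the CPU also has to keep other work running smoothly.
ffmpeg -i capture.mkv -c:v libsvtav1 -preset 12 -crf 30 fast.mkv

# Lower presets squeeze more quality per bit but can saturate all
# threads of an 8c/16t CPU, which is where the stutters come from.
ffmpeg -i capture.mkv -c:v libsvtav1 -preset 4 -crf 30 slow.mkv
```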