News Deep Render Says Its AI Video Compression Tech Will 'Save the Internet'

This is great! One of the top 2 things I hate about streaming media is the compression. When you've got a big screen, 4K projector, good refresh rate, and serious sound system, compression becomes more apparent. This is why I prefer to buy physical media despite how annoying 4K Blu-rays are to play.

My hope is that this new AI solution improves the quality of media. I worry that a new compression method will be used to further shrink files instead of improving the fidelity of streamed content at the same file size as older compression methods.
 
I play 1080p content at standard refresh rates and the limited colors already bug me. Just look at a dark scene with fog, and you can see the few shades of gray in the video. The screen can handle far more colors than that, but the compression reduces the number of color values. I think a more efficient compression method will lead to better quality, since today's streaming speeds are already good enough: more colors and fewer artifacts at the same (or better) speeds.
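For illustration, here is a minimal sketch (Python, with made-up values, not tied to any particular codec) of how coarse quantization collapses a smooth dark gradient into just a few visible bands. Real codecs quantize transform coefficients rather than raw pixel values, but the banding effect in dark, foggy scenes is similar:

```python
import numpy as np

# A smooth dark gradient: 256 pixels ramping from black up to a dim gray.
gradient = np.linspace(0.0, 0.15, 256)   # normalized luma values in 0..1

def quantize(values, levels):
    """Snap each value to the nearest of `levels` evenly spaced steps."""
    step = 1.0 / (levels - 1)
    return np.round(values / step) * step

fine = quantize(gradient, 256)    # 8-bit precision: dozens of distinct shades in this range
coarse = quantize(gradient, 32)   # heavy quantization, as with aggressive compression

print("distinct shades at 8-bit:", len(np.unique(fine)))       # ~39
print("distinct shades when coarse:", len(np.unique(coarse)))  # ~6 -> visible banding
```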

This is one use of AI that I think is awesome, and may indeed lead to a big step forward in tech.
 
New video format standards should include a neural network specification to upscale, interpolate, and guess frames, so that only the parts the neural network cannot hallucinate need to be transmitted.
 
H.264/AVC is how old now? Its Wikipedia article states its specifications were published 19 years ago. Just because something is commonly used does not mean it is, or should be, the de facto standard of the industry. Its successor, H.265/HEVC, has been around for 10 years now, and its royalty-free competition surpassed it ages ago. Directly competing with H.264/AVC were VP7 (2005) and VP8 (2008); in 2013 came VP9 and H.265/HEVC. VP9 is already on its way out as support for its successor, AV1, becomes more prevalent. And, as stated by @usertests, the successors to AV1 and H.265/HEVC have been in development for some time now.

Comparing a next-gen video codec to a last-gen codec is like comparing a brand new NVMe SSD to any ol' SATA HDD. Though it feels more like comparing an NVMe SSD to an IDE HDD.

This is great! One of the top 2 things I hate about streaming media is the compression. When you've got a big screen, 4K projector, good refresh rate, and serious sound system, compression becomes more apparent. This is why I prefer to buy physical media despite how annoying 4K Blu-rays are to play.

My hope is that this new AI solution improves the quality of media. I worry that a new compression method will be used to further shrink files instead of improving the fidelity of streamed content at the same file size as older compression methods.
You say that as if Blu-ray can only store lossless multimedia. You don't mean it that way, right? It's just an example of streamed (re-encoded) multimedia having its quality compromised for data savings, as opposed to stored multimedia, which is typically encoded more slowly for better quality and greater storage efficiency, right?
 
New video format standards should include a neural network specification to upscale, interpolate, and guess frames, so that only the parts the neural network cannot hallucinate need to be transmitted.
So you're saying these features should be built into the decoder rather than the encoder and its video data output?
That would restrict the decoder to hardware that supports the necessary instructions. That's partly why H.264 is still so prevalent despite being 20 years old: compatibility.
 
So you're saying these features should be built into the decoder rather than the encoder and its video data output?
Both encoder and decoder need the NN.

The encoder uses the NN to get the predicted frame, and then encodes only the prediction error (the difference Predicted - ActualFrame).

The decoder uses the same NN to predict the frame and then applies the transmitted error to correct it.
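Roughly, the scheme described above might look like this sketch (Python; the predictor here is a trivial stand-in for the neural network, and a real codec would also quantize and entropy-code the residual):

```python
import numpy as np

def predict_next_frame(previous_frame):
    """Stand-in for the shared neural network.
    Encoder and decoder run this exact same function, so their predictions match.
    (A trivial 'repeat the last frame' predictor is used here for illustration.)"""
    return previous_frame.copy()

def encode(previous_frame, actual_frame):
    """Encoder: predict the frame, then keep only the prediction error (residual)."""
    predicted = predict_next_frame(previous_frame)
    return actual_frame - predicted   # signed difference; small if the prediction is good

def decode(previous_frame, residual):
    """Decoder: run the same predictor, then apply the transmitted correction."""
    predicted = predict_next_frame(previous_frame)
    return predicted + residual

# Tiny demo with 4x4 "frames" of luma values.
rng = np.random.default_rng(0)
frame0 = rng.integers(0, 256, size=(4, 4))
frame1 = frame0 + rng.integers(-3, 4, size=(4, 4))   # next frame differs only slightly

sent = encode(frame0, frame1)
reconstructed = decode(frame0, sent)
assert np.array_equal(reconstructed, frame1)
print("residual range:", sent.min(), "to", sent.max())  # small values are cheap to transmit
```

Because the encoder and decoder share the exact same predictor, they stay in sync, and the only data that has to go over the wire is the residual.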

Of course, the network would need to be kept small enough, even if that makes it suboptimal, to run on a wide range of hardware, including phones.

Also, all hardware should be designed to run NNs efficiently, which it will need to do anyway, because AI is getting everywhere.

Any device, including phones, already has graphics hardware capable of accelerating NNs.

YouTube already encodes video in different formats for different platforms anyway.
 