Question: 4K HDR data rate math doesn't add up

You need to increase the resolution to account for blanking intervals, and you also need to multiply by a factor to account for line code.

D = (V + V_blank) × (H + H_blank) × F × Cd
B = D × E

Where:
  • B is the bit rate of the transmission in bits per second
  • D is the data rate in bits per second
  • E is the overhead factor of the line code
  • V and H are the vertical and horizontal pixel counts
  • V_blank and H_blank are the vertical and horizontal blanking intervals in pixels
  • F is the refresh frequency in Hz
  • Cd is the color depth in bits per pixel
In this case:

D = (3840 + 560) px × (2160 + 90) px × 60 Hz × 30 bit/px
= 4400 px × 2250 px × 60 Hz × 30 bit/px
= 17,820,000,000 bit/s
= 17.82 Gbit/s

This is the number of bits per second required for the video/audio stream. The blanking intervals for 4K 60 Hz of 560 px and 90 px are defined by the CTA-861 standard, VIC 97. This is the standard used for TVs, but monitors may use different timings (such as those derived from the CVT-RB formula).
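
Here is a minimal Python sketch of the D calculation above, using the CTA-861 VIC 97 blanking; the function and variable names are just for illustration:

```python
def data_rate(h_active, v_active, h_blank, v_blank, refresh_hz, bits_per_px):
    """D = (H + H_blank) x (V + V_blank) x F x Cd, in bit/s."""
    return (h_active + h_blank) * (v_active + v_blank) * refresh_hz * bits_per_px

# 3840x2160 @ 60 Hz, 30 bit/px (10 bpc RGB / 4:4:4), CTA-861 VIC 97 blanking
d = data_rate(3840, 2160, 560, 90, 60, 30)
print(d / 1e9)  # 17.82 (Gbit/s)
```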

HDMI (prior to version 2.1) uses TMDS encoding, which breaks the data stream into 8-bit blocks and transforms them into 10-bit sequences for transmission. Therefore, the bit rate required for transmission is 10/8 or 1.25× higher than the data rate. Different interfaces have different line codes, so the factor may not be the same. This is why it is best to calculate data rates and compare them to the maximum data rate of each interface, rather than calculating bit rate, since the bit rate will change for each interface. But regardless:

B = (D × E) = (17.82 Gbit/s × 1.25)
= 22.28 Gbit/s

Not sure where 22.98 Gbit/s comes from, but it's fairly close to the calculated figure anyway.
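
Spelled out in code, as a quick check (d is the figure calculated above):

```python
d = 17.82e9              # data rate from the calculation above, bit/s
E_tmds = 10 / 8          # 8b/10b TMDS line code overhead (HDMI before 2.1, DVI)
print(d * E_tmds / 1e9)  # 22.275 -> ~22.28 Gbit/s
```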
 
B = (D × E) = (17.82 Gbit/s × 1.25)
= 22.28 Gbit/s

Not sure where 22.98 Gbit/s comes from, but it's fairly close to the calculated figure anyway.

I mis-copied it from the link; it should be 22.28 Gbit/s. And your answer also matches: 17.82 × 1.25 = 22.28.

To confirm, regardless of whether the color depth is 8 bit, 10 bit, or 12 bit, HDMI before 2.1 would break it into 8-bit blocks and transform them into 10-bit sequences. So we always need to multiply by 1.25.

HDMI 2.0 has a data rate of 14.4 Gbit/s. Since this is less than 17.82 Gbit/s, we cannot use this interface. But if we go to 4:2:2 or to 8-bit, the required data rate is only 14.26 Gbit/s, and this interface would work.

Wikipedia says the encoding scheme for HDMI 2.1 is 16b/18b. So, E would be 1.125 in this case. DP 2.0 is 128b/132b, so we would have to multiply by 1.03125 for bit rate.
(DP before 2.0 is the same as HDMI before 2.1 for the line code overhead.)

Am I correct so far?

How about 4:2:2 and 4:2:0? 4:2:0 uses a quarter of the data for color. Doing that, my answer matches https://www.extron.com/product/videotools.aspx. (What they call data rate is actually bit rate.)

But 4:2:2 should use only half the data for color. Yet I have to use a factor of 0.7 instead of 0.5 to get my answer to match the Extron or the CEDIA link. Why the extra 0.2 for 4:2:2, when it is not needed for 4:4:4 or 4:2:0?
 
I mis-copied it from the link; it should be 22.28 Gbit/s. And your answer also matches: 17.82 × 1.25 = 22.28.

To confirm, regardless of whether the color depth is 8 bit, 10 bit, or 12 bit, HDMI before 2.1 would break it into 8-bit blocks and transform them into 10-bit sequences. So we always need to multiply by 1.25.

HDMI 2.0 has a data rate of 14.4 Gbit/s. Since this is less than 17.82 Gbit/s, we cannot use this interface. But if we go to 4:2:2 or to 8-bit, the required data rate is only 14.26 Gbit/s, and this interface would work.

Wikipedia says the encoding scheme for HDMI 2.1 is 16b/18b. So, E would be 1.125 in this case. DP 2.0 is 128b/132b, so we would have to multiply by 1.03125 for bit rate.
(DP before 2.0 is the same as HDMI before 2.1 for the line code overhead.)

Am I correct so far?
That is all correct. And DVI also uses the same design as HDMI 1.0. But like I said, it is much simpler to calculate data rate and compare to the max data rate of each interface rather than calculate separate figures for each one using the different line code factors.
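
To make that comparison concrete, here is a small sketch; the max data rate figures are the ones already quoted in this thread (HDMI 1.4 = 8.16 Gbit/s, HDMI 2.0 = 14.4 Gbit/s), and other interfaces can be added the same way:

```python
# Max data rates in bit/s (line code overhead already removed), so the same
# uncompressed data rate figure can be compared against every interface.
MAX_DATA_RATE = {
    "HDMI 1.4": 8.16e9,
    "HDMI 2.0": 14.4e9,
}

def interfaces_that_fit(required_bps):
    return [name for name, cap in MAX_DATA_RATE.items() if required_bps <= cap]

print(interfaces_that_fit(17.82e9))  # []            (4K60 10-bit 4:4:4)
print(interfaces_that_fit(14.26e9))  # ['HDMI 2.0']  (4K60 8-bit 4:4:4)
```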

https://www.extron.com/product/videotools.aspx. (What they call data rate is actually bit rate.)

Unfortunately bit rate, data rate, bandwidth, etc. are all used quite interchangeably. But the real "correct" definitions are data rate = rate of bits representing data, bit rate = rate of all bits, regardless of what they represent.

But 4:2:2 should use only half the data for color. Yet I have to use a factor of 0.7 instead of 0.5 to get my answer to match the Extron or the CEDIA link. Why the extra 0.2 for 4:2:2, when it is not needed for 4:4:4 or 4:2:0?
Video is 1/3 luminance, 2/3 color. The subsampling ratios (1/2 for 4:2:2, 1/4 for 4:2:0) are only applied to the color part.

4:2:2 cuts the color data in half (but not the luminance). The total becomes:
[1/3 + (half of 2/3)] = (1/3 + 1/3) = 2/3

4:2:2 only requires 2/3 (0.667) the bit rate of RGB or 4:4:4.

4:2:0 reduces the color data to 1/4, so the total becomes:
[1/3 + (quarter of 2/3)] = (1/3 + 1/6) = (2/6 + 1/6) = (3/6) = (1/2)

4:2:0 requires half the bit rate of RGB or 4:4:4.
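
Written out as a small helper (the function name is mine, just to illustrate the arithmetic):

```python
from fractions import Fraction

def chroma_fraction(subsampling):
    """Fraction of the full 4:4:4 data kept: 1/3 luma + (2/3 chroma x ratio)."""
    ratio = {"4:4:4": Fraction(1), "4:2:2": Fraction(1, 2), "4:2:0": Fraction(1, 4)}
    return Fraction(1, 3) + Fraction(2, 3) * ratio[subsampling]

print(chroma_fraction("4:2:2"))  # 2/3
print(chroma_fraction("4:2:0"))  # 1/2
```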
 
Video is 1/3 luminance, 2/3 color. The subsampling ratios (1/2 for 4:2:2, 1/4 for 4:2:0) are only applied to the color part.

4:2:2 cuts the color data in half (but not the luminance). The total becomes:
[1/3 + (half of 2/3)] = (1/3 + 1/3) = 2/3

4:2:2 only requires 2/3 (0.667) the bit rate of RGB or 4:4:4.

4:2:0 reduces the color data to 1/4, so the total becomes:
[1/3 + (quarter of 2/3)] = (1/3 + 1/6) = (2/6 + 1/6) = (3/6) = (1/2)

4:2:0 requires half the bit rate of RGB or 4:4:4.

So, according to this, to calculate the data rate for 4K 10-bit, you would have to do
4400 px × 2250 px × 60 Hz × 10 × (1 + 2 × c)
where c = 1 for 4:4:4,
c = 0.5 for 4:2:2,
c = 0.25 for 4:2:0

In your way of calculating for 4:2:2,
4:2:2 cuts the color data in half (but not the luminance). The total becomes:
[1/3 + (half of 2/3)] = (1/3 + 1/3) = 2/3

But I found that the 4:2:2 calculations only match the CEDIA link if the color data is cut by only 30%, i.e.
[1/3 + (70% of 2/3)] = (1/3 + 2/3 × 7/10)

That 7/10 should have been 1/2 or 5/10. Why do I have to add an extra 2/10 to it? Or is that CEDIA link wrong? Or does the blanking interval increase for 4:2:2?

It looks like CEDIA is wrong. My answers now match AcousticFrontiers when using only half the color data for 4:2:2.
 
So, according to this, to calculate the data rate for 4K 10-bit, you would have to do
4400 px × 2250 px × 60 Hz × 10 × (1 + 2 × c)
where c = 1 for 4:4:4,
c = 0.5 for 4:2:2,
c = 0.25 for 4:2:0
Yes.
But I found that the 4:2:2 calculations only match the CEDIA link if the color data is cut by only 30%, i.e.
[1/3 + (70% of 2/3)] = (1/3 + 2/3 × 7/10)

That 7/10 should have been 1/2 or 5/10. Why do I have to add an extra 2/10 to it? Or is that CEDIA link wrong? Or does the blanking interval increase for 4:2:2?
Their number doesn't make sense; it must be a typo. 4:2:2 multiplies the overall result by 2/3, and 4:2:0 multiplies it by 1/2.
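
Putting the formula and the chroma factor together, a quick check of the 4K60 10-bit figures (helper name is just for illustration):

```python
def data_rate_bps(h_total, v_total, refresh_hz, bpc, c):
    """c = 1 for 4:4:4, 0.5 for 4:2:2, 0.25 for 4:2:0."""
    return h_total * v_total * refresh_hz * bpc * (1 + 2 * c)

for label, c in [("4:4:4", 1.0), ("4:2:2", 0.5), ("4:2:0", 0.25)]:
    print(label, data_rate_bps(4400, 2250, 60, 10, c) / 1e9)
# 4:4:4 17.82, 4:2:2 11.88, 4:2:0 8.91 (Gbit/s)
```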
 
How does compression work for live-streaming?

Here are some Netflix calculations:
Resolution   FPS     Chroma   HDR/SDR   Data Rate
4K           24 Hz   4:2:0    10 bit    4.46 Gb/s
1080p        24 Hz   4:2:0    8 bit     0.89 Gb/s
Netflix requires 25 Mbps and 5 Mbps respectively. So, each uses 178.2x compression.

The numbers for Google Stadia are:
Resolution   FPS     Chroma   HDR/SDR   Data Rate   Req.      Compression
4K           60 Hz   4:2:0    10 bit    8.91 Gb/s   35 Mb/s   254.6x
1080p        60 Hz   4:2:0    8 bit     1.78 Gb/s   10 Mb/s   178.2x

Using 4:2:2, the data rate and compression are:
4k -- 11.88 Gb/s -- 339.4x
HD -- 2.38 Gb/s -- 237.6x

Assuming 4:2:0 for both, Netflix and Stadia use the same compression for HD.
To use the same compression for 4K, Stadia would require 50 Mb/s.
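
The compression figures above are just the uncompressed data rate divided by the streaming bitrate; for example (using the unrounded data rates behind the table figures):

```python
def compression_ratio(uncompressed_bps, stream_bps):
    return uncompressed_bps / stream_bps

print(round(compression_ratio(4.455e9, 25e6), 1))  # 178.2  Netflix 4K24 4:2:0 10-bit
print(round(compression_ratio(0.891e9, 5e6), 1))   # 178.2  Netflix 1080p24 4:2:0 8-bit
print(round(compression_ratio(8.91e9, 35e6), 1))   # 254.6  Stadia 4K60 4:2:0 10-bit
print(round(compression_ratio(1.782e9, 10e6), 1))  # 178.2  Stadia 1080p60 4:2:0 8-bit
```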

Netflix probably streams using pre-compressed video files. They were probably compressed using HEVC.

With a live sporting event, a couple of seconds of delay would be alright. This would allow the frames to be buffered and compressed with a hardware encoder.

But playing a streamed game should preferably not have a latency of more than 100 ms. Assuming 50 ms of network latency, the compression latency budget would be 50 ms. This would mean compressing 3 frames of 1080p into 0.5 Mb in 50 ms and pushing them through (at 60 fps). Does this seem right? How might it be done?
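
A tiny sketch of that budget arithmetic, using the 10 Mb/s 1080p60 figure from above:

```python
fps = 60
stream_bps = 10e6             # Stadia 1080p target bitrate from above
budget_s = 0.050              # 50 ms compression budget
frames = fps * budget_s       # frames produced in the budget window
bits = stream_bps * budget_s  # bits available for those frames
print(frames, bits / 1e6, (stream_bps / fps) / 1e3)  # 3.0 frames, 0.5 Mb, ~167 kb per frame
```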
 
This is the number of bits per second required for the video/audio stream. The blanking intervals for 4K 60 Hz of 560 px and 90 px are defined by the CTA-861 standard, VIC 97. This is the standard used for TVs, but monitors may use different timings (such as those derived from the CVT-RB formula).

Glenwing, I have found this file very helpful. What about 1440p monitors? Is there a file which lists their blanking intervals?

I came across this monitor. I was wondering if it can do HDR with HDMI 1.4, or if it is just simulating it. Based on the 1080p and 2160p figures, it seems that the data rate at 4:4:4 would be 8.91 Gbps, which is above the HDMI 1.4 maximum data rate of 8.16 Gbps.

It looks like it might be like those monitors that have an "HDR effect" through an "algorithm", i.e. a software-based implementation, with DP 1.2.
 
Glenwing, I have found this file very helpful. What about 1440p monitors? Is there a file which lists their blanking intervals?

I came across this monitor. I was wondering if it can do HDR with HDMI 1.4, or if it is just simulating it. Based on the 1080p and 2160p figures, it seems that the data rate at 4:4:4 would be 8.91 Gbps, which is above the HDMI 1.4 maximum data rate of 8.16 Gbps.

It looks like it might be like those monitors that have an "HDR effect" through an "algorithm", i.e. a software-based implementation, with DP 1.2.
You can use the CVT-RB formula.

You may find this calculator helpful:
https://linustechtips.com/main/topi...0&V=1440&F=60&calculations=show&formulas=show

The BenQ GW2765HT does not support HDR. It does support 10 bpc color, which is not synonymous with HDR.
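
For reference, here is a rough sketch of the CVT-RB (reduced blanking v1) timing estimate, based on the constants I believe the VESA CVT spec uses (160-pixel total horizontal blank, 460 µs minimum vertical blank, 3-line vertical front porch, 6-line minimum back porch, 5-line vsync for 16:9). It skips the pixel-clock rounding step, so treat it as an approximation and verify against the spec or the calculator linked above:

```python
import math

def cvt_rb_totals(h_active, v_active, refresh_hz, v_sync_lines=5):
    """Rough CVT-RB v1 total pixels/lines for a 16:9 mode (vsync width differs by aspect ratio)."""
    RB_H_BLANK = 160            # fixed total horizontal blanking, pixels
    RB_MIN_V_BLANK_US = 460.0   # minimum vertical blanking period, microseconds
    RB_V_FPORCH, MIN_V_BPORCH = 3, 6

    h_period_us = ((1_000_000 / refresh_hz) - RB_MIN_V_BLANK_US) / v_active
    vbi_lines = math.floor(RB_MIN_V_BLANK_US / h_period_us) + 1
    vbi_lines = max(vbi_lines, RB_V_FPORCH + v_sync_lines + MIN_V_BPORCH)
    return h_active + RB_H_BLANK, v_active + vbi_lines

h_total, v_total = cvt_rb_totals(2560, 1440, 60)
print(h_total, v_total)                    # 2720 1481
print(h_total * v_total * 60 * 30 / 1e9)   # ~7.25 Gbit/s at 10 bpc 4:4:4
```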
 