Nvidia Blackwell and GeForce RTX 50-Series GPUs: Rumors, specifications, release dates, pricing, and everything we know

My next card will be an iGPU :)
The way this is all headed, I'm starting to agree. I MAY have bought my last dedicated GPU already.

Gaming, in general, is becoming less and less important to me. I literally can't stomach more than a few minutes of gaming most days.

And I'm pretty sure local models like DeepCode (more importantly, future ones) will run on M-class chips.

Who knows. Maybe my next "gaming" computer will be the next Mac Studio.
 
Still DP 1.4?
There have been many rumors about the arch, but the most promising seems to be not an increase in cores, but an increase in L1 cache to improve the occupancy of the execution units. Low occupancy on a huge die seems to be pushing Ada to its limits.
 
So, when Tom's refers to "Bandwidth (GBps) 2304," that has nothing to do with video output bandwidth? How do we know that? You're thinking it's memory bandwidth? So why doesn't Tom's tell us something that is relevant, like the video output bandwidth?
I already answered your question. Absolutely, the bandwidth refers to memory bandwidth. Video output bandwidth isn't even close to hitting the same numbers as raw memory bandwidth. But also, video output bandwidth will be determined by the supported outputs.

HDMI 2.1? It has a maximum output bandwidth of 48 Gbps, but it also uses 16b/18b encoding — meaning that for signal integrity and such, for every 16 bits of data, 18 bits are transmitted. That gives actual data bandwidth of just 42.667 Gbps (48 * 16 / 18 = 42.666...).

DisplayPort 2.1 has the fastest output bandwidths, but devices also aren't required to support them all. UHBR10 runs at 40 Gbps, UHBR13.5 at 54 Gbps, and UHBR20 at 80 Gbps. It uses 128b/132b encoding, so the ratio of data to raw bandwidth is higher, and the 80 Gbps link has real-world data bandwidth capped at 77.58 Gbps (80 * 128 / 132).

(Some of that data bandwidth is actually reserved for audio, but that's beside the point.)

So, that's gigabits per second, and memory bandwidth typically gets reported in gigabytes per second, so divide those two figures by eight. That means with current video standards, the fastest output bandwidth for a display on HDMI 2.1 is only 5.33 GB/s, and for DisplayPort 2.1 it tops out at 9.70 GB/s.
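If it helps to see that arithmetic in one place, here's a quick Python sketch of the encoding-overhead math above; the link rates and encodings are the HDMI 2.1 and DP 2.1 figures already quoted, nothing GPU-specific:

```python
# Back-of-the-envelope: effective display-link data rates after line encoding.
# Each entry: raw link rate (Gbps), data bits, total bits per encoded group.
links = {
    "HDMI 2.1 (48G FRL, 16b/18b)": (48.0, 16, 18),
    "DP 2.1 UHBR10 (128b/132b)":   (40.0, 128, 132),
    "DP 2.1 UHBR13.5 (128b/132b)": (54.0, 128, 132),
    "DP 2.1 UHBR20 (128b/132b)":   (80.0, 128, 132),
}

for name, (raw_gbps, data_bits, total_bits) in links.items():
    data_gbps = raw_gbps * data_bits / total_bits  # strip encoding overhead
    data_gbyte = data_gbps / 8                     # gigabits -> gigabytes
    print(f"{name}: {data_gbps:.2f} Gbps = {data_gbyte:.2f} GB/s")

# HDMI 2.1:      42.67 Gbps = 5.33 GB/s
# DP 2.1 UHBR20: 77.58 Gbps = 9.70 GB/s
```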

Is that "relevant" enough for you? Video standards are very slow to change, so other than probably adding DP2.1 support, Blackwell won't really do anything more than other graphics solutions when it comes to video output bandwidth.

You're actually so far out in la la land that it's hardly worth my time, but hopefully you've learned something. Again, this is why even integrated graphics can drive 4K 10-bit HDR displays at high refresh rates with modern solutions (I think Meteor Lake may support up to 240 Hz via DSC, and the same for AMD's latest Ryzen AI and prior generation Ryzen 7000/8000 series parts). A modern dual-channel system memory configuration with DDR5-6400, as an example, has 102.4 GB/s of memory bandwidth, which is over ten times the video output bandwidth requirement of DP2.1 UHBR20.
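The memory-side comparison is the same sort of back-of-the-envelope math (a sketch only; real sustained bandwidth varies by platform):

```python
# Rough system-memory bandwidth vs. the DP 2.1 UHBR20 data rate from above.
mt_per_s = 6400e6        # DDR5-6400: 6400 mega-transfers per second
bytes_per_transfer = 8   # 64-bit channel
channels = 2             # dual-channel

mem_bw = mt_per_s * bytes_per_transfer * channels / 1e9  # GB/s
uhbr20 = 80 * 128 / 132 / 8                              # GB/s, from the link math

print(f"DDR5-6400 dual-channel: {mem_bw:.1f} GB/s")  # 102.4 GB/s
print(f"DP 2.1 UHBR20 data:     {uhbr20:.2f} GB/s")  # 9.70 GB/s
print(f"Ratio: {mem_bw / uhbr20:.1f}x")              # ~10.6x
```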
What I want to know is how fast an Nvidia GeForce RTX 5090 can run a 10-bit, 4K monitor.
Assuming it has DisplayPort 2.1 UHBR20 support, which I'm pretty sure it will, it will be able to display uncompressed 4K 10-bit color at up to 267 Hz, or 8-bit color at up to 323 Hz. DSC (Display Stream Compression) can offer 2:1 and 3:1 compression ratios with "visually lossless" compression, and would thus allow up to a theoretical 4K 10-bit at up to 800 Hz. Naturally, no modern displays have anything close to that refresh rate, with the fastest currently topping out at 4K and 240 Hz as far as I'm aware. There are 500 Hz (maybe slightly higher) displays, but those are only 1080p.
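To sanity-check those refresh rates, here's a rough sketch. The 1.17 blanking/timing overhead factor is an assumed value chosen to approximate reduced-blanking (CVT-R2 style) timings, not a spec number, so treat the outputs as ballpark figures:

```python
# Rough ceiling for 4K refresh rate over DP 2.1 UHBR20 (sketch only; exact
# figures depend on the actual blanking timings, audio packets, etc.).
uhbr20_data_bps = 80e9 * 128 / 132   # ~77.58 Gbps of usable data

width, height = 3840, 2160
bits_per_pixel = 30                  # 10 bits per channel, RGB

blanking_overhead = 1.17             # ASSUMED factor for blanking/timing overhead
bits_per_frame = width * height * bits_per_pixel * blanking_overhead

max_hz_uncompressed = uhbr20_data_bps / bits_per_frame
max_hz_dsc_3to1 = max_hz_uncompressed * 3  # DSC at a 3:1 ratio

print(f"Uncompressed: ~{max_hz_uncompressed:.0f} Hz")  # ~266 Hz, near the ~267 Hz above
print(f"With 3:1 DSC: ~{max_hz_dsc_3to1:.0f} Hz")      # ~799 Hz, i.e. the "up to 800 Hz"
```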
Still DP 1.4?
There have been many rumors about the arch, but the most promising seems to be not an increase in cores, but an increase in L1 cache to improve the occupancy of the execution units. Low occupancy on a huge die seems to be pushing Ada to its limits.
I'd be shocked if Nvidia didn't support DP2.1 this round. There were no DP2.1 monitors two years ago when Ada launched, though they started to arrive not too long after. There are enough now that I think it would be foolhardy to stick to a DP1.4a solution this upcoming generation.
 
It’s interesting how Nvidia is balancing their launches to manage pricing and demand. The idea of budget cards using high-end specs like 24 GB GDDR7 is amusing but makes sense in a market where performance is key. It’ll be fascinating to see how they strategize the 5090 and 5080... especially with a 384-bit bus! The tier differentiation could lead to some unexpected performance jumps.
Latest rumor is:
5090 = 512-bit bus, 32 GB
5080 = 256-bit bus, 16 GB, half the cores
24 GB != 24 Gb; those are 24 Gb GDDR7 chips, i.e. 3 GB each.
Also, you sound like an LLM.
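For the GB vs. Gb part, the capacity math falls out of the bus width if you assume the usual one GDDR7 chip per 32-bit channel and no clamshell mode (a sketch, not a confirmed spec):

```python
# Rough capacity math for the rumored configs.
def card_capacity_gb(bus_width_bits: int, gb_per_chip: int) -> int:
    chips = bus_width_bits // 32  # one chip per 32-bit channel
    return chips * gb_per_chip

print(card_capacity_gb(512, 2))  # 5090 rumor: 16 x 2 GB (16 Gb) chips = 32 GB
print(card_capacity_gb(256, 2))  # 5080 rumor:  8 x 2 GB (16 Gb) chips = 16 GB
print(card_capacity_gb(256, 3))  # 8 x 3 GB (24 Gb) chips = 24 GB on a 256-bit bus
```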
 
For anyone following, this article is mostly up to date now. We have the whitepaper with full specs for the 5070 and above, though we don't have a hard launch date yet on the 5070 and 5070 Ti. There's additional information I may add in the coming days regarding some of the architectural stuff as well.