News: Nvidia's RTX 5090, 5080 reportedly have the same L1 cache size per SM as the RTX 4090, 4080

I was waiting to get a new 5000 series GPU to replace my old 1000 series; but I think instead I might look for a discount 4080 in the coming months.

Or I might go for an AMD and Intel GPU combo with some cash left over.
 
Don't do that. You need the GeForce 5xxx for the fake frames, not because it has more cache. With more TGP and GDDR7, it will get a nice boost in fps (AI-generated imagery).

What we need is a good fake-frame generator and an AI to play the games for you.
 
What a terrible card. The 5080 is the worst of the next-gen cards. Least improvements... super high price... meh. Nvidia is really insane to call this new generation... well, new. It's the same old fps, VRAM, same old everything.

Only the 5090 is new and a real upgrade. One card doesn't make a new generation, especially when it costs more than $2,000 (€2,600 in Europe).
 
I was waiting to get a new 5000 series GPU to replace my old 1000 series; but I think instead I might look for a discount 4080 in the coming months.

Or I might go for an AMD and Intel GPU combo with some cash left over.
Wait for RDNA4 prices, they might be OK-ish.
 
Is this the same Ada silicon, which itself is Ampere silicon, same shader units and zero gain in arch?
Ampere was made on Samsung 8nm, while Ada was made on TSMC 4N. They were never the same silicon.

The microarchitectures of the SMs might've been similar, but everything else about Ada was way better (except for memory bandwidth).

In the case of Blackwell, I know Nvidia was keen to highlight some microarchitectural differences. I haven't gone through the deep dive in detail, but one slide I remember seeing was about how they now support integer computations on all vector pipelines, whereas Ada supported them on only half.
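
To make that concrete, here's a minimal CUDA sketch (my own illustration, not anything from Nvidia's materials) of the kind of kernel where that matters: the integer index hashing competes with the FP32 math for issue slots, so how many datapaths can execute INT32 directly affects throughput:

```cuda
// mixed.cu - hypothetical sketch, not Nvidia's example. The loop mixes
// INT32 work (index hashing) with FP32 work (fused multiply-add), so the
// number of vector pipelines able to issue INT32 ops matters here.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void mixed_int_fp(const float* in, float* out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    float acc = 0.0f;
    unsigned idx = i;
    for (int k = 0; k < 64; ++k) {
        idx = idx * 1103515245u + 12345u;    // INT32: LCG-style index hash
        acc = fmaf(in[idx % n], 0.5f, acc);  // FP32: fused multiply-add
    }
    out[i] = acc;
}

int main()
{
    const int n = 1 << 20;
    float *in, *out;
    cudaMalloc(&in, n * sizeof(float));
    cudaMalloc(&out, n * sizeof(float));
    cudaMemset(in, 0, n * sizeof(float));

    mixed_int_fp<<<(n + 255) / 256, 256>>>(in, out, n);
    cudaDeviceSynchronize();

    printf("done\n");
    cudaFree(in);
    cudaFree(out);
    return 0;
}
```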

In general, the microarchitectures of GPUs tend to evolve a lot more slowly than those of CPUs. So, I wouldn't worry too much about this particular aspect. Pay more attention to the changes in specialized engines and other parameters of the designs.
 
I was waiting to get a new 5000 series GPU to replace my old 1000 series; but I think instead I might look for a discount 4080 in the coming months.
I'm pretty sure they already stopped production of the RTX 4080 last month, specifically to avoid being stuck with a lot of inventory to burn down.

Once you adjust for performance differences, I doubt you're going to get a very good deal on any RTX 4080s you can find (but I could be wrong). How does the RTX 5070 Ti compare to it? If you can't swing the price of an RTX 5080, maybe that's the one to go for.
 
What a terrible card. The 5080 is the worst of the next-gen cards. Least improvements... super high price... meh.
It's launching at $1k, whereas the RTX 4080 launched at $1.2k. Seems like progress?

Nvidia is really insane to call this new generation... well, new. It's the same old fps, VRAM, same old everything.
Not the same VRAM - it's GDDR7. I think you mean it's the same quantity, but it's not as if they just rebadged the RTX 4000 silicon.
 
I'm pretty sure they already stopped production of the RTX 4080 last month, specifically to avoid being stuck with a lot of inventory to burn down.

Once you adjust for performance differences, I doubt you're going to get a very good deal on any RTX 4080s you can find (but I could be wrong). How does the RTX 5070 Ti compare to it? If you can't swing the price of an RTX 5080, maybe that's the one to go for.
I am just very averse to giving Nvidia any more money than necessary. I will scour the net for a nice clearance deal.
 
I am just very averse to giving Nvidia any more money than necessary. I will scour the net for a nice clearance deal.
Oh, I'm all for you getting a good deal! I was just trying to warn you that they might not be sitting on as much inventory as you seemed to assume. But, by all means, go for it!

I once got a great deal on a GTX 980 Ti, back around the launch of Pascal. Pretty much exactly what you said. There were also some deals to be had on Ampere cards, around the time of the RTX 4000 launch. But, that was around the time of the crypto crash. As for this launch, I've been seeing news about how they stopped production on some RTX 4000 models way back in December and I think demand is robust enough that we might not see such amazing discounts.

If I were in your shoes, I'd be keeping a close watch on stock levels and price trends. Especially if you don't want to completely miss out.

Good luck!
: )
 
It's launching at $1k, whereas the RTX 4080 launched at $1.2k. Seems like progress?


Not the same VRAM - it's GDDR7. I think you mean it's the same quantity, but it's not as if they just rebadged the RTX 4000 silicon.
Don't make irrelevant comparisons. Comparisons in the product grid are made with the latest product, not the penultimate iteration, so that the new one looks "good enough". 5080=4080 Super, 5070Ti=4070Ti Super, 5070=4070Ti, 5090=4090Ti.

And no one cares about the improvements in the additional blocks except miners and AI providers - those blocks don't render frames. Especially when the main SM blocks haven't changed for 3 generations. And as we can see from the preliminary test results, the new GDDR7 doesn't affect the result at all.
 
Don't make irrelevant comparisons.
I don't consider it irrelevant. With their naming scheme, Nvidia is basically telling us what we should compare it against.

Comparisons in the product grid are made with the latest product, not the penultimate iteration, so that the new one looks "good enough". 5080=4080 Super, 5070Ti=4070Ti Super, 5070=4070Ti, 5090=4090Ti.
That's like telling me I should only compare a company's financial performance to the previous quarter, not the same quarter of the previous year, even though there are of course good reasons to do exactly that!

Okay, from a buyer's perspective, if you're just trying to understand how the new products compare with the existing ones, you might compare only to the most recent, but you might also do so on a perf/W or perf/$ basis rather than going by model number at all. That makes sense.

However, if I'm comparing a product at launch, there are reasons why it's worth looking at the corresponding prev-gen product (as defined by Nvidia), when it launched. You don't have to look at that comparison, if you don't want to, but I will compare what I want.

And no one cares about the improvements in the additional blocks except miners and AI providers - those blocks don't render frames.
DLSS 4 cares. Ray reconstruction and framegen care.

Especially when the main SM blocks haven't changed for 3 generations.
Didn't they add double vector integer throughput? If so, then how do you know what other little tweaks are lurking in there?

And as we can see from the preliminary test results, the new GDDR7 doesn't affect the result at all.
Do those tests include ray tracing? Even at more extreme settings? And is that just on the RTX 5090 or also lower-end models?
 
That's like telling me I should only compare a company's financial performance to the previous quarter, not the same quarter of the previous year, even though there are of course good reasons to do exactly that!
Of course - that's why the numbers are compared year-over-year, not year-to-something-else! The point is to get relative-change numbers against the latest available product. When you compare to an older product, you get a bigger change, but it's relative to that older product - what useful information does that give us that we didn't already know?

DLSS 4 cares. Ray reconstruction and framegen care.
That doesn't render a frame. It's optional post-processing, and it's disabled in the vast majority of cases anyway.
Didn't they add double vector integer throughput? If so, then how do you know what other little tweaks are lurking in there?
The current ratios of 2 FP, or 1 FP : 1 INT, have not yet been exceeded in any workload.

Do those tests include ray tracing? Even at more extreme settings? And is that just on the RTX 5090 or also lower-end models?
The only tests where this showed an improvement were the AI tests, which don't matter.

Full comparisons after the product's release will show all the changes.
Everyone can vote with their wallet - no need to generate FOMO over DLSS4/MFG/OMG etc. fake technology )
 
Oh, I'm all for you getting a good deal! I was just trying to warn you that they might not be sitting on as much inventory as you seemed to assume. But, by all means, go for it!

I once got a great deal on a GTX 980 Ti, back around the launch of Pascal. Pretty much exactly what you said. There were also some deals to be had on Ampere cards, around the time of the RTX 4000 launch. But, that was around the time of the crypto crash. As for this launch, I've been seeing news about how they stopped production on some RTX 4000 models way back in December and I think demand is robust enough that we might not see such amazing discounts.

If I were in your shoes, I'd be keeping a close watch on stock levels and price trends. Especially if you don't want to completely miss out.

Good luck!
: )
More importantly, I don't want to encourage Ngreedia any more. I want to humble Jensen into submission to his customers; we are the ones he needs to worship and respect, and man, I feel very disrespected by his pricing.
 
The current ratios of 2 FP, or 1 FP : 1 INT, have not yet been exceeded in any workload.
According to whom?

They also claim Shader Execution Reordering (SER) was enhanced.

In the architectural overview, they also showed substantial ray tracing improvements:

[Two slides from Nvidia's Blackwell architecture overview showing the ray tracing improvements]

 
More importantly, I don't want to encourage Ngreedia any more. I want to humble Jensen into submission to his customers; we are the ones he needs to worship and respect, and man, I feel very disrespected by his pricing.
He'll be humbled, but that's going to come at the hands of AI customers - not gamers.

Again, I'm with you... just also trying to be realistic. He simply doesn't need gamers, right now. It's almost a little surprising he's even showing the gaming community this much attention, these days.
 
According to whom?
You can look at the traces on C&C or make your own.
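
In case anyone wants to try the "make your own" route, here's a rough sketch (mine, heavily simplified): time a dependent FP32 chain against a dependent INT32 chain and compare. With enough warps in flight, latency hides and the aggregate times approach issue-rate limits. A real analysis would look at profiler instruction-mix counters rather than wall-clock time, so treat this as a starting point only.

```cuda
// issue_ratio.cu - rough, simplified sketch, not a rigorous benchmark.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void fp32_loop(float* out)
{
    float a = (float)threadIdx.x;
    for (int k = 0; k < 100000; ++k)
        a = fmaf(a, 1.000001f, 0.5f);   // dependent FP32 FMA chain
    out[blockIdx.x * blockDim.x + threadIdx.x] = a;
}

__global__ void int32_loop(int* out)
{
    int a = threadIdx.x;
    for (int k = 0; k < 100000; ++k)
        a = a * 1664525 + 1013904223;   // dependent INT32 multiply-add chain
    out[blockIdx.x * blockDim.x + threadIdx.x] = a;
}

int main()
{
    const int blocks = 1024, threads = 256;
    float* fout;
    int* iout;
    cudaMalloc(&fout, blocks * threads * sizeof(float));
    cudaMalloc(&iout, blocks * threads * sizeof(int));

    cudaEvent_t t0, t1;
    cudaEventCreate(&t0);
    cudaEventCreate(&t1);

    fp32_loop<<<blocks, threads>>>(fout);  // warm-up launch
    cudaDeviceSynchronize();

    cudaEventRecord(t0);
    fp32_loop<<<blocks, threads>>>(fout);
    cudaEventRecord(t1);
    cudaEventSynchronize(t1);
    float fp_ms = 0.0f;
    cudaEventElapsedTime(&fp_ms, t0, t1);

    cudaEventRecord(t0);
    int32_loop<<<blocks, threads>>>(iout);
    cudaEventRecord(t1);
    cudaEventSynchronize(t1);
    float int_ms = 0.0f;
    cudaEventElapsedTime(&int_ms, t0, t1);

    printf("FP32: %.2f ms  INT32: %.2f ms  ratio: %.2f\n",
           fp_ms, int_ms, int_ms / fp_ms);

    cudaEventDestroy(t0);
    cudaEventDestroy(t1);
    cudaFree(fout);
    cudaFree(iout);
    return 0;
}
```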

They also claim Shader Execution Reordering (SER) was enhanced.
Is this the same technology that was used in only one game, CP2077? And not even fully - only for PT (path tracing). To implement it, they had to modify the game engine and the shader executable code. Are there any adoption projections, two years after release?
 
Is this the same technology that was used in only one game, CP2077?
Shader Execution Reordering is a generic technology, and I have no idea how commonly the mechanism is used. It solves the "divergence" problem intrinsic to the SIMD nature of these GPUs. Ray tracing is one case where it happens a lot, because a batch of rays can bounce in different directions, but it happens in plenty of other cases as well.
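
Here's a toy CUDA sketch of what that divergence looks like (my own illustration; it has nothing to do with how SER is actually implemented in hardware). A per-ray "material" branch forces a warp whose lanes disagree to run both shading paths back to back:

```cuda
// divergence.cu - toy sketch of SIMD divergence. All 32 threads of a warp
// share one instruction stream, so when a branch splits them, the warp
// executes each path serially with part of its lanes masked off.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void shade(const int* material, float* out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    float c;
    if (material[i] == 0) {
        // stand-in for a cheap diffuse shader
        c = 0.1f;
        for (int k = 0; k < 32; ++k)
            c = fmaf(c, 1.01f, 0.001f);
    } else {
        // stand-in for an expensive glossy/refraction shader
        c = 0.2f;
        for (int k = 0; k < 512; ++k)
            c = fmaf(c, 0.999f, 0.002f);
    }
    out[i] = c;
}

int main()
{
    const int n = 1 << 20;
    int* mat;
    float* out;
    cudaMalloc(&mat, n * sizeof(int));
    cudaMalloc(&out, n * sizeof(float));

    // All-zero materials: every warp is coherent, only the cheap path runs.
    // Randomly mixed materials would make most warps execute both paths.
    cudaMemset(mat, 0, n * sizeof(int));

    shade<<<(n + 255) / 256, 256>>>(mat, out, n);
    cudaDeviceSynchronize();

    printf("ok\n");
    cudaFree(mat);
    cudaFree(out);
    return 0;
}
```

Regrouping threads so that ones taking the same path share a warp - which is roughly what SER does below the API level, as I understand it - avoids paying for both paths in every warp.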

I'm not sure how you picked CP2077, but maybe you just glanced at this article, where CP2077 Overdrive was used as a vehicle for examining the technology?

Yes, Nvidia did provide a way to enable/disable it, probably in case it has bugs (which has happened before) or causes performance regressions. Fortunately, that also makes it easy to quantify the benefit it provides.

And not even fully - only for PT (path tracing). To implement it, they had to modify the game engine and the shader executable code. Are there any adoption projections, two years after release?
If something like Unreal Engine incorporates it, then it could show up in a lot of games.

Anyway, I didn't want to go off on a tangent about SER. My point was simply that Blackwell's CUDA cores are not identical to those in Ada. They have design changes, even if not huge ones.