News Nvidia's RTX 40-Series Laptops Hint at Future RTX 4060 and 4050 Desktop Specs

The RTX 40-series continues to be a disappointment. I'm surprised how close together the laptop 4070, 4060, and 4050 are. The specs make it look like there won't be a meaningful difference in performance unless you step up to the 4080 -- a GPU that will likely only appear in oversized, gaudy, overpriced gamer laptops.

Hopefully some manufacturer puts the 4070 in a 2-in-1 device, sells it in the USA, and prices it like other gaming laptops instead of putting it in some weird premium category. However, given what Nvidia has done on pricing, I'm not sure I want to know how expensive this year's systems are going to be.
 
There are plenty of games now that can exceed 8GB of VRAM use.

Isn't this just because the memory is available? Many games simply try to saturate graphics memory with textures and shader code; it's there, so why not use it? A 12GB GPU might show 12GB in use, but that doesn't mean the game can't run with 8GB.

I haven't seen any games that need more than 8GB of VRAM at 1080p, and 1080p would be kind of the target resolution for a 4050, no?

Steam's survey shows 64% of users play at 1080p, 11% play at 1440p, and only 2% at 4K.

To be honest, I could be wrong, though; I don't follow every AAA game release. Needing more than 8GB of VRAM at 1080p just seems like... a lot.
 
I looked up the VRAM requirements of some 2022 PC games with "demanding" graphics (subjective).

Slim pickings in 2022; I really had to dig to find any "AAA" 3D PC games from that year.

But still, none of them required anything more than 8GB VRAM, even for recommended specs.

The 4050 having "just" 8GB of VRAM seems like a non-issue?

VRAM requirements

A Plague Tale: Requiem
Minimum: 4GB VRAM
Recommended: 8GB VRAM

Elden Ring
Minimum: 3GB VRAM
Recommended: 8GB VRAM

Need for Speed Unbound
Minimum: 4GB VRAM
Recommended: 8GB VRAM

Stray
Minimum: 1GB VRAM
Recommended: 3GB VRAM
 
The only game that leaps to mind which can use over 8GB (without mods, etc.) at 1080p is Doom Eternal. 8GB really needs to be the starting point on desktop, though, and the bus widths Nvidia is playing with don't really leave them any options. Given that their charts still show the 30-series covering the lower end, I'm not sure Nvidia is even planning lower desktop parts any time soon.
 
Alternatively, the laptop 4060 and 4050 might both be using AD107 but the laptop 4050 gets its bus cut down to 96-bit from the full 128-bit. This would mean the desktop versions may have higher specs, as NVIDIA is already doing with the other models.
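
For a rough sense of why bus width matters so much here, a back-of-the-envelope sketch (assuming the usual layout of one GDDR6 chip per 32-bit channel and 2GB/16Gb chips; "clamshell" puts two chips on each channel and doubles capacity):

  # Sketch: VRAM capacity options implied by bus width.
  # Assumes one GDDR6 chip per 32-bit channel and 2GB (16Gb) chips;
  # clamshell mode doubles the chips per channel and thus the capacity.
  def capacity_options(bus_width_bits, chip_gb=2):
      channels = bus_width_bits // 32
      return channels * chip_gb, channels * chip_gb * 2

  for bus in (96, 128, 192, 256):
      normal, clamshell = capacity_options(bus)
      print(f"{bus}-bit bus: {normal}GB, or {clamshell}GB in clamshell")
  # 96-bit -> 6GB/12GB, 128-bit -> 8GB/16GB, 192-bit -> 12GB/24GB, 256-bit -> 16GB/32GB

So a 96-bit laptop 4050 would land at 6GB and a 128-bit part at 8GB, unless the pricier clamshell layout is used.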
 
I haven't seen any games that need more than 8GB of VRAM at 1080p, and 1080p would be kind of the target resolution for a 4050, no?
Have you considered the scenario of VR use? The 4050 is likely to be advertised as VR-ready, just like the 3050 was, and VR games can readily eat up tons of VRAM. While the user's primary display may be 1080p, there is a strong possibility of them plugging in a Quest headset or similar to experience VR with the power a PC can provide.

Furthermore, I don't trust big game studios to keep making games that fit within 8 GB VRAM. People said the same thing about 3 GB VRAM in the Radeon HD 7950 and 7970 being enough -- it wasn't!
 
Furthermore, I don't trust big game studios to keep making games that fit within 8 GB VRAM.

Game developers will just target what the average user has, like they always do.

Of the top 12 GPUs in use by Steam users, only one has more than 8GB of VRAM.

The average VRAM available across those cards is about 6.2 GB (see the rough calculation after the list below).

Steam's most popular user GPUs:
  1. GTX 1650: 4 GB VRAM
  2. GTX 1060: 3 GB - 6 GB VRAM
  3. RTX 2060: 6 GB VRAM
  4. GTX 1050 Ti: 4 GB VRAM
  5. RTX 3060 (laptop): 6 GB VRAM
  6. RTX 3060: 12 GB VRAM
  7. RTX 3070: 8 GB VRAM
  8. GTX 1660 SUPER: 6 GB VRAM
  9. RTX 3060 Ti: 8 GB VRAM
  10. GTX 1660 Ti: 6 GB VRAM
  11. GTX 1050: 4 GB VRAM
  12. RTX 3050: 8 GB VRAM
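
For what it's worth, a rough sanity check on that 6.2 GB figure: a simple unweighted average over the twelve cards above, taking the GTX 1060 at its 3 GB floor (Steam's real share-weighted average would differ somewhat):

  # Unweighted average VRAM of the top-12 Steam GPUs listed above.
  # The GTX 1060 is counted at 3 GB (its low end); weighting by actual
  # user share per card would shift the result a little.
  vram_gb = [4, 3, 6, 4, 6, 12, 8, 6, 8, 6, 4, 8]
  print(sum(vram_gb) / len(vram_gb))  # ~6.25 GB, close to the 6.2 GB cited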
 
The only game that leaps to mind which can use over 8GB (without mods, etc.) at 1080p is Doom Eternal. 8GB really needs to be the starting point on desktop, though, and the bus widths Nvidia is playing with don't really leave them any options. Given that their charts still show the 30-series covering the lower end, I'm not sure Nvidia is even planning lower desktop parts any time soon.
From my test suite, I can assure you that Far Cry 6, Forza Horizon 5, Total War: Warhammer 3, and Watch Dogs Legion all see performance fall off at 4K with 8GB, sometimes badly. Those are just four games out of 15. At 1440p, I think most of them do okay, but I know a few other games in the past year have pushed beyond 10GB. I think Resident Evil Village and Godfall did, for example. And yes, you can turn down some settings to get around this, but the point is that we've got 12GB on a 3060 right now, and that feels like it should be the minimum for a mainstream GPU going forward. Anything using AD106 will presumably be limited to 8GB... the same 8GB that we had standard on the R9 290/290X and GTX 1070 over six years ago.

As far as Steam goes, it tracks a very large group of users, many of whom only play on older laptops, running stuff like DOTA and CSGO that will run on old hardware. The fact that the top 12 is dominated by lower-cost GPUs isn't at all surprising; Fortnite hardware stats would probably look the same. But there are also roughly 5% of gamers that do have GPUs with more than 8GB, and more critically, most GPU models don't have more than 8GB. Like, the list of hardware with more than 8GB currently consists of:

RTX 4090
RTX 4080
RTX 4070 Ti
RTX 3090 Ti
RTX 3090
RTX 3080 Ti
RTX 3080 (both 12GB and 10GB)
RTX 3060
RTX 2080 Ti
GTX 1080 Ti
RX 7900 XTX
RX 7900 XT
RX 6950 XT
RX 6900 XT
RX 6800 XT
RX 6800
RX 6750 XT
RX 6700 XT
RX 6700
Radeon VII
Arc A770 16GB

Of those, only the 3060 and 6700 series are remotely mainstream, and Arc if we're going to count that. But the point is we should be moving forward. Going from 8GB on a 256-bit interface to 8GB on a 128-bit interface will at times be a step back, or at best a step sideways.
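
To put rough numbers on that last sentence (memory speeds here are assumptions for illustration, not confirmed specs for any unannounced card):

  # Raw memory bandwidth = (bus width in bits / 8 bytes) * per-pin data rate in Gbps.
  def bandwidth_gbs(bus_bits, data_rate_gbps):
      return bus_bits / 8 * data_rate_gbps  # GB/s

  print(bandwidth_gbs(256, 14))  # 8GB RTX 3070, 14 Gbps GDDR6: 448 GB/s
  print(bandwidth_gbs(128, 18))  # hypothetical 8GB 128-bit part, 18 Gbps GDDR6: 288 GB/s

The much larger L2 cache on the new chips is supposed to cover that gap, but only when the working set fits in it.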
 
I strongly agree with the comment about DLSS 3.0.

They're not real frames, and they introduce a delay: the last rendered frame has to be held back so the interpolated frame can be inserted in between, which adds roughly half a frame of lag.

A lot of players with lower-end GPUs play at relatively low, or very low, framerates.

For someone with a 4080 playing at 120 fps, a half-frame delay is not a big deal; it's 1/240th of a second. But for someone with a low-end GPU playing at 30 fps or below, it is a big deal. It's extra lag you simply don't want when your framerate is already low.

The lower your framerate, the more milliseconds of lag DLSS 3.0 introduces. When a game temporarily drops to 10 fps and you delay that latest frame even more, response time just tanks. It's like putting chains on an already struggling GPU.

At 30 fps, DLSS 3.0 adds 1/60th of a second of input delay. At a struggling 10 fps, it adds 1/20th of a second; that's an extra 50 milliseconds of lag, and you're definitely going to notice it, because stutters in games will seem much worse.
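
Here's that arithmetic in one place, using the same half-a-frame model from above (the exact added latency in practice depends on the game and on Reflex, so treat these as illustrative figures):

  # Added input lag under a simple "hold back half a frame" model of frame generation.
  def added_lag_ms(base_fps, frames_held=0.5):
      return frames_held / base_fps * 1000  # milliseconds

  for fps in (120, 60, 30, 10):
      print(f"{fps:>3} fps -> ~{added_lag_ms(fps):.1f} ms of extra lag")
  # 120 fps -> ~4.2 ms, 60 fps -> ~8.3 ms, 30 fps -> ~16.7 ms, 10 fps -> ~50.0 ms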

I truly hope Nvidia is not going to roll out the "DLSS 3.0" fanfare if they make a desktop 4050 or 4060, because the extra bit of lag DLSS 3.0 introduces is negligible at 120 fps but is a real problem at 30 fps and below.
 
From my test suite, I can assure you that Far Cry 6, Forza Horizon 5, Total War: Warhammer 3, and Watch Dogs Legion all see performance fall off at 4K with 8GB, sometimes badly. Those are just four games out of 15. At 1440p, I think most of them do okay, but I know a few other games in the past year have pushed beyond 10GB. I think Resident Evil Village and Godfall did, for example. And yes, you can turn down some settings to get around this, but the point is that we've got 12GB on a 3060 right now, and that feels like it should be the minimum for a mainstream GPU going forward. Anything using AD106 will presumably be limited to 8GB... the same 8GB that we had standard on the R9 290/290X and GTX 1070 over six years ago.
Yeah, but you're unlikely to have the graphics power to run any of those games well at 4K with a lower-end 40-series card, which is why I specified 1080p (though I'd bet most will be 1440p capable). We're probably still going to see 6GB desktop cards too, which just shouldn't be happening anymore. I certainly agree it's a problem, and it is only going to get worse as we move along.

As far as Steam goes, it tracks a very large group of users, many of whom only play on older laptops, running stuff like DOTA and CSGO that will run on old hardware. The fact that the top 12 is dominated by lower-cost GPUs isn't at all surprising; Fortnite hardware stats would probably look the same. But there are also roughly 5% of gamers that do have GPUs with more than 8GB, and more critically, most GPU models don't have more than 8GB. Like, the list of hardware with more than 8GB currently consists of:

RTX 4090
RTX 4080
RTX 4070 Ti
RTX 3090 Ti
RTX 3090
RTX 3080 Ti
RTX 3080 (both 12GB and 10GB)
RTX 3060
RTX 2080 Ti
GTX 1080 Ti
RX 7900 XTX
RX 7900 XT
RX 6950 XT
RX 6900 XT
RX 6800 XT
RX 6800
RX 6750 XT
RX 6700 XT
RX 6700
Radeon VII
Arc A770 16GB

Of those, only the 3060 and 6700 series are remotely mainstream, and Arc if we're going to count that. But the point is we should be moving forward. Going from 8GB on a 256-bit interface to 8GB on a 128-bit interface will at times be a step back, or at best a step sideways.
Do you think this capacity issue with a narrow bus could be solved with the GDDR stacking technology Samsung announced a few weeks back?
 
What Nvidia is doing now is no different from what AMD did with RDNA2. The flagship and high-end cards (RX 6800 series) are great performers; the moment you go down to the mid range, the wheels start falling off.
Similarities:
  1. Flagship to mid-range reduction in specs = 50% cut to cores, cache, and memory bus
  2. High dependency on cache - Because of the narrower memory bus, there is a heavy reliance on the cache to maintain healthy memory bandwidth. That also means the card will perform well at its target resolution. Not that it cannot run at a higher resolution, but there may be a heavy performance penalty. This is apparent in the RX 6900/6800 falling behind their competition at 4K, and it is the same with the RTX 4070 Ti, which tends to lose some of its competitive advantage at 1440p (in some games where it has a healthy lead over the RX 7900 XT at 1080p), and more obviously at 4K (see the rough sketch after this list).
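
Here is a rough sketch of that cache dependence. The hit rates are made-up illustrative numbers, not vendor data; the point is only that a big cache multiplies effective bandwidth by roughly 1/(1 - hit rate), and the hit rate drops as the working set grows at higher resolutions.

  # Toy model: only cache misses go to DRAM, so the achievable request
  # bandwidth is roughly dram_bw / (1 - hit_rate). Hit rates are guesses.
  def effective_bandwidth_gbs(dram_gbs, hit_rate):
      return dram_gbs / (1.0 - hit_rate)

  dram_gbs = 288  # e.g. a 128-bit bus at 18 Gbps GDDR6
  for res, hit in (("1080p", 0.75), ("1440p", 0.65), ("4K", 0.50)):
      print(f"{res}: ~{effective_bandwidth_gbs(dram_gbs, hit):.0f} GB/s effective")
  # 1080p: ~1152 GB/s, 1440p: ~823 GB/s, 4K: ~576 GB/s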

I just hope Nvidia doesn't repeat the RX 6500 XT mistake of dropping the memory bus to 64-bit for the entry-level cards. If they do, I think they should bring back GTX, rather than RTX, branding. 64-bit with RT just doesn't make sense: it would be RT capable, but an RT slideshow.
 
I will say they should show real fps numbers and show DLSS as a secondary number. Until now they've always shown true fps, but DLSS, while a good idea, feels like a very secondary technology, i.e. for when I want higher fps than my card can provide. Furthermore, locking DLSS 3.0 to the 40-series isn't very fair. For crying out loud, AMD even lets you use FSR on older Nvidia GPUs.

I did buy a 3080; truth be told, my intention had been to buy a 6900 XT or 6800 XT or above. Since I bought the 3080 used, I didn't feel as bad about it. It's a good card, but if I'm truthful: yes, I have a FreeSync monitor, but with the 3080 every once in a while I get a slight flicker in a loading menu. It's not a huge deal, but for everyone saying how much better Nvidia drivers are, I don't remember ever having that on my previous AMD card.

It does feel like the 50-series should be 8GB, the 60/70-series 12GB, then 16GB or above in the rest of the stack. On the 50-series, if they are going to attempt charging $500... 8GB, really? Don't think they won't, either. If I remember right, 3050s were $350?

It just seems like the lower cards in the stack should be further along.
 
Good read, solid points, and I very much agree. It just goes to show how badly Nvidia is treating its gaming market, and, to a lesser extent, how AMD is too.

I had heard rumors not long after Ampere launched that Nvidia had plans to create whole new tiers of GPU performance and cost with the 4000 series. 2.3-2.5G (may have been higher) at the high end if I recall, about what we're paying for the 4090/4080/4070 Ti class cards. That we'd basically get a new high end with performance like we had never seen and prices to match. But more importantly, the needle wasn't going to move at the low end to mid range nearly as much, if at all, and those tiers would be basically stagnant performance-wise. I remember thinking at the time that it was insane, that there was no way the market would accept it. And now I am watching it play out exactly like that. The worst part of the rumor at the time was that the 5000 series was supposed to be even worse in this regard if everything went to plan with the 4000 series. I wish I could find that article/video now, I honestly forget the format, but it was so close to what we're seeing that it's scary. Which makes me wonder if we'll see a 4090 Ti/Titan slot in at 2,300-2,500 yet, and if we will indeed see RTX 4060s that are barely better than their 3060 counterparts, with the 4050 being basically equal to the 3050.

If anyone knows the article/video I am speaking of, please message me or post it in here. Some of my deets might be a hair off, but if this sounds familiar and you know where I can find it, please point me to it so I can share it with others as a source and not a random memory/anecdote. lol.
 
I will say they should show real fps numbers and show DLSS as a secondary number. Until now they've always shown true fps, but DLSS, while a good idea, feels like a very secondary technology, i.e. for when I want higher fps than my card can provide. Furthermore, locking DLSS 3.0 to the 40-series isn't very fair. For crying out loud, AMD even lets you use FSR on older Nvidia GPUs.

I did buy a 3080; truth be told, my intention had been to buy a 6900 XT or 6800 XT or above. Since I bought the 3080 used, I didn't feel as bad about it. It's a good card, but if I'm truthful: yes, I have a FreeSync monitor, but with the 3080 every once in a while I get a slight flicker in a loading menu. It's not a huge deal, but for everyone saying how much better Nvidia drivers are, I don't remember ever having that on my previous AMD card.

It does feel like the 50-series should be 8GB, the 60/70-series 12GB, then 16GB or above in the rest of the stack. On the 50-series, if they are going to attempt charging $500... 8GB, really? Don't think they won't, either. If I remember right, 3050s were $350?

It just seems like the lower cards in the stack should be further along.

DLSS should not be shown at all in benchmarks unless directly comparing DLSS to older versions of DLSS or to other upscaling tech like FSR. Keep things as apples-to-apples as possible. Tom's and most sites do a good job of that; marketing/presentation slides, not so much.
 
I like AMD, but their Ryzen 7000 proves that if they could get away with what Nvidia is doing, they'd likely try, sadly. I always bought AMD in the past as the value option, but they are getting to be as bad as Intel. Truth be told, Intel has had compelling values lately. I may build an AM5 rig around the 8000-9000 series, though I may take more of a look at Intel CPUs than before. Hopefully Intel can fix their GPU stuff by the time of my next GPU upgrade.
 
DLSS should not be shown at all in benchmarks unless directly comparing DLSS to older versions of DLSS or to other upscaling tech like FSR. Keep things as apples-to-apples as possible. Tom's and most sites do a good job of that; marketing/presentation slides, not so much.
I agree. AI-generated frames shouldn't count in raw performance benchmarks, especially when the AI algorithms involved are known to cause significant artifacting, ghosting, fuzziness, and other issues.
 
Three years down the road, all the AAA games will still be console ports from the current generation, so what delivers 70 native fps today will probably still deliver 60 native fps then. My GTX 980 lasted through a whole console generation, or four PC GPU generations. When was the last AAA game that required the latest and greatest GPU just to run? My GPU upgrade decisions going forward will depend purely on AI compute and CG rendering.
 
Isn't this just because the memory is available? Many games simply try to saturate graphics memory with textures and shader code; it's there, so why not use it? A 12GB GPU might show 12GB in use, but that doesn't mean the game can't run with 8GB.
Bingo! And even scaling with resolution does not heavily impact memory requirements. Textures are textures: you will be loading the same textures regardless of render resolution at the same game settings (and if you scale back texture resolution at higher render resolutions, if that's even an option the game exposes, you actually reduce the memory footprint!), and the same goes for geometry, so it's the buffers that scale with render resolution. Take 1080p to 4K, for example: buffer size quadruples, but even if we assume a good 10 buffers (for depth, normals, Z, diffuse, and whatever other buffers your render pipeline involves) and 32bpp for each buffer, that goes from ~79MB at 1080p to ~316MB at 4K. Not a huge impact on total vRAM usage.
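
A quick check of that buffer arithmetic (assuming the same 10 full-resolution render targets at 32 bits per pixel as in the paragraph above):

  # Memory for N full-screen render targets at 4 bytes per pixel.
  def buffer_mib(width, height, targets=10, bytes_per_pixel=4):
      return width * height * bytes_per_pixel * targets / 2**20

  print(f"1080p: {buffer_mib(1920, 1080):.0f} MiB")  # ~79 MiB
  print(f"4K:    {buffer_mib(3840, 2160):.0f} MiB")  # ~316 MiB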

The vast majority of vRAM is not taken up by buffers or active-use textures and geometry, but by opportunistically cached textures and geometry from the rest of the level that is crammed into any spare vRAM and overwritten (with zero performance impact) if/when actual live data needs that space. That opportunistically cached data may never make its way on screen before being overwritten, but any good engine should be trying to cache it anyway when the PCIe bus is not otherwise occupied and there is spare vRAM, because there is zero penalty from doing so and it may have a small chance of avoiding a cache miss and memory or drive read later. As DirectStorage moves from something individual developers implement to a commonly available API, even that will become less of a necessity as access overheads from out-of-vRAM data are reduced.

When you see a game 'use' large quantities of vRAM, the amount used is almost always what the game has reached by running out of data to cache for the level/chunk loaded, not the amount of data it actually needs for rendering.
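
To illustrate the policy described above, here is a deliberately simplified sketch (not any real engine's code): speculative cache entries only fill otherwise-spare vRAM and are evicted for free the moment live data needs the space.

  # Toy model of opportunistic vRAM caching: speculative entries fill spare
  # memory and are discarded at zero cost whenever live data needs room.
  class VramCache:
      def __init__(self, capacity_mb):
          self.capacity = capacity_mb
          self.live = {}         # assets needed for rendering right now
          self.speculative = {}  # assets cached "just in case"

      def used(self):
          return sum(self.live.values()) + sum(self.speculative.values())

      def prefetch(self, asset, size_mb):
          # Cache speculatively only if there is spare room; never evict for it.
          if self.used() + size_mb <= self.capacity:
              self.speculative[asset] = size_mb

      def require(self, asset, size_mb):
          # Live data always wins: a speculative hit is promoted for free,
          # otherwise speculative entries are dropped until the asset fits.
          if asset in self.speculative:
              self.live[asset] = self.speculative.pop(asset)
              return
          while self.used() + size_mb > self.capacity and self.speculative:
              self.speculative.popitem()
          self.live[asset] = size_mb  # a real engine would stream/spill if still short

  cache = VramCache(capacity_mb=8192)
  cache.prefetch("distant_level_textures", 3000)  # uses spare vRAM, costs nothing
  cache.require("player_area_textures", 6000)     # evicts the speculative entry to fit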