News GeForce RTX 3050 refuses to die as Nvidia plans fifth iteration of its 2022 budget GPU — new Ada Lovelace-powered part suggests the name could even...

I needed to upgrade my Nvidia GeForce 690 earlier this year. I figured I'd be waiting a long time for the Nvidia GeForce RTX 5080 I wanted at $1,000. So, I purchased a GeForce RTX 3050 6GB for $180 to hold me over for a few months. I'm running 4K, 10-bit, 144 Hz with it. I kind of feel like I could keep on using it for a long time.
 
So, I purchased a GeForce RTX 3050 6GB for $180 to hold me over for a few months. I'm running 4K, 10-bit, 144 Hz with it. I kind of feel like I could keep on using it for a long time.
Oh, desktop graphics alone doesn't take much from a GPU.

I have a little AMD Polaris RX 550 from 2017 running a 1440p monitor @ 144 Hz. Even running Google Earth at max quality is nice and smooth on it!

At work, I'm still using a GTX 1050 Ti on a 4k monitor @ 60 Hz. Again, it's more than enough for desktop graphics.
 
The 3050 is the new 1030 that never dies, lol.
The bizarre thing is how they're using Ada silicon in an RTX 3000-series part that previously only ever used Ampere. I'm pretty sure they never did that with the GT 1030.

Why are they doing it?? Are they trying to avoid price erosion of the remaining RTX 4000 cards on the market? Are they somehow trying to capitalize on the RTX 3050's name recognition?
 
The bizarre thing is how they're using Ada silicon in an RTX 3000-series part that previously only ever used Ampere. I'm pretty sure they never did that with the GT 1030.

Why are they doing it?? Are they trying to avoid price erosion of the remaining RTX 4000 cards on the market? Are they somehow trying to capitalize on the RTX 3050's name recognition?

I believe these parts could be old laptop chips, kind of like what AMD did with the RX 6500 XT and 6400.

If I were to guess, they're holding the bottom of the lineup with whatever they have on hand. They want to keep the 5000 series above that £200 cap; they don't want to drop below it. Basically, you'll be fine with the last two generations if you're in that £100-200 range.
 
I believe these parts could be old laptop chips, kind of like what AMD did with the RX 6500 XT and 6400.
Those weren't old, just perhaps not originally planned for the desktop. However, they were RDNA2, just like the rest of the RX 6000 lineup. So, not an example of mixing generations, much less re-releasing an existing model number with newer-generation silicon!

If I were to guess, they're holding the bottom of the lineup with whatever they have on hand. They want to keep the 5000 series above that £200 cap; they don't want to drop below it. Basically, you'll be fine with the last two generations if you're in that £100-200 range.
Okay, but AD106 was RTX 4000 generation. So, it's not eroding RTX 5000 pricing.

I just think this is going to be really confusing when they want to end driver support for the RTX 3000 series. They're going to need an asterisk on the RTX 3050A.
 
Those weren't old, just perhaps not originally planned for the desktop. However, they were RDNA2, just like the rest of the RX 6000 lineup. So, not an example of mixing generations, much less re-releasing an existing model number with newer-generation silicon!


Okay, but AD106 was RTX 4000 generation. So, it's not eroding RTX 5000 pricing.

I just think this is going to be really confusing when they want to end driver support for the RTX 3000 series. They're going to need an asterisk on the RTX 3050A.

It's to inflate the pricing of the 5000 series; they don't want to have to drop the lower-end 5000 series to under the £200 mark. With AD106, they may have a lot of that silicon left that didn't make the cut.
 
Why are they doing it?? Are they trying to avoid price erosion of the remaining RTX 4000 cards on the market? Are they somehow trying to capitalize on the RTX 3050's name recognition?
The naming does seem very odd. I understand shifting from GA to AD, as it likely all comes down to money, but I'm not sure why they wouldn't just name it a 4050 and reduce the specs a bit. They could even call it a 4030/4040/etc., since there is nothing being produced for those at the moment, but maybe there are plans for those nameplates.
 
Please, if they release this as the spiritual replacement of the GTX 1050/750 (Ti), where it's below $150, requires no additional power connectors, and has single-slot and/or half-height options, it would actually be great. AV1 encode support would be a cherry on top. I also think the naming scheme is weird, but it can be good if priced well (probably not, though).
 
I assume this 3050A is a 1/2 cut down AD106 because the 4060 is already a 2/3 chip compared to the 4070 Mobile.

(Shaders / TMUs / ROPs)
4070M: 4608 / 144 / 48
4060Ti: 4352 / 136 / 48
4060: 3072 / 96 / 48

The shoes it fills aren't big, and I doubt Nvidia would bother delivering more than required.
3050 8GB: 2560 / 80 / 32
3050 OEM: 2304 / 72 / 32
3050 6GB: 2304 / 72 / 32
3050 4GB: 2048 / 64 /32
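
A quick sanity check of those fractions, using only the shader counts quoted above (a rough sketch in Python; the 4070 Mobile is assumed here to be the fully enabled AD106, and none of these are official die specs):

full_ad106 = 4608        # 4070 Mobile shader count, assumed to be the full AD106
rtx_4060 = 3072
rtx_3050_6gb = 2304      # also the count a literal "1/2 AD106" part would land on

print(rtx_4060 / full_ad106)      # ~0.67 -> the "2/3 chip" comparison
print(rtx_3050_6gb / full_ad106)  # 0.50  -> a 1/2 cut-down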
 
Please, if they release this as the spiritual replacement of the GTX 1050/750 (Ti), where it's below $150, requires no additional power connectors, and has single-slot and/or half-height options, it would actually be great. AV1 encode support would be a cherry on top. I also think the naming scheme is weird, but it can be good if priced well (probably not, though).
Yes, this product could be great if priced low, but we know little about it. We don't even know if it's mobile or desktop. Interesting though.

I assume this 3050A is a 1/2 cut down AD106 because the 4060 is already a 2/3 chip compared to the 4070 Mobile.
4060 desktop and mobile do not use AD106. They are both fully enabled AD107 (3072 CUDA cores).

4050 mobile is 2560 CUDA cores (83% enabled AD107). 96-bit bus and 6 GB.

I think the best case scenario here is that this 3050A product also uses 2560 cores, but 128-bit and 8 GB. This would match the cores and VRAM of the full RTX 3050 8 GB, but should probably be faster. Maybe not if the TDP is 75W.
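
For what it's worth, those numbers line up with some quick arithmetic (a rough sketch, assuming the usual 2 GB / 16 Gbit GDDR6 chip per 32-bit memory channel):

ad107_full = 3072
rtx_4050m = 2560
print(rtx_4050m / ad107_full)     # ~0.83 -> the "83% enabled" figure

gb_per_channel = 2                # one 2 GB (16 Gbit) GDDR6 chip per 32-bit channel
print(96 // 32 * gb_per_channel)  # 6 GB on a 96-bit bus (4050 mobile)
print(128 // 32 * gb_per_channel) # 8 GB on a 128-bit bus (the best-case 3050A above)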

The name could be to emphasize its budget nature, but if it's a desktop card they should really just call it RTX 4050. Weren't there certain DLSS features that 30-series didn't get? Maybe that's the reason for not calling it a 40-series card. They may be unwilling to allow it to act as a full 40-series card. This might especially be the case if it has 6 GB and it's not practical for DLSS FG or something, despite being Lovelace.
 
Right, I looked it up and DLSS 3.0 frame generation is exclusive to 40-series. So maybe there's something about it that makes it unsuitable for that feature. But if that were true, it should also be true for RTX 4050 mobile, which is currently the only Lovelace card with less than 8 GB of VRAM, and the lowest number of shader cores, TMUs, RT cores, and tensor cores. Notebookcheck says it supports DLSS 3.0 frame generation.

So my theory doesn't make that much sense to me. I think it's just another dumb name from Nvidia, and indicates a (hopefully) low price point.
 
RX 580. For around $100 new, all day long. Murders the 1030 with ease.
RX 5700 XT. For around $200 new. Handily beats the 3050.

The RX 580 can't be bought new and doesn't have driver support anymore; even then, its use cases are limited. The good thing about the 3050 and the other 50- and 30-class cards is that they're quite compact yet powerful enough.

AMD doesn't have any new cards in that range; offering one would be a start if they want goodwill and market share.

I've yet to see a 5700 XT new. There are a few 6000-series cards left in Europe, but that's a small puddle.

And they need a 9050-series card at the bottom end to do damage.

Intel in the UK has the B570 and A750 under £200.