Yes: less R&D plus lower material and manufacturing costs equals a higher markup and profit, and the card costs less to make.
> The mediocre performance uplift from Nvidia is to be expected, in my opinion. It is based on the same TSMC 4 nm node, hence there is very little room to improve on the hardware. The flagship RTX 5090 is really just pushing the boundaries of the existing node, and the rest of the segment sees very minor hardware improvements other than the use of GDDR7, which itself is not going to bring a material performance uplift. Furthermore, Nvidia's focus on AI hardware means the gaming capabilities of Blackwell were likely not prioritized.

This is the main point, I believe. For the 5090, if one removed those neural cores (or whatever they are called), normal raster performance could likely see a significantly bigger bump. And for the lower tiers, that's the pure market segmentation they've been doing all these years: "let us chop it up more and give them less for the same or more money."
> there is very little room to improve on the hardware

Unless they invent and implement a different and fundamentally much more efficient GPU architecture, one that can actually process at least two to four times as much information with the same number of transistors at the same frequency. Of course, it must also remain compatible with the rest of the hardware in our computers, or at least such compatibility must be emulated without major losses. How? I don't know. 😀
> I admire the addition of what FG can do in the review as a side note, but that's for the sake of knowing what it is capable of and its drawbacks. As some reviews on YT showed, while FG gets the average FPS up by 100+, the 0.1% low stays the same, at least in some games, so it only smooths out the majority of the experience; the stutters still occur, unlike lowering the settings.

I don't want to turn this thread into a car engine debate, so I will keep it short: turbo lag has long been addressed and is a non-issue in today's vehicles. The likes of the 488 Pista and 911 Turbo S are good examples of that, and even less expensive, more accessible cars from Audi/VW and Ford have managed to reduce the lag significantly.
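For anyone curious how those two numbers relate, here's a minimal sketch (made-up frame times; percentile conventions vary between review tools) of how average FPS and the 0.1% low are derived from a frame-time log, and why the average can look great while the hitches survive:

```python
# Minimal sketch: average FPS vs "0.1% low" from a frame-time trace.
# Frame times are in milliseconds; the numbers below are made up.

def avg_fps(frame_times_ms):
    return 1000.0 * len(frame_times_ms) / sum(frame_times_ms)

def low_fps(frame_times_ms, percent=0.1):
    # "0.1% low": FPS equivalent of the slowest 0.1% of frames.
    worst = sorted(frame_times_ms, reverse=True)
    n = max(1, int(len(worst) * percent / 100))
    return 1000.0 / (sum(worst[:n]) / n)

# 10,000 smooth 10 ms frames (~100 FPS) with ten 100 ms stutters mixed in.
trace = [10.0] * 10_000 + [100.0] * 10

print(f"avg: {avg_fps(trace):.0f} FPS, 0.1% low: {low_fps(trace):.0f} FPS")
# avg: ~99 FPS, 0.1% low: 10 FPS -- the average hides the hitches,
# which is how a doubled FG average can coexist with unchanged lows.
```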
And the ICE comparison is invalid: in car territory, turbo engines were regarded as inferior to NA engines when they produced roughly similar horsepower, because turbos have turbo lag, which introduces latency. Racing-car turbos dump extra fuel to burn in the exhaust to counter this issue, and the likes of the SF90 even use electric motors to fill that gap, so it's no longer NA vs. turbo but turbo plus other technology.
What I meant was not excluding FG as a whole, but don't benchmark as if the cards are massively in the lead without the artefacts; treat it as a side bonus among "other features."
> I'm not sure what other avenues are left that could be explored, but upscaling and framegen are boring stagnation.

So innovation that does not conform to some dogma is heresy, and not innovation?
> I don't want to turn this thread into a car engine debate, so I will keep it short: turbo lag has long been addressed and is a non-issue in today's vehicles. [...]

There's a reason the latest 911 GT3 and GT3 RS use NA engines and not turbos, yet still dominate the Nürburgring... but yeah, let's not derail into the engine analogy.
> The gist of it is that new tech is introduced with obvious drawbacks in the beginning, and people with traditional beliefs resist it, some even denounce it, but over the years the tech evolves and manages to resolve most of its drawbacks, becoming an overall equal or even better alternative.

No fundamental objection, but with each new tech one has to be cautious about a few things:
> Is there nothing more that can be done to improve ray-tracing performance, apart from adding more compute units for it? No ideas for bounding algorithms, improvements to the hardware design, potential for moving more of the work to the GPU?

Such ideas exist, but I'm not sure when they will be realized in real hardware or in a software processor.
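To make "bounding algorithms" concrete: the core trick in every RT accelerator is testing rays against cheap axis-aligned boxes before touching the triangles (or child boxes of a BVH) inside. A minimal sketch of that classic slab test, purely illustrative; `inv_dir` is the per-component reciprocal of the ray direction:

```python
# Slab test: does a ray hit an axis-aligned bounding box?
# Only on a hit would you descend into the geometry inside.

def ray_hits_aabb(origin, inv_dir, box_min, box_max):
    t_near, t_far = 0.0, float("inf")
    for axis in range(3):
        t1 = (box_min[axis] - origin[axis]) * inv_dir[axis]
        t2 = (box_max[axis] - origin[axis]) * inv_dir[axis]
        t_near = max(t_near, min(t1, t2))  # latest entry across slabs
        t_far = min(t_far, max(t1, t2))    # earliest exit across slabs
    return t_near <= t_far

# Ray from the origin along +X; a box straddling the X axis ahead of it.
print(ray_hits_aabb((0, 0, 0), (1.0, float("inf"), float("inf")),
                    (1, -1, -1), (2, 1, 1)))  # True
```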
> (me, doing old-codger voice) "Yep, thems was different times. Oh, Pascal was king o' the hill, but hot dang, if it weren't that 'bout everyone yuh jawed with had hold of a Polaris, tell yuh what."

Dammit, now I hear that voice in my every thought. 😛
> I'm not sure what other avenues are left that could be explored, but upscaling and framegen are boring stagnation. The sole focus right now seems to be increasing power consumption and generating ML slop; this is not innovation. Sure, the ML models can come up with a whole lot of stuff, but it's all based on regurgitating their training data.

At least you're right on the bold part.
> 2) Things like upscaling are a much easier tech to solve, but after more than a decade, DLSS performance mode still renders internally at only half resolution and still has noticeable artefacts.
> 3) Until the tech is finally indistinguishable from native rendering, promoting and selling a product mostly or solely on that tech's performance is simply bad practice: by the time the tech matures, it isn't supported by what you purchased. The RTX 2080 basically cannot run any game smoothly with ray tracing turned on; by the time it was finally OK on the 3080, the 2080 was thoroughly outdated, and then when FG arrived with DLSS3, it didn't apply to the RTX 3000s and 2000s.

You just got DLSS4 released in the driver yesterday for all Nvidia GPUs from Series 20 onward, including the new transformer model that increases quality a lot at a minor performance cost.
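For scale, the commonly documented DLSS render-scale factors (which are per axis, so "half resolution" means a quarter of the pixels) work out like this at 4K; treat the exact values as approximate:

```python
# Per-axis DLSS render-scale presets (commonly documented values).
presets = {"Quality": 2 / 3, "Balanced": 0.58,
           "Performance": 1 / 2, "Ultra Performance": 1 / 3}

out_w, out_h = 3840, 2160  # 4K output
for name, s in presets.items():
    w, h = round(out_w * s), round(out_h * s)
    share = 100 * (w * h) / (out_w * out_h)
    print(f"{name:17s} {w}x{h}  (~{share:.0f}% of output pixels)")
# Performance mode at 4K renders 1920x1080 internally -- only a
# quarter of the output pixels, hence the lingering artefacts.
```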
> Always get the Studio drivers, not the hot-trash Game Ready ones. Studio drivers are more stable, and they aren't filled with bloatware.

I'm not sure what the actual difference is; all those menu choices Nvidia has you go through (GPU family, mobile/desktop, OS generation, etc.) don't usually result in a different package, and mostly just serve for data collection on their side.
> You just got DLSS4 released in the driver yesterday for all Nvidia GPUs from Series 20 onward, including the new transformer model that increases quality a lot at a minor performance cost.

Man, you got my point wrong. What I'm saying is that it is still not mature and is obviously a QoL bonus, not something worth counting as the next saviour of gaming. Yes, I fully understand that "it stands to reason they don't have all the capabilities or power of current or last gen," and that's exactly the point: paying $1,000-$2,000 or even more for something without a major improvement in raw performance, with only this new DLSS4, isn't justified. It is a QoL bonus at best, and by the time the tech matures to the point where FG really works magically with no added latency, your current two-grand 5090 will have long been obsolete.
What's the point of griping about DLSS3 then?
And yes, some features may not be available on Series 20 and 30; that's life. Those GPUs are two and three generations behind, and it stands to reason they don't have all the capabilities or power of the current or last gen.
> Man, you got my point wrong. What I'm saying is that it is still not mature and is obviously a QoL bonus, not something worth counting as the next saviour of gaming. [...]

I agree that Blackwell feels more like a refinement of the existing architecture than anything majorly new.
> I agree that Blackwell feels more like a refinement of the existing architecture than anything majorly new.

I do agree on most of it, as raster has been hitting walls for quite some time, though the gen-on-gen improvement in pure raster had been quite significant until the 50 series. As for DLSS, I am fine with DLSS/FSR for the upscaling part, especially in action games, where I don't notice the artefacts; and in a slow-paced flight sim where you focus on the flight display, upscaling or DLAA is great. But FG, and now MFG, is heading in the wrong direction IMO, especially when Nvidia is trying to push the same dollar-per-frame deal on us each gen. They have been riding the AI train way too much for their consumer products; I have no idea how much more raw performance we would or wouldn't get if they ditched the AI neural cores for something similar to good old shaders.
But honestly, what exactly has happened in the last few gens since Series 20? It was one iterative upgrade after another in everything except RT/ML/AI capabilities.
And it's not because dastardly Nvidia is trying to put us on the AI hype train, but because raster-based rendering is simply mature: all the low-hanging fruit there has long since been picked, and we're mostly riding on optimizations such as process shrinks and some architectural tweaks.
The fact that AMD, which until now had far less ML/AI focus, did not manage much better shows that raster-based rendering has hit serious diminishing returns.
I personally disagree that DLSS is merely "QoL": it is a very real and very solid performance boost, and with the DLSS4 transformer model it got a major quality boost too. I don't think it's fair to call it just "QoL" when you can turn it on and get an easy 20-30 FPS boost at almost no quality loss, with better latency as a byproduct of the increased FPS (and mind you, to be clear, I'm not talking about FG here). That is very real performance.
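On that latency byproduct, the arithmetic is worth spelling out, since it's the core difference from FG (illustrative numbers only):

```python
# Rough illustration of the latency byproduct: a real (non-FG) FPS
# gain shortens the frame interval itself. Numbers are made up.
for fps in (60, 85):  # e.g. before/after enabling the upscaler
    print(f"{fps} FPS -> {1000 / fps:.1f} ms per frame")
# 60 FPS -> 16.7 ms per frame
# 85 FPS -> 11.8 ms per frame
# Interpolated FG frames don't shorten this input-to-photon chain,
# which is why upscaling and FG behave differently latency-wise.
```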
I think people need to realize that "major improvements in raw perf" are not going to come easily on the raster side anymore. That field has simply been farmed out over the last two decades.
> I do agree on most of it, as raster has been hitting walls for quite some time, though the gen-on-gen improvement in pure raster had been quite significant until the 50 series. [...]

The raster improvements we have seen were mostly a product of stuffing in more transistors, thanks to a new process node every generation.
> The raster improvements we have seen were mostly a product of stuffing in more transistors, thanks to a new process node every generation.

We all know this, plain and simple, but also note that Nvidia has increased its margins since the 30 series; it was never 70% before, and we should also expect them not to milk gamers forever. They have the margin not to sell the 5080 on the die size they are using now, which historically would have been a 5070 at best, but that is off topic, tbf.
Blackwell is the first gen in a long while where we stayed on literally the same process node. So Nvidia went and broke the bank by stuffing even more transistors into the 5090, making it expensive as hell, while the others in the lineup did not get much of an increase at all.
Here's a quickie of how the x080 line went:

[Image: chart of x080 process nodes and transistor counts by generation]
And the 5080 is also TSMC 4 nm, at ~45.6 billion transistors. So this gen is effectively cursed with no process-node shrink.
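For reference, the public die specs behind that kind of chart are roughly these (transistor counts from memory of the public GPU databases, so treat them as approximate rather than exact); a quick sketch of the gen-on-gen growth:

```python
# Approximate public die specs for the x080 line (transistor counts
# in billions; roughly right rather than exact).
x080 = [("GTX 1080", "TSMC 16nm", 7.2),
        ("RTX 2080", "TSMC 12nm", 13.6),
        ("RTX 3080", "Samsung 8nm", 28.3),
        ("RTX 4080", "TSMC 4N", 45.9),
        ("RTX 5080", "TSMC 4N", 45.6)]

for (_, _, prev), (name, node, cur) in zip(x080, x080[1:]):
    print(f"{name} ({node}): {cur}B transistors, "
          f"{cur / prev - 1:+.0%} vs prior gen")
# The 5080 is the first x080 with essentially zero transistor growth --
# the "no node shrink" point in a nutshell.
```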
The only reason the 5090 shows a decent increase is that they stuffed about 20 billion more transistors on top of what the 4090 had, and in the process maxed out their new power connector and added another 500 bucks to the MSRP.
The gist of it is that before this gen, the massive performance boosts came mostly from new process nodes massively increasing transistor counts, rather than from the raster architecture itself getting massively better.
Blackwell is a reality check of what happens when the process node does not shrink. And we had better start getting used to it, because every new process node is going to be blood, tears, pain, and money from now on.
Series 60 will probably be 3 nm, and something tells me Series 70 might stay on an improved 3 nm too.
> We all know this, plain and simple, but also note that Nvidia has increased its margins since the 30 series; it was never 70% before, and we should also expect them not to milk gamers forever. [...]

As long as there is no actual competitor, they will milk all they can, all day long, 365 days a year.
> As long as there is no actual competitor, they will milk all they can, all day long, 365 days a year.

Yeah, I am sure they will, so fingers crossed they get some competition. And for the sake of being price-conscious, I don't buy into the "MFG saves all" BS; it has gone too far from useful into gimmick and snake-oil territory. I have to say again: I am not against the whole DLSS4 package, just the MFG part.
> Dammit, now I hear that voice in my every thought. 😛

(Stands proudly) "My work here is done!"
> Nvidia needed to modify its drivers to support the new Blackwell GPUs. It also recommended some changes in how people benchmark graphics cards, specifically related to MFG. Neither of these things appears to have been beneficial for Nvidia's latest GPUs, on the whole.

The idle power consumption seems totally insane: 90 W?? I'm using an RTX 3060 on Linux, and my idle GPU consumption is 14-15 W, and even that feels too high when the rest of the whole desktop system needs about the same while idling.
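If you want to sanity-check your own card's idle draw on Linux, the driver's standard nvidia-smi tool can report it; a minimal sketch, assuming nvidia-smi is on your PATH:

```python
# Query the GPU's current power draw via the standard nvidia-smi tool.
import subprocess

out = subprocess.run(
    ["nvidia-smi", "--query-gpu=name,power.draw", "--format=csv,noheader"],
    capture_output=True, text=True, check=True)
print(out.stdout.strip())  # e.g. "NVIDIA GeForce RTX 3060, 14.60 W"
```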
Nvidia's RTX 50-series drivers feel half-baked, focus too much on MFG