Nvidia's AD106 and AD107 graphics processors are small, but will they be cheap?
Alleged Nvidia AD106 and AD107 GPU Pics, Specs, Die Sizes Revealed : Read more
Easy: "because we can."I don't see how Nvidia could justify another price increase when generational performance declines.
> Easy: "because we can."

A dark age of tech is higher cost and lower performance compared to same-tier previous-gen parts. Are we entering a dark age of laptop GPU performance?
Not like AMD and Intel are a credible threat to Nvidia's stranglehold on the graphics market yet.
Good to see the full specifications! I'm concerned there's going to be a performance regression given the reduction in CUDA cores and memory interface. Some early leaks are showing RTX 3070, 3060, and 3050 chips outperforming RTX 4070, 4060, and 4050 chips, respectively. I don't see how Nvidia could justify another price increase when generational performance declines.
> Never saw this rumor. The 4070 Ti is already at 3090 level except at 4K. Somehow the 4070 will be slower than the 3070?

With Nvidia (and AMD) slotting in smaller dies with a narrower memory bus and, in many cases, less memory at the same marketing tier as the previous gen, there are bound to be cases where the new die performs worse than the one it is supposed to replace. I can definitely see why people are getting nervous about where this is going.
> Never saw this rumor. The 4070 Ti is already at 3090 level except at 4K. Somehow the 4070 will be slower than the 3070?

I wasn't clear that I meant laptops and not desktops. My bad. The performance regression rumors I've been reading are specific to laptop GPUs. Videocardz and random Twitter Chinese resellers have posted disappointing benchmarks for the 40-series mobile chips showing little generational improvement at 1080p. The potential exists for a regression at 1440p. I'm not following the desktop equivalents.
> With Nvidia (and AMD) slotting in smaller dies with a narrower memory bus and, in many cases, less memory at the same marketing tier as the previous gen, there are bound to be cases where the new die performs worse than the one it is supposed to replace. I can definitely see why people are getting nervous about where this is going.

There may be memory-limited edge cases for GPU compute, but I cannot recall any being found for gaming yet. For example, the 4070 Ti continues to perform around the 3090 level (usually between the 3090 and the 3090 Ti) despite having HALF the memory capacity, bus width, and bandwidth. The Ada architecture is clearly a lot less sensitive to memory performance than previous architectures.
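For anyone who wants to sanity-check the "half the bandwidth" part, the usual peak-bandwidth arithmetic is just per-pin data rate times bus width. The clock and bus figures below are the commonly published spec-sheet numbers, so treat them as approximate:

```python
# Peak GDDR bandwidth = per-pin data rate (Gbps) * bus width (bits) / 8.
# Figures are the commonly published spec-sheet numbers (approximate).
def peak_bandwidth_gbs(data_rate_gbps: float, bus_width_bits: int) -> float:
    return data_rate_gbps * bus_width_bits / 8

rtx_3090   = peak_bandwidth_gbs(19.5, 384)   # 24GB GDDR6X -> ~936 GB/s
rtx_4070ti = peak_bandwidth_gbs(21.0, 192)   # 12GB GDDR6X -> ~504 GB/s
print(f"3090: {rtx_3090:.0f} GB/s, 4070 Ti: {rtx_4070ti:.0f} GB/s, "
      f"ratio: {rtx_4070ti / rtx_3090:.2f}")  # ~0.54, i.e. roughly half
```

Capacity and bus width are exactly halved; bandwidth works out to a little over half because the 4070 Ti's memory is clocked slightly higher.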
> There may be memory-limited edge cases for GPU compute, but I cannot recall any being found for gaming yet.

Today's games, maybe, since they do have to target 4+ year old, sub-$400 MSRP hardware to get a decent-sized audience capable of running them decently well.
> Today's games, maybe, since they do have to target 4+ year old, sub-$400 MSRP hardware to get a decent-sized audience capable of running them decently well.

It's going to be many, many years before developers can start assuming more than 4/8GB as a GPU capacity baseline (i.e. when 12GB+ becomes the 'cheap card' baseline, which is not going to be any time soon). And at the same time, NVMe drive ubiquity can also be assumed, so asset streaming comes along to alleviate that potential bottleneck.
A few more years down the line, though, when games at near-max settings may commonly use more than 12GB worth of assets and buffers, the 3090 could age much better than the 4070 Ti.
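To make the asset-streaming point a couple of posts up a bit more concrete, here is a toy sketch of the residency idea behind DirectStorage-style streaming (nothing from any real engine, and all names are made up): keep only a budgeted working set of assets in VRAM, evict the least recently used ones, and pull evicted assets back from the NVMe drive when they are needed again.

```python
# Toy VRAM residency manager: a fixed budget, LRU eviction, streaming on miss.
from collections import OrderedDict

class ResidencySet:
    def __init__(self, budget_mb: int):
        self.budget_mb = budget_mb
        self.resident = OrderedDict()   # asset name -> size in MB, LRU order

    def request(self, name: str, size_mb: int) -> str:
        """Make an asset resident before the renderer needs it."""
        if name in self.resident:
            self.resident.move_to_end(name)          # already in VRAM, just touch it
            return "hit"
        while sum(self.resident.values()) + size_mb > self.budget_mb:
            self.resident.popitem(last=False)        # evict the least recently used asset
            # (a real engine would free the GPU allocation here)
        self.resident[name] = size_mb                # "stream in" from the NVMe drive
        return "streamed"

pool = ResidencySet(budget_mb=6144)                  # pretend only 6GB is spare for assets
print(pool.request("castle_walls_4k", 512))          # streamed
print(pool.request("castle_walls_4k", 512))          # hit
```

The trade-off the thread is arguing about is exactly the miss path: every eviction that later turns into a miss costs NVMe bandwidth and latency instead of a VRAM read.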
> It's going to be many, many years before developers can start assuming more than 4/8GB as a GPU capacity baseline (i.e. when 12GB+ becomes the 'cheap card' baseline, which is not going to be any time soon). And at the same time, NVMe drive ubiquity can also be assumed, so asset streaming comes along to alleviate that potential bottleneck.

Have you ever heard of "VR"?
> It's going to be many, many years before developers can start assuming more than 4/8GB as a GPU capacity baseline (i.e. when 12GB+ becomes the 'cheap card' baseline, which is not going to be any time soon). And at the same time, NVMe drive ubiquity can also be assumed, so asset streaming comes along to alleviate that potential bottleneck.

Looks like the part about "near-max settings" flew 100' over your head. We are talking RTX 3090 vs RTX 4070 Ti staying power here, not budget gaming.

Current games with graphics maxed out already push 10+GB of VRAM on Ultra, which earned Nvidia a fair amount of criticism from reviewers for being insufficiently future-proof.

Streaming assets from NVMe at 15GB/s, even including decompression gains, is no substitute for sufficient VRAM at 300+GB/s.
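Putting rough numbers on that last point, using the 15 GB/s and 300 GB/s figures quoted above against a 60 fps frame budget (the data sizes are made-up examples):

```python
# Time to move a given amount of asset data at NVMe-with-decompression speed
# (~15 GB/s) versus local VRAM speed (~300 GB/s), against a 60 fps frame budget.
FRAME_BUDGET_MS = 1000 / 60          # ~16.7 ms per frame

def transfer_ms(size_gb: float, rate_gb_per_s: float) -> float:
    return size_gb / rate_gb_per_s * 1000

for size_gb in (0.5, 2.0):           # made-up working-set sizes
    print(f"{size_gb:.1f} GB: NVMe {transfer_ms(size_gb, 15):.1f} ms vs "
          f"VRAM {transfer_ms(size_gb, 300):.1f} ms "
          f"(frame budget {FRAME_BUDGET_MS:.1f} ms)")
```

Half a gigabyte already costs roughly two frames of time over NVMe versus a couple of milliseconds out of VRAM, which is the core of the "no substitute" argument: streaming works for assets you need soon, not for assets you need this frame.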
> Future-proofing is one of the main enemies for hardware makers.

At least as far as the VRAM burden is concerned, budget buyers willing to compromise on details have the option of lowering texture resolutions to make things fit as games become more VRAM-hungry, so those GPUs can still get a decent useful life, albeit possibly a rung or two behind on the performance ladder from where they could have been.
> Have you ever heard of "VR"?

Yes, very well indeed, since before the DK1 was announced. There is nothing inherent to a low-latency rendering pipeline that requires large quantities of VRAM, and the popularity of standalone HMDs with both some of the highest panel resolutions and less than 10GB of total system RAM indicates that large VRAM is no more an inherent requirement than render resolution is (and render resolution itself only scales very loosely with VRAM requirements). The VRAM burden of current games is vastly overstated.
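As a rough illustration of why render resolution only loosely drives VRAM use: the render targets themselves are small compared with multi-GB texture pools. The formats and number of targets below are illustrative assumptions, not any particular engine's setup.

```python
# Size of a render target in MB = width * height * bytes per pixel.
def target_mb(width: int, height: int, bytes_per_pixel: int) -> float:
    return width * height * bytes_per_pixel / 2**20

# Illustrative 4K setup: one RGBA16F color buffer, one 32-bit depth buffer,
# and four extra 32-bit G-buffer targets.
color   = target_mb(3840, 2160, 8)        # ~63 MB
depth   = target_mb(3840, 2160, 4)        # ~32 MB
gbuffer = 4 * target_mb(3840, 2160, 4)    # ~127 MB
print(f"~{color + depth + gbuffer:.0f} MB for the whole 4K target set")
```

Even a fairly fat 4K target set lands around a couple of hundred MB, so resolution alone does not explain multi-gigabyte VRAM figures; asset pools do.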
> Looks like the part about "near-max settings" flew 100' over your head. We are talking RTX 3090 vs RTX 4070 Ti staying power here, not budget gaming.
>
> Current games with graphics maxed out already push 10+GB of VRAM on Ultra, which earned Nvidia a fair amount of criticism from reviewers for being insufficiently future-proof.
>
> Streaming assets from NVMe at 15GB/s, even including decompression gains, is no substitute for sufficient VRAM at 300+GB/s.

Remember that VRAM occupied != VRAM required, due entirely to opportunistic asset caching, and it's asset caching that DirectStorage and similar techniques target. The VRAM actually required for rendering is far lower, but identifying it is far more involved than seeing what number a process monitor spits out.
> Remember that VRAM occupied != VRAM required, due entirely to opportunistic asset caching, and it's asset caching that DirectStorage and similar techniques target. The VRAM actually required for rendering is far lower, but identifying it is far more involved than seeing what number a process monitor spits out.

It is simple: run a background task that hogs VRAM and see how much extra VRAM the background task can consume before the frame rate starts dropping.
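For anyone who wants to actually try that, here is a minimal sketch of such a background VRAM hog, assuming a CUDA card and the PyTorch and nvidia-ml-py (pynvml) packages are available. Run it alongside the game with an FPS overlay and note where the frame rate falls off; the chunk size and pacing are arbitrary choices.

```python
import time
import torch
import pynvml

CHUNK_MB = 512                 # extra VRAM to grab per step
PAUSE_S = 30                   # time to watch the game's frame rate between steps

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)

hogs = []                      # keep references so the allocations stay resident
while True:
    try:
        # One float32 element is 4 bytes, so each tensor pins ~CHUNK_MB of VRAM.
        hogs.append(torch.empty(CHUNK_MB * 1024 * 1024 // 4,
                                dtype=torch.float32, device="cuda"))
    except RuntimeError:       # PyTorch raises this once the GPU runs out of memory
        print("Out of VRAM, stopping.")
        break
    used_mb = pynvml.nvmlDeviceGetMemoryInfo(handle).used / 2**20
    print(f"hogging {len(hogs) * CHUNK_MB} MB extra, {used_mb:.0f} MB total in use "
          "-- check the game's frame rate now")
    time.sleep(PAUSE_S)
```

The point where the game's frame rate (especially the 1% lows) starts to drop tells you roughly how much of its occupied VRAM was opportunistic cache rather than a hard requirement.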
> This may be relevant to the VRAM topic: https://www.techpowerup.com/review/hogwarts-legacy-benchmark-test-performance-analysis/6.html

Long story short: if you want to play it mostly maxed out, you need 11GB of VRAM at the absolute minimum. The RTX 2080 Ti with 11GB of VRAM and the 12GB RTX 3060 completely destroy the 8GB 3070 Ti, and while the 10GB RTX 3080 performs similarly on average, its 1% lows get tanked too.
> Long story short: if you want to play it mostly maxed out, you need 11GB of VRAM at the absolute minimum. The RTX 2080 Ti with 11GB of VRAM and the 12GB RTX 3060 completely destroy the 8GB 3070 Ti, and while the 10GB RTX 3080 performs similarly on average, its 1% lows get tanked too.

That's not what the graphs show.
> Long story short: if you want to play it mostly maxed out, you need 11GB of VRAM at the absolute minimum. The RTX 2080 Ti with 11GB of VRAM and the 12GB RTX 3060 completely destroy the 8GB 3070 Ti, and while the 10GB RTX 3080 performs similarly on average, its 1% lows get tanked too.

That game is so horrendously programmed, with performance issues on such a scale across the entire GPU stack, that it's not even worth looking at right now. Also, IIRC that test was from pre-launch; it got a day-one patch that helped a lot, and community fixes as well. Besides, as stated directly above, your assessment isn't even correct...
8GB of VRAM isn't enough for anything more powerful than an RTX3050.