closs.sebastien
An article that big... just for saying that nobody knows / we know nothing yet and we must wait & see...
> Do people really expect NVIDIA to release a card which is a good 2x or even more the performance of the RTX2080Ti?

Why not? 7nm has more than double the density of 12nm, which certainly gives Nvidia the option to do so if they want to. From the next-gen console hype and leaks so far, it seems AMD may have sorted out its GPU power efficiency with RDNA2, and in that case, Nvidia may need to step things up.
> Why not? 7nm has more than double the density of 12nm, which certainly gives Nvidia the option to do so if they want to. From the next-gen console hype and leaks so far, it seems AMD may have sorted out its GPU power efficiency with RDNA2, and in that case, Nvidia may need to step things up.

Yes, I agree that 7nm is a huge jump from 12nm as a die shrink, and I also agree NVIDIA has to answer AMD's competition. But a 100% performance increase is a bit over the top. I highly doubt that either AMD or NVIDIA can leap that far ahead in one generation. When it comes to consoles, I only expect RTX2070-level performance, plus a lossless Ray-tracing implementation. As the lineup will consist only of Ray-tracing RTX cards, I expect them to launch an RTX3060 with performance equal to the RTX2070 Super at most, with a lossless Ray-tracing implementation. Similarly, I expect the RTX3080 to be on par with the RTX2080Ti, not 40%+ faster, and not the RTX3080Ti to be 100%+ faster, which would be 2x RTX2080Ti performance.
We used to get 30-40% annual GPU performance increases. I'd certainly welcome a substantial lurch forward for a change after a decade of not much new for 3-4 years at a time, mainly because GPU manufacturers have run out of fab-process headroom to cram in more transistors without increasing TDP, die size and cost by a stupid amount.
> But a 100% performance increase is a bit over the top.

Nvidia's CEO wants to push RT hard, and for that to happen, Huang needs GPUs able to hit 60+fps with RT on full at every resolution tier within reason. 100% more RT performance is roughly what Nvidia needs to raise RT from a gimmick people turn on to play in tourist mode, then turn off for actual gameplay due to the 40-50% RT performance hit, to something people may actually leave on.
> Nvidia's CEO wants to push RT hard, and for that to happen, Huang needs GPUs able to hit 60+fps with RT on full at every resolution tier within reason. 100% more RT performance is roughly what Nvidia needs to raise RT from a gimmick people turn on to play in tourist mode, then turn off for actual gameplay due to the 40-50% RT performance hit, to something people may actually leave on.

No, by "100% improvement" I was referring to raster performance. I do expect a Ray-tracing implementation that can give lossless performance, though. I think the crazy increase in RT cores can improve the experience to a great extent; I have heard that the lineup will have more than double the RT core count, which is a big jump.
Doubt Jensen wants his product's star feature going largely ignored two generations in a row.
> No, by "100% improvement" I was referring to raster performance.

Supporting more than double the RT performance likely requires substantial upgrades to raster too. You still need to calculate material properties for a given location on a surface (baseline color from textures, plus roughness, reflectivity, transmissivity, etc. maps) before RT can apply lighting. Doing reflections means doing the same for the surfaces that fall within each reflection's FoV, up to the maximum allowed number of nested reflections, so you may end up rendering substantial chunks of the scene, including otherwise hidden elements, from multiple different PoVs.
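To make the nested-reflection bookkeeping concrete, here's a minimal toy sketch of recursive reflection tracing with a capped nesting depth. All names (`Surface`, `trace`, `shade`) and the one-surface-per-reflection "scene" are illustrative assumptions, not any real renderer's API:

```python
# Toy sketch: each reflection re-shades another surface, recursion is
# capped at a maximum nesting depth. Illustrative only, not a real tracer.

MAX_REFLECTION_DEPTH = 2  # maximum allowed number of nested reflections

class Surface:
    def __init__(self, base_color, reflectivity):
        self.base_color = base_color      # from texture/material maps
        self.reflectivity = reflectivity  # 0.0 = matte, 1.0 = mirror

def shade(surface, reflected_color):
    # Blend the surface's own (rastered) material color with what its
    # reflection "sees" -- each reflection re-renders part of the scene.
    own, r = surface.base_color, surface.reflectivity
    return tuple(own[i] * (1 - r) + reflected_color[i] * r for i in range(3))

def trace(scene, surface_index, depth=0):
    surface = scene[surface_index]
    if depth >= MAX_REFLECTION_DEPTH or surface.reflectivity == 0.0:
        return surface.base_color  # stop recursing: no further reflections
    # The reflection's "FoV" here is just the next surface in a toy list;
    # a real tracer would intersect a reflected ray against the scene.
    next_index = (surface_index + 1) % len(scene)
    reflected = trace(scene, next_index, depth + 1)
    return shade(surface, reflected)

scene = [Surface((0.8, 0.2, 0.2), 0.5), Surface((0.2, 0.8, 0.2), 0.0)]
color = trace(scene, 0)  # first surface's color blended with its reflection
```

Note how the matte surface still has to be fully shaded just to feed the reflective one: that's the raster work that scales along with RT.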
> Supporting more than double the RT performance likely requires substantial upgrades to raster too: you still need to calculate material properties for a given location on a surface before RT can apply lighting ...

I mean, yeah, that is obvious: when they say the RTX3060 will have RTX2080Ti Ray-tracing performance, it must have at least that much or even greater raster performance. But they were actually referring to the RTX2080Ti's Ray-tracing performance, which was a big cut from its raster performance. I still can't accurately gauge the potential; we will only know when more info on the low end leaks. But the rumor that NVIDIA will use GA100 for the RTX3080Ti instead of GA102 is the insane part.
> Why not?

$$$
> 7nm has more than double the density of 12nm, which certainly gives Nvidia the option to do so if they want to.

As per the above point, it's surely more expensive per mm^2 than 12 nm.
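A quick back-of-envelope check of how those two effects trade off. All dollar figures below are illustrative assumptions for the sake of the arithmetic, not actual foundry pricing:

```python
# Crude die-cost comparison: 7nm wafers cost more, but >2x density means
# the same transistor budget fits in under half the die area.
# Dollar figures are ILLUSTRATIVE assumptions, not real foundry pricing.

wafer_cost = {"12nm": 4000.0, "7nm": 9000.0}  # assumed $ per 300mm wafer
wafer_area_mm2 = 3.14159 * (300 / 2) ** 2     # ~70,686 mm^2 (ignores edge loss)

def die_cost(node, die_mm2):
    dies_per_wafer = wafer_area_mm2 // die_mm2  # crude: no yield model
    return wafer_cost[node] / dies_per_wafer

cost_12 = die_cost("12nm", 750)  # e.g. a large 12nm die
cost_7 = die_cost("7nm", 350)    # same logic shrunk to 7nm density
```

Under these made-up numbers, cost per mm^2 more than doubles at 7nm, so the per-die cost ends up roughly a wash even with the smaller die; the shrink buys performance headroom, not cheapness.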
> We used to get 30-40% annual GPU performance increases. I'd certainly welcome a substantial lurch forward for a change

Well, I guess it's a matter of what you're used to. A 30-40% generational improvement would be insane for CPUs, yet you're not willing to accept it for a GPU?
> after a decade of not much new

Ray tracing, tensor cores, mesh shaders, tile-based rendering, new texture compression formats... I wouldn't say GPUs have failed to innovate. Hardly a generation has gone by, in quite a while, without significant new functionality.
> with a lossless Ray-tracing implementation.

What do you mean by this?
> Do you really think they can afford to switch twice as many transistors, at the same or higher clocks, in the same power budget?

AMD managed just about that much with Zen 2's CCDs, so yes.
> AMD managed just about that much with Zen 2's CCDs, so yes.

No, I disagree. I was very deliberate about this point.
> No, I disagree. I was very deliberate about this point.

Yes, that was my initial argument. I don't think NVIDIA is crazy enough to pull something like that off, because if they do, the price will also double: the RTX3080Ti would cost double the RTX2080Ti, or at least $2000, and the RTX3080 would cost as much as the RTX2080Ti. Pricing that high would kill off the point of calling them mainstream consumer cards. Even a price of $1200-1500 is already very high; anything above it is insanity and out of the question for many.
Simply having twice the transistors doesn't mean switching them all. In a CPU, most of the transistors are not switching on most clocks, because CPU cores are mostly serial. It's a corner case when you can actually keep all of the CPU's pipelines fully occupied, and then the CPU drops its clocks way down to keep power in check (or maybe not, but then power goes through the roof).
In a GPU, doubling shaders means almost doubling the real work being done by the GPU, since graphics is so massively parallel.
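That switching-activity argument can be put in rough numbers with the standard dynamic-power relation P ≈ α·C·V²·f (activity factor α, switched capacitance C, voltage V, frequency f). The activity factors below are illustrative assumptions, just to show the shape of the effect:

```python
# Dynamic power ~ alpha * C * V^2 * f, where alpha is the fraction of
# transistors switching each clock. All values are illustrative.

def dynamic_power(alpha, cap, volts, freq):
    return alpha * cap * volts**2 * freq  # arbitrary units

# CPU: doubling transistors barely raises total switching, because
# serial code leaves most of the extra logic idle (alpha drops).
cpu_before = dynamic_power(alpha=0.10, cap=1.0, volts=1.0, freq=4.0)
cpu_after  = dynamic_power(alpha=0.06, cap=2.0, volts=1.0, freq=4.0)

# GPU: graphics keeps the doubled shaders busy, so alpha stays put
# and power doubles unless voltage or frequency come down.
gpu_before = dynamic_power(alpha=0.50, cap=1.0, volts=1.0, freq=1.8)
gpu_after  = dynamic_power(alpha=0.50, cap=2.0, volts=1.0, freq=1.8)
```

Under these assumptions the CPU's power rises only ~20% for 2x the transistors, while the GPU's doubles, which is why a 2x-shader GPU at the same clocks and voltage needs a much better process to stay in the same power budget.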
> What do you mean by this?

An implementation of Ray-tracing in a way that has no, or minimal, effect on overall performance.
> An implementation of Ray-tracing in a way that has no, or minimal, effect on overall performance.

I thought you were talking about image quality, since "lossless compression" means compressing an image with no loss of quality or information.
> I thought you were talking about image quality, since "lossless compression" means compressing an image with no loss of quality or information.

Yeah, that is more accurate and easier to understand. Thank you. I will refer to it as "Full-performance Ray-tracing" or, even better, "No-performance-loss Ray-tracing".
I'd suggest a term like "full performance", to avoid this sort of misunderstanding.
I don't get why people are expecting NVIDIA to throw in their GA100, with 8192 CUDA cores, 1024 Tensor cores and 256 RT cores, as the RTX3080Ti offering. Do people really expect NVIDIA to release a card which is a good 2x or even more the performance of the RTX2080Ti?
> I have not seen any rumors of 100% better performance. The most recent and believable leak just came a couple days ago from Moore's Law Is Dead. He was allegedly told by a test engineer that a 3080Ti, using GA102, not GA100, is at worst about 40% faster than a 2080Ti, usually above 50%, and peaks at about 70% faster.

There have been many articles, based on rumors, stating that the RTX3080Ti would be based on GA100 and not GA102. That was the basis of my argument.
> There have been many articles, based on rumors, stating that the RTX3080Ti would be based on GA100 and not GA102. That was the basis of my argument.

How do you determine a specific percentage of performance improvement based on a chip codename? According to the Moore's Law Is Dead rumor, there is the possibility of a Titan based on GA100, and lord knows what that will cost, but the Ti is based on GA102.
> How do you determine a specific percentage of performance improvement based on a chip codename? According to the Moore's Law Is Dead rumor, there is the possibility of a Titan based on GA100, and lord knows what that will cost, but the Ti is based on GA102.

GA100 is a 100% increase in core count by itself. But let's say the RTX3080Ti gets a cut-down version; then the 10-20% increase in IPC still adds up to roughly a 100% performance increase over the RTX2080Ti. I am not basing this on the Moore's Law Is Dead rumor; I am basing it on this article and the others that came before it.
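The arithmetic behind "cut-down core count plus 10-20% IPC still lands near 2x" checks out, with the caveat that perfect scaling with core count is a generous assumption (real GPUs lose some of it to memory bandwidth and scheduling). The specific factors below are the rumored figures plus one hypothetical salvage fraction:

```python
# Rough perf model: performance ~ cores * per-core throughput ("IPC").
# Assumes perfect scaling with core count, which real GPUs don't achieve.

full_ga100_cores = 2.0   # rumored: 2x the core count of the 2080 Ti's chip
cut_down_factor = 0.85   # hypothetical salvage part keeping 85% of cores
ipc_gain = 1.15          # midpoint of the rumored 10-20% IPC uplift

relative_perf = full_ga100_cores * cut_down_factor * ipc_gain
# ~1.96x a 2080 Ti under these optimistic assumptions
```

So even a noticeably cut-down GA100 would model out near 2x, which is exactly why the GA100-vs-GA102 question decides whether the 100% claim is plausible at all.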
> GA100 is a 100% increase in core count by itself. ... I am basing it on this article and the others that came before it.

I don't know who tweaked the rumor to make it a gaming card, but from the start, when the GA100 specs were leaked, it was obvious this was the successor to the GV100 and was never going to end up in a mainstream gaming card. Everything in the leaked specs said enterprise card. 24-48GB of HBM2e on a gaming card? Really?
> He was allegedly told by a test engineer that a 3080Ti, using GA102, not GA100, is at worst about 40% faster than a 2080Ti, usually above 50%, and peaks at about 70% faster.

That all sounds very plausible to me.