News: GeForce RTX 3080 Max-Q Rumored to Use Full GA104 GPU

This "3080 Max-Q" likely won't perform all that much better than a desktop 3060 (non-Ti). Even hardware-wise it will be more like a massively-underclocked 3070, not even sharing the same chip as the similarly-named desktop part. The desktop 3080's graphics chip is nearly 60% larger than the one in the 3070 or this "Max-Q" card in terms of physical dimensions.

But at least they gave it a model number that can trick people into paying top dollar for laptops containing it; hopefully those buyers won't be too surprised when they get only a little over half the performance of a desktop 3080 at higher resolutions.
 
Jan 14, 2021
As the article says, "Not that FP32 performance is the end-all and be-all metric," we won't know for sure how it performs until it comes out. Point being, the 2080 Ti delivers 14.2 TFLOPS vs the 3070's 20.31 TFLOPS, yet based on UserBenchmark data the 2080 Ti ekes out a 4% real-world performance lead in games over the 3070 and a 14% lead in synthetic benchmarks (not counting ray tracing).
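For anyone curious where those TFLOPS figures come from, here's a quick back-of-the-envelope check in Python, assuming the usual 2 FP32 operations per CUDA core per clock (one FMA) and the Founders Edition reference boost clocks:

```
# Peak FP32 throughput estimate: 2 FLOPs per CUDA core per clock (one FMA).
# Core counts and boost clocks are the Founders Edition reference specs.
def fp32_tflops(cuda_cores, boost_mhz):
    return 2 * cuda_cores * boost_mhz * 1e6 / 1e12

print(fp32_tflops(4352, 1635))  # RTX 2080 Ti -> ~14.2 TFLOPS
print(fp32_tflops(5888, 1725))  # RTX 3070    -> ~20.3 TFLOPS
```

These are theoretical peaks, of course; as the benchmark comparison above shows, they don't map directly onto real-world gaming performance across architectures.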

A brief take on ray tracing, and why I made that distinction above: I believe that for at least the next year it will not be as big a factor in gaming as this next generation wants it to be, because it simply isn't mainstream yet. As of late December 2020, only 26 games utilize ray tracing, and some of the games that do implement it don't actually benefit graphically because of their art style (World of Warcraft, for example), yet they still take the performance hit, which renders ray tracing in those titles unbeneficial. Standard 2K-4K gaming still seems to be the key focus, so that is what I base my personal performance standards on until that changes, and I think others should too. To reinforce the point, keep in mind that the entire GTX 1000 series and its AMD counterpart were advertised around the future gaming gimmick that was VR, and the consoles soon followed suit, not unlike what we are seeing now with RT. Look where VR is today. ¯\_(ツ)_/¯

So I do not believe we can conclude from the given hardware specs alone whether the RTX 3080 Max-Q will perform around a desktop 3070, worse, or better. We know the RTX 2080 Max-Q was on par with the desktop RTX 2060; if they make no improvements to that formula, the RTX 3080 Max-Q would be on par with the RTX 3060 Ti they have out, which is almost on par with a desktop RTX 2080, if anything a smidge better. The best currently is the mobile RTX 2080S, which is on par with the desktop RTX 2070S, and the RTX 2070S is roughly around the performance of the RTX 2080, only a smidge lower. It would be awfully foolish of Nvidia to release a new flagship card that is only 3% better for a much larger price. While there will be ignorant purchasers, I don't see them making up the numbers needed for Nvidia to avoid taking a financial hit by doing so.

It would be in their best interest for the RTX 3080 Max-Q to land at least close to the performance of the RTX 3070; otherwise they will be in trouble.
 
As the article says, "Not that FP32 performance is the end-all and be-all metric," we won't know for sure how it performs until it comes out. Point being, the 2080 Ti delivers 14.2 TFLOPS vs the 3070's 20.31 TFLOPS, yet based on UserBenchmark data the 2080 Ti ekes out a 4% real-world performance lead in games over the 3070 and a 14% lead in synthetic benchmarks (not counting ray tracing).
While I would generally agree, that's mostly just the case across different generations of architectures, where the amount of potential compute performance can mean wildly different things for actual gaming performance. Within a generation of hardware, though, the TFLOPS should usually be a lot more meaningful, especially since these are using the same chips as certain desktop parts, just at significantly lower clocks. If that "up to 15.3 TFLOPS" number pans out, that would make it slower than a "16+ TFLOPS" 3060 Ti, and due to the limitations of laptop cooling, many notebooks utilizing it might not even manage that.
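As a rough sketch of where those numbers sit: the implied boost clock below is simply derived from the rumored 15.3 TFLOPS figure on a full GA104's 6144 CUDA cores, and the 3060 Ti numbers are its desktop reference specs.

```
# Implied boost clock if "up to 15.3 TFLOPS" on a full GA104 (6144 CUDA cores,
# 2 FP32 ops per core per clock) turns out to be accurate.
implied_boost_mhz = 15.3e12 / (2 * 6144) / 1e6
print(round(implied_boost_mhz))        # -> ~1245 MHz

# Desktop RTX 3060 Ti for comparison: 4864 cores at a 1665 MHz reference boost.
print(2 * 4864 * 1665e6 / 1e12)        # -> ~16.2 TFLOPS
```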

And that kind of performance makes sense. Based on the specs rumored here, the "3080 Max-Q" would be utilizing the same graphics chip and memory configuration as a 3070, but with a little over 4% more cores enabled. However, its nearly 30% lower boost clocks (and 50% lower base clocks), combined with almost 15% lower memory clocks, mean it won't be coming close to a 3070. At best, it will probably be close to 25% slower than a 3070 in games limited by graphics performance, and slightly slower than a 3060 Ti, assuming it can even reliably maintain those boost clocks for extended periods.
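For what it's worth, here's roughly how those percentages fall out of the rumored numbers. The ~1245 MHz boost is the value implied by the 15.3 TFLOPS figure above, and the 12 Gbps memory speed is my own assumption for what would produce an "almost 15%" deficit against the desktop 3070's 14 Gbps:

```
# Rough deltas between the rumored full-GA104 "3080 Max-Q" and the desktop RTX 3070.
# Assumptions: ~1245 MHz boost (implied by the 15.3 TFLOPS rumor) and 12 Gbps memory
# (a guess for what yields an "almost 15%" memory-speed deficit vs. 14 Gbps).
def pct_change(new, old):
    return (new - old) / old * 100

print(pct_change(6144, 5888))   # CUDA cores:   ~ +4.3%
print(pct_change(1245, 1725))   # boost clock:  ~ -27.8%
print(pct_change(12, 14))       # memory speed: ~ -14.3%
```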