juanrga :
blackkstar :
Throwing away features to reduce power consumption does not make something more efficient. It just makes it use less power. If you want an absurd example, consider a rock. As a GPU, it's missing a lot of features that GCN and Maxwell have. But it uses a lot less power.
But my point, as ridiculous as a rock GPU is, is that you can't just remove things and then declare you've improved efficiency. If Maxwell were a real winner, it would be a GPGPU beast as well as a gaming beast. Instead, Nvidia opted to remove features to increase performance as well as reduce the power used. And you can see they are playing some sort of odd game. Power consumption for gaming is fantastic, but when you switch to GPGPU, it is awful compared to what the card uses when gaming.
That's not increasing efficiency, because efficiency implies using fewer resources to complete the same tasks. It is blatantly removing things to decrease the resources used.
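A quick sketch of that distinction, with entirely made-up numbers: efficiency is work per watt, so a card that draws less power while doing proportionally less work hasn't become more efficient.

```python
# Toy numbers (hypothetical, for illustration only): efficiency means
# work delivered per watt, not simply watts drawn.
def efficiency(gflops, watts):
    """Performance per watt: GFLOPS delivered per watt consumed."""
    return gflops / watts

# Card A: full feature set, higher power draw
card_a = efficiency(gflops=4000, watts=250)   # 16 GFLOPS/W

# Card B: draws 100 W less, but also does far less work
card_b = efficiency(gflops=1500, watts=150)   # 10 GFLOPS/W

# B uses less power, yet is *less* efficient at the task
assert card_b < card_a
```

Cutting capability and cutting power together only counts as an efficiency gain if the work-per-watt ratio actually goes up.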
Do you know anything about Maxwell?
Nvidia has increased efficiency by adding stuff. They added more control logic to improve power management; they added more FP32 CUDA cores, special function units, and load/store units; they added more L2 cache, from 256KB to 2MB... e.g., more cache improves performance while also reducing power consumption by cutting down accesses to GDDR5.
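To see why a bigger L2 saves power, here's a toy energy model with hypothetical per-access costs (the real figures depend on process and memory type, but off-chip GDDR5 accesses cost far more energy than on-chip cache hits):

```python
# Toy model with made-up energy costs: an off-chip GDDR5 access
# is far more expensive than an on-chip L2 cache hit.
E_L2_PJ = 50       # hypothetical picojoules per L2 hit
E_DRAM_PJ = 2000   # hypothetical picojoules per GDDR5 access

def energy_pj(accesses, hit_rate):
    """Total memory energy for a stream of accesses at a given L2 hit rate."""
    hits = accesses * hit_rate
    misses = accesses - hits
    return hits * E_L2_PJ + misses * E_DRAM_PJ

small_cache = energy_pj(1_000_000, hit_rate=0.50)  # small L2, many misses
big_cache = energy_pj(1_000_000, hit_rate=0.90)    # bigger L2, fewer misses

# Raising the hit rate from 50% to 90% slashes memory energy
assert big_cache < small_cache
```

Same work done, less energy spent: that's an efficiency gain from adding hardware, not removing it.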
Moreover, the graphics cards have received new rendering features to support DX12 and OpenGL 4.5. The new architecture also delivers real-time global illumination through one-bounce path tracing.
On the GPGPU side, everyone knows that Nvidia owns the market with 80--90% of the share. Nvidia has the fastest GPGPU cards (I provided the link to Nvidia holding the benchmark record), and its efficient design has been chosen to power future supercomputers that will be 5--10x faster than Titan (jdwii gave the link).
I'm more referring to the fact that if you want to do something like render with CUDA in Blender, you're still better off buying a GTX 580 than a GTX 970. Yes, they added some transistors, but they're clearly cutting corners elsewhere to focus on the specific tasks they want these GPUs to shine in.
And how do you explain the fact that sites that posted Maxwell GPGPU numbers suddenly decided to make large changes to how they measure power consumption right when Maxwell released, or that they've even gone as far as removing graphs and information from reviews?
No one brings this up enough, but review sites are at Nvidia, AMD, and Intel's mercy. If they represent a product in a way one of those companies doesn't like, the review site stops getting its free $1000 graphics cards. No one ever factors this into their reviews, and it's a large part of why Nvidia gets much better press than AMD.
Consider what happened when TweakTown decided not to obey Nvidia's every command. They lost their free review samples and they threw a giant fit over it.
AMD's position in all of this is that they don't have the products in the highest demand, so review sites don't care about the free AMD swag; AMD will send it to them anyway, since they're the underdog. And Intel and Nvidia know people will buy their stuff, even when it's inferior, because of brand recognition.
I'm quite sure you'll show up with single precision benchmarks to try to refute me, but Anandtech has the GTX 980 losing to the GTX 780 Ti, 290X, 7970, 290, 780, and GTX 580 (by large margins) in F@H double precision. That is the sort of thing I'm talking about: Nvidia gutting GPGPU to improve efficiency.
And yes, I know they're gimping DP on the desktop parts. But enabling it is going to increase power consumption, which shoots your whole efficiency argument out the window. In fact, we haven't even seen any evidence that opening up the DP parts of Maxwell on professional graphics cards will still maintain Maxwell's efficiency.
And I think it's rather foolish to extrapolate that a part so heavily cut down in DP will maintain the same efficiency once you enable full DP throughput. Because that's basically what you're doing when you jump from GTX 980 efficiency benchmarks to stating that Maxwell will be great in the professional and HPC markets, where DP is used and not artificially gimped.
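Some back-of-envelope math on why the older card wins in DP, using the commonly cited peak FP32 figures and GeForce FP64:FP32 ratios (approximate round numbers, for illustration):

```python
# Rough DP throughput from peak FP32 and the GeForce FP64:FP32 ratio.
# FP32 figures rounded; ratios are the commonly cited GeForce caps.
def fp64_gflops(fp32_gflops, dp_ratio):
    return fp32_gflops * dp_ratio

gtx_980 = fp64_gflops(4600, 1 / 32)  # GM204, 1/32 rate: ~144 GFLOPS FP64
gtx_580 = fp64_gflops(1580, 1 / 8)   # GF110, GeForce-capped 1/8: ~198 GFLOPS FP64

# The card from three generations earlier comes out ahead in raw DP throughput
assert gtx_580 > gtx_980
```

With a 1/32 rate, no amount of FP32 muscle saves the GTX 980 in a DP workload, which is exactly what the F@H numbers show.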
I also think that if AMD had a part that did so horribly relative to previous AMD parts in a benchmark like F@H double precision, AMD would be on a spitroast in the forums. But it will get written off because it's Nvidia, and apparently there's nothing wrong with Nvidia releasing a new part that the one from three generations ago beats by almost 50%. Praise Nvidia for increasing efficiency, actually! They're miracle workers!