blackkstar :
Juan, mate, I think you are having difficulty comprehending the fact that scaling up performance in workloads increases transistor counts and decreases efficiency. You see a simple ARM core with a horrible FPU being far more efficient than x86, so you just assume that ARM will eventually catch up because of how efficient it is at its current level of performance.
http://www.bitsandchips.it/gaming/9-hardware/5214-roundup-arm-board-odroid-u3-marsboard-a20-e-rk3066?start=6
You can clearly see the FPU is missing a lot of functionality and performance. Adding that performance and functionality will increase transistor count and decrease efficiency. You cannot have both.
No. I am claiming something completely different. Read my last summary post on ARM to get the point, because you still haven't gotten it.
blackkstar :
You also seem to assume that GCN and Maxwell have the same design goals. They don't. Maxwell is clearly missing parts of the architecture needed for certain tasks that GCN handles.
http://www.anandtech.com/show/8568/the-geforce-gtx-970-review-feat-evga/14
Direct your attention to the double-precision F@H and video rendering benchmarks. Maxwell simply doesn't have the transistors there to do those tasks properly. And that's why it's "more efficient": when GCN is doing work that doesn't need those parts of the GPU, they still receive power and reduce efficiency, because AFAIK they don't have power gating to prevent that from happening.
I am not assuming that. It seems you missed my point that Nvidia optimized the GPU for graphics, whereas AMD is pushing a nonsensical compute approach that hurts its cards at graphics tasks.
blackkstar :
AMD is aiming for a GPGPU beast that can handle far more use cases with GPGPU.
AMD is developing the compute side of GCN for APUs and then reusing the same graphics architecture in dGPUs to reduce costs. AMD is not developing any "GPGPU beast"; in fact, AMD's chances of conquering the GPGPU market are zero.
blackkstar :
That's why Nvidia fans and Nvidia-friendly tech media like to push efficiency so hard. GCN will never be as efficient as Maxwell unless AMD does something magical. But efficiency is a straw man: no sane person is buying these cards to save money on their electricity bill.
And again you conflate efficiency with power consumption, and fail to understand why Nvidia, IBM, Intel, and everyone else treat efficiency as first priority these days. Sorry, I am not going to explain the same thing once more: ten times is enough!
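The distinction being argued here can be put in one line: efficiency is performance per watt, not power draw, so a chip can consume more power and still be the more efficient design. A minimal sketch, with made-up numbers purely for illustration:

```python
# Efficiency (performance per watt) is not the same thing as power draw.
# All figures below are hypothetical, chosen only to illustrate the point.

def efficiency(performance, watts):
    """Performance per watt: higher means more work done per joule."""
    return performance / watts

# Card B draws 100 W more than card A, yet it is the more efficient chip.
card_a = efficiency(performance=100.0, watts=150.0)  # ~0.67 perf/W
card_b = efficiency(performance=200.0, watts=250.0)  # 0.80 perf/W

print(card_b > card_a)  # higher consumption, higher efficiency
```

This is why "card X uses more power" and "card X is less efficient" are separate claims: the first compares watts, the second compares work per watt.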