jimmysmitty
Champion
-Fran- :
Judging GPU archs is like judging Ford for making Mustangs with V8s since the 60s and saying they're lazy. If a GPU arch can keep going strong with a few tweaks gen over gen, then so be it. AMD sticking with GCN for so long has been the result of stupid (in my own view, of course) decisions about how they had the company organized, using the GPU department as "glue" for all the other tech they were producing, with GCN being the jack of all trades behind APUs, GPUs and other stuff.
Now, nVidia has been using the same idea for arranging its SMXes since Fermi, with a few tweaks here and there, but keeping the same arrangement in the GPU arch. Or at least, that is my impression. One can judge what constitutes a "big" change on his/her own terms. In any case, I think my point still stands: GCN is still around because there is nothing at the moment that would make sense to replace it with. AMD is now in a good position to tweak it for GPUs and GPUs alone.
Cheers!
There is a massive difference between a Windsor V8 from the 60s and the Coyote V8 of today. Sure, it is still a V8, but there is so much tech behind the new one that it easily drives circles around the old one in terms of raw power, and it is way more efficient.
That said, I have nothing against AMD, but GCN has hit its limit on 28nm and is no longer efficient. While NVidia is using a uArch similar to Fermi, Maxwell V2 is much more efficient than even first-gen Maxwell. It too has hit its limits on 28nm, though.