This is why we want higher-end chips from AMD to give Intel more competition.
Companies lie regardless of competition. In any case, have you been living under a rock (pun intended) these last months? AMD has already announced its plans for the higher end: K12 and its companion x86 sister core.
I was hoping for an FX-4xxx-branded, iGPU-less Kaveri for socket FM2+. Oh well, there's always next time.
The fact that there is none on the market proves that iGPU yields are very, very good, which is no surprise: if there were yield problems, they would be detected first in the CPU. :-D
Regardless of the implementation, the specification should have been stabilized LONG ago; otherwise it would be impossible to develop for. Specification != implementation.
It is not the implementation that is in beta. It is the API itself, which is still being developed in collaboration with game developers.
PhysX IS open. Proprietary to NVIDIA, but open.
Please check the next link:
Proprietary vs. Open Standards
Except that if Intel adopted Mantle, ~82% of the GPU market would offer it, and NVIDIA would be forced to follow or fight uphill against it forever.
The problem is that this is a big IF.
I think there will be an 8-thread Steamroller Kaveri soon.
A 22nm process could be great, but I think AMD is working on something a little better. I speculate a 16nm CPU with 16nm GPUs.
There is no 8-thread Steamroller Kaveri on the menu. The IBM 22nm SOI process couldn't provide the characteristics needed for GCN. Moreover, people here are assuming, as if it were proven, that IBM will license its process to anyone including AMD, when it is more likely that GF will use IBM technology only for IBM products.
In any case, the route that AMD will follow is rather clear: 28nm bulk planar --> 20nm bulk planar --> 14nm bulk FinFET, barring unexpected problems/delays, of course.
That means they are not saying that iGPUs will be better than dGPUs in gaming.
In accelerating tasks using GPUs, iGPUs can easily beat dGPUs (for example, QuickSync vs. AMD/NVIDIA HW acceleration for video conversion) because the interconnect is holding dGPUs back here, not the GPUs themselves. You can try an experiment at home: clock whichever CPU you have to its minimum speed, turn off all cores except one, then run a video conversion task using HW acceleration (you can use atixcoder for AMD dGPUs) and see what percentage of the dGPU is utilized and whether it even runs at full speed. That way you will find out how much of the dGPU is actually utilized, which will show you that it's the interconnect holding it back.
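The experiment above boils down to logging dGPU utilization while the CPU is throttled and then averaging the samples. A minimal sketch, assuming an NVIDIA card and `nvidia-smi` on Linux (AMD users would substitute a tool like `radeontop`; the `util.log` filename is just an example name for this sketch):

```shell
# While the video conversion runs, sample GPU utilization once per second
# into a log file (Ctrl+C to stop):
#   nvidia-smi --query-gpu=utilization.gpu --format=csv,noheader,nounits -l 1 > util.log
#
# Then average the samples; a dGPU starved by the CPU/interconnect will
# sit well below 100%:
awk '{ sum += $1; n++ } END { if (n) printf "avg GPU utilization: %.1f%%\n", sum/n }' util.log
```

If the average stays low even though the conversion is "GPU-accelerated", the dGPU is waiting on data rather than compute-bound, which is the poster's point.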
But gaming is different: PCIe is not the bottleneck there; the GPU is. Run any game at the highest settings and the dGPU reaches 100% utilization at full speed, so the GPU is the bottleneck.
So treat these statements keeping in mind that gaming and general-purpose compute are different things; the rest you can work out yourself.
I always said, and will say it again, that iGPUs can't beat dGPUs in gaming.
They are saying that the Torrenza initiative has been killed. Some time ago someone here claimed that it was alive...
In any case, regarding gaming, I already explained before how gaming is evolving to blur the distinction between compute GPUs and rendering GPUs, how killing GPGPU would make gaming GPUs economically inviable, and how nonlinear silicon scaling will affect future gaming dGPUs...
Feel free to ignore the arguments once again.