truegenius :
afaik, cpu and gpu integration on the same die is meant to minimize cpu-gpu communication latency and increase bandwidth, thus removing the communication bottleneck
which means more performance in gpu assisted tasks
thus processing tasks which need tons of data exchange between cpu and gpu will get a huge benefit from this tech
for example hardware video conversion
but
gaming does not require such communication speeds (only productivity work needs higher cpu-gpu communication). gaming needs higher gpu-vram speed, which is not possible using system ram. also not to forget the power requirements of gpus
this is why we say that pcie 2.0 x16 is enough for gaming, which means cpu-gpu communication is not a bottleneck in gaming
thus, by this i mean to say that imo we won't see the death of dedicated gaming GPUs as long as gaming is alive, but workstation cards may suffer (not sure about higher-end workstation cards) and we may see cpus/apus designed for workstations (if we can get enough ram bandwidth)
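a quick back-of-envelope on that pcie 2.0 x16 point (a sketch in python; the per-lane rates and encoding overheads come from the pcie specs, not a benchmark):

```python
# Usable PCIe bandwidth per direction, accounting for line encoding.
# PCIe 2.0 uses 8b/10b encoding; PCIe 3.0 uses 128b/130b.

def pcie_bandwidth_gb_s(gt_per_s, encoded_bits, payload_bits, lanes):
    """Usable bandwidth per direction in GB/s for a PCIe link."""
    bytes_per_s = gt_per_s * 1e9 * (payload_bits / encoded_bits) / 8
    return bytes_per_s * lanes / 1e9

gen2_x16 = pcie_bandwidth_gb_s(5.0, 10, 8, 16)     # 5 GT/s/lane, 8b/10b
gen3_x16 = pcie_bandwidth_gb_s(8.0, 130, 128, 16)  # 8 GT/s/lane, 128b/130b

print(f"PCIe 2.0 x16: {gen2_x16:.1f} GB/s")   # ~8 GB/s
print(f"PCIe 3.0 x16: {gen3_x16:.2f} GB/s")   # ~15.75 GB/s

# Compare with the ~288 GB/s GDDR5 bandwidth of a Titan-class card:
# the GPU<->VRAM link is an order of magnitude wider than the CPU<->GPU link.
```

so the cpu-gpu link is roughly 8 GB/s on pcie 2.0 x16, while the gpu-vram link on a high-end card is hundreds of GB/s, which is why the pcie link isn't the gaming bottleneck in most titles.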
1) The "about 10x faster" claim only considered CPU-->GPU communication, and thus applies to gaming. For compute one has to also consider GPU-->CPU communication and other overheads, increasing the performance gap.
2) Just because ancient programs used less than 640 KB doesn't mean that 640 KB is enough for everybody. Current games are developed with current hardware limits in mind. Future games don't need to be limited by the constraints of today's hardware.
3) The slow PCIe 3 is already bottlenecking current games. Developers alleviate this bottleneck with tricks such as low-resolution textures, texture quilting (building large textures by repeating small textures), texture compression... E.g. Unreal Engine uses textures compressed at a 1:4 or 1:8 ratio (depending on the DXTn format used) before sending them over the PCIe bus. The compressed textures are then decompressed by the GPU.
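The 1:4 / 1:8 ratios above fall straight out of the DXTn block sizes: DXT formats compress fixed 4x4 texel blocks, with DXT1 storing a block in 8 bytes and DXT3/DXT5 in 16 bytes. A sketch of the arithmetic (texture size chosen only for illustration):

```python
# DXTn compression ratio vs. uncompressed 32-bit RGBA (4 bytes/texel).
# A 4x4 block is 64 bytes raw; DXT1 packs it into 8 bytes, DXT5 into 16.

def dxt_ratio(bytes_per_block, bytes_per_texel=4):
    """Compression ratio of a DXT format vs. raw RGBA."""
    uncompressed = 4 * 4 * bytes_per_texel  # 64 bytes per 4x4 block
    return uncompressed / bytes_per_block

print(f"DXT1: 1:{dxt_ratio(8):.0f}")   # 1:8
print(f"DXT5: 1:{dxt_ratio(16):.0f}")  # 1:4

# Example: a 2048x2048 RGBA texture, raw vs. DXT1, in MB.
raw_mb  = 2048 * 2048 * 4 / 2**20   # 16 MB raw
dxt1_mb = raw_mb / dxt_ratio(8)     # 2 MB compressed
print(f"{raw_mb:.0f} MB raw -> {dxt1_mb:.0f} MB DXT1")
```

That 8x reduction is data that never has to cross the PCIe bus uncompressed, which is exactly the bottleneck being worked around.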
4) Another important point is that gaming GPUs are not designed in a vacuum, but use the same basic architecture as compute GPUs. R&D costs are then distributed across all the cards that use the architecture. If you develop an architecture for use exclusively on a gaming card, then all the costs have to be transferred to those cards, which implies ultra-expensive cards that no gamer would purchase.
This is essentially the same reason why AMD has not released a Steamroller FX CPU for gamers. Previous FX CPUs shared their R&D cost with the Opterons used in servers and HPC. But Steamroller is not competitive enough for server/HPC CPUs; thus, once the Steamroller architecture was dropped from Opteron CPUs, it became evident that AMD couldn't release an FX version for a mere question of costs.
I recall explaining this last year, when I claimed that no FX Steamroller was coming to the desktop. Several posters ignored my point and said "wait for the roadmap". The 2014/2015 desktop roadmap proved me right. I did the math. AMD also did.
5) Just because current PCs use slow DDR3 as "system ram" doesn't imply that "system ram" has to be slower than "vram". The PS4 uses GDDR5 as "system ram". Next year Intel releases a 'CPU' with 8--16 GB of MCDRAM and a sustained bandwidth of 500 GB/s. The Nvidia Titan's VRAM peaks at 288 GB/s.
The Nvidia research team that made the above claim is designing an ultra-high-performance APU with "system ram" bandwidth of 1.6 TB/s, i.e. more than 5x the bandwidth of the GDDR5 on the Nvidia Titan designed by the same team.
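Putting the bandwidth figures from this point side by side (numbers as quoted above; peak rates, and the DDR3 line is my own illustrative figure for a typical dual-channel DDR3-1600 desktop, not from the post):

```python
# "System ram" vs. "vram" bandwidths mentioned in the discussion, in GB/s.
bandwidth_gb_s = {
    "DDR3-1600, dual channel":  25.6,    # 2 x 12.8 GB/s (typical desktop)
    "Nvidia Titan GDDR5":       288.0,   # peak, as quoted
    "Intel MCDRAM (announced)": 500.0,   # sustained, as quoted
    "Nvidia research APU":      1600.0,  # 1.6 TB/s, as quoted
}

for name, bw in bandwidth_gb_s.items():
    print(f"{name:26s} {bw:7.1f} GB/s")

# The research APU's memory is indeed >5x the Titan's GDDR5:
ratio = bandwidth_gb_s["Nvidia research APU"] / bandwidth_gb_s["Nvidia Titan GDDR5"]
print(f"APU vs Titan VRAM: {ratio:.1f}x")  # ~5.6x
```

The point stands on the arithmetic: 1.6 TB/s over 288 GB/s is about 5.6x, so "system ram" in that design is faster than today's high-end "vram".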
6) Another point is that games are evolving towards GPU computing. Ancient games used the CPU for everything and the GPU only for basic display, but GPUs have since evolved towards offloading 3D graphics computation from the CPU.
The next step is offloading physics and AI computations to the GPU as well. In fact, the APU used in the PS4 has been specifically designed to compute physics and AI on the GPU. Watch Sony's talks. Existing games and demos are already computing physics on the PS4 GPU.
==============================================
Once again:
The past is CPU + dGPU
The future is high-performance APU
The transition for AMD is clearly APU + dGPU
Why do you believe that AMD is enabling APU-dGPU Crossfire? Why do you believe that MANTLE has asymmetric multi-GPU support? Why do you believe that AMD is giving talks about using the dGPU for rendering while offloading post-processing to the APU?
http://gearnuke.com/in-depth-look-at-amd-mantle/
jdwii :
Yeah that makes way more sense. I almost find it impossible to get more power from 1 die vs 2. Seems a little weird to state such a thing
Except that the math says otherwise. What happened is that the Nvidia research team did the math before making the claim. I also did.