PaulAlcorn :
the UHD Graphics architecture is inherently scalable by design, so many opine this will just be a scaled-up version of its forthcoming Gen 11 (.5?) graphics engine. Intel has a strong background in encode/decode engines, fixed-function hardware, and the like. Strapping an HBM2 or GDDR6 interface onto its existing uArch and widening the memory bus could make its existing design pretty powerful.
I've been down this road, conceptually, and have some serious misgivings about this approach. While it would've been a good route to take for something like Xeon Phi, I think its resources are partitioned poorly and its SIMD pipe is too narrow to scale up to larger graphics and dense compute workloads.
Taking the last point first: the narrow (128-bit) SIMD engines mean more work for the instruction decoders per FLOP, which hurts energy efficiency (and energy efficiency ultimately limits GPU performance). On the other hand, narrow SIMD is great for the sort of mostly-serial or branch-heavy code that typically runs poorly on GPUs, which is why I said it would've suited Xeon Phi - which is sort of like GPU-scale compute for CPU-oriented tasks.
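To put rough numbers on the decode overhead, here's a back-of-envelope sketch in plain C - the 1 TFLOP/s target and the 512-bit comparison point are illustrative assumptions, not any particular product's specs:

    #include <stdio.h>

    int main(void) {
        /* Back-of-envelope: instructions decoded per second to sustain
           a given fp32 FMA rate, as a function of SIMD width.
           The 1 TFLOP/s target is an illustrative assumption. */
        const double target_flops = 1.0e12;   /* 1 TFLOP/s */
        const double flops_per_fma = 2.0;     /* one FMA = multiply + add */

        int widths[] = { 128, 512 };          /* SIMD width in bits */
        for (int i = 0; i < 2; i++) {
            int lanes = widths[i] / 32;       /* fp32 lanes per instruction */
            double decode_rate = target_flops / (lanes * flops_per_fma);
            printf("%3d-bit SIMD: %6.2f Ginstr/s of decode\n",
                   widths[i], decode_rate / 1e9);
        }
        /* 128-bit: 125.00 vs. 512-bit: 31.25 Ginstr/s - the narrow pipe
           decodes 4x the instructions for the same FLOPs, and decode
           energy is paid per instruction, not per lane. */
        return 0;
    }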
The next point is the degree of SMT, which is too low. It makes the GPU highly dependent on good thread partitioning to avoid stalling the EUs, since you probably need 2-3 threads running at all times to keep a given EU busy (and here, I'm using the CPU definition of a thread). As in AMD's GCN, threads are all allocated to specific EUs, although GCN supports almost twice as many threads per EU-equivalent.
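Here's the sort of napkin math I mean, using Little's law - the latency and work-per-thread figures are assumptions for illustration, not measured Gen9 numbers:

    #include <stdio.h>
    #include <math.h>

    int main(void) {
        /* Little's law: to keep an issue port busy, you need enough
           in-flight threads that one is always ready to issue.
           Both figures below are illustrative assumptions. */
        double stall_latency   = 300.0;  /* cycles waiting on memory */
        double work_per_thread = 150.0;  /* cycles of issue between stalls */

        double round_trip = work_per_thread + stall_latency;
        double needed = ceil(round_trip / work_per_thread);
        printf("Resident threads needed per EU: %.0f\n", needed);
        /* Prints 3. With threads pinned to EUs and only a handful of
           hardware thread slots per EU, a badly partitioned kernel
           leaves EUs idle; GCN's deeper thread pool is more forgiving. */
        return 0;
    }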
Finally, they really need to revise their memory architecture. HD Graphics' GTI bus is a significant bottleneck to scaling much beyond their desktop chips. I think even their Iris Pro parts suffer as a result (the 50 GB/sec to/from eDRAM barely edges out the off-chip bandwidth of Nvidia's entry-level GT 1030). I'm pretty sure one of the secrets to Nvidia's efficiency is partitioning the work and data across a number of narrow memory controllers/banks, which also explains how they reach odd memory sizes like 5 GB and 11 GB.
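To illustrate the partitioning point, a quick sketch with GTX 1080 Ti-style numbers (12 on-die 32-bit controllers with one fused off, 11 Gbps GDDR5X - as I understand that part, but treat the specifics as assumptions):

    #include <stdio.h>

    int main(void) {
        /* Each controller is an independent 32-bit GDDR channel backed
           by its own 1 GB of memory; fuse one off and bus width,
           bandwidth, and capacity all shrink together - hence odd
           sizes like 5 GB and 11 GB. */
        int controllers = 11;          /* e.g., 12 on the die, 1 disabled */
        int bits_per_controller = 32;
        double gbits_per_pin = 11.0;   /* GDDR5X data rate, assumed */
        int gb_per_controller = 1;

        int bus_width = controllers * bits_per_controller;
        double bandwidth_gbs = bus_width / 8.0 * gbits_per_pin;
        int capacity_gb = controllers * gb_per_controller;

        printf("%d controllers -> %d-bit bus, %.0f GB/s, %d GB\n",
               controllers, bus_width, bandwidth_gbs, capacity_gb);
        /* Prints: 11 controllers -> 352-bit bus, 484 GB/s, 11 GB */
        return 0;
    }

A monolithic bus like GTI can't be trimmed or widened in small steps like that, which is part of why it doesn't scale gracefully.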
My money is on either a clean-sheet redesign or a complete, top-to-bottom overhaul that barely resembles the current HD Graphics architecture. It's worth considering that HD Graphics hasn't fundamentally changed in something like 15 years; in fact, the Linux driver for it is still called i915 (after the 2004-era chipset-integrated GMA graphics).
Anyone interested in learning more should give this a look:
https://software.intel.com/sites/default/files/managed/c5/9a/The-Compute-Architecture-of-Intel-Processor-Graphics-Gen9-v1d0.pdf