AMD To Develop Semi-Custom Graphics Chip For Intel



No, it doesn't make sense.

The reason top-end dies have the graphics is that it's the die they derive everything else from. They design the biggest die, which right now happens to be 6 cores with GT2 graphics, and the 4- and 2-core dies are cut-down versions of the 6-core one.

What you propose would run counter to that, because every chip would then ship with no graphics on the CPU die. On the low end that means higher cost, since you'd have to pair a 2-core die with a separate GPU die.

Not to mention EMIB isn't the cheapest solution. It's the cheaper solution for high-bandwidth requirements, but low-end iGPUs don't need high bandwidth, so they can get away with regular MCM packaging. But then it's even cheaper to do a monolithic die. So why bother at all?

People also heavily overestimate the cost impact of graphics on the CPU die. It might as well be free, because the people who WANT/NEED the graphics pay for the R&D. The rest, who don't, hardly matter, since they're probably 1% of the 1%. Think about it this way: if Intel didn't have an iGPU on their chips, every single sale that needs an iGPU would go to competitors. The value that iGPUs provide for Intel is big.
 

No, Intel doesn't use a single die for all mainstream SKUs. In the past, they had separate quad-core and dual-core dies. Coffee Lake still has two different dies for mainstream CPUs: one hex-core die and one quad-core die.

When talking about the cost of on-die graphics, I'm not talking about the cost of R&D; I'm talking about the cost of the silicon area. If you increase die size by 50%, you're now getting 33% fewer dies per wafer. Larger dies also increase the chance a die will have a defect. So the cost per (working) chip goes up.
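
To make that concrete, here's a rough back-of-envelope sketch using the standard dies-per-wafer approximation and a simple Poisson defect-yield model. The wafer size, die areas, and defect density below are assumed numbers for illustration, not Intel's actual figures.

```cpp
// Rough sketch of how die area drives cost: standard dies-per-wafer
// approximation plus a simple Poisson defect-yield model. Wafer size,
// die areas, and defect density are assumed numbers for illustration.
#include <cmath>
#include <cstdio>

const double kPi = 3.14159265358979;

// Classic approximation: gross dies from wafer area, minus edge losses.
double dies_per_wafer(double wafer_diam_mm, double die_area_mm2) {
    return kPi * wafer_diam_mm * wafer_diam_mm / (4.0 * die_area_mm2)
         - kPi * wafer_diam_mm / std::sqrt(2.0 * die_area_mm2);
}

// Probability that a die has zero defects under a Poisson model.
double poisson_yield(double die_area_mm2, double defects_per_mm2) {
    return std::exp(-die_area_mm2 * defects_per_mm2);
}

int main() {
    const double wafer_mm = 300.0;   // 300 mm wafer
    const double d0       = 0.001;   // assumed defect density per mm^2
    const double cpu_only = 100.0;   // assumed CPU-only die area, mm^2
    const double cpu_igpu = 150.0;   // +50% area for the iGPU

    for (double area : {cpu_only, cpu_igpu}) {
        double gross = dies_per_wafer(wafer_mm, area);
        double good  = gross * poisson_yield(area, d0);
        std::printf("%.0f mm^2 die: %4.0f gross, %4.0f good dies per wafer\n",
                    area, gross, good);
    }
    return 0;
}
```

Under those assumptions, the 50%-larger die ends up with roughly 35-40% fewer good dies per wafer, since the yield hit compounds with the straight area hit.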

But sure, maybe the cost of fabbing an extra die (just a GPU), plus whatever costs are associated with EMIB/MCM, far outweighs whatever savings you'd get on die space.
 
It was a good thought anyway. That's why we love to have these discussions: everyone comes at it from a different perspective and often brings up things I never considered.

In this case, though, removing the IGP and making it separate just raises the other issue of the cost to make individual GPUs. No clue what that is, but it's more than likely more than it costs to include the graphics on the CPU die itself. Just my opinion anyway 😀

I do wonder, though, what it would cost to just buy low-end AMD/Nvidia GPUs to include, versus the cost of developing and including the IGP on the CPU. Kind of like the old days of nForce chipsets and such. If skipping the R&D budget and silicon cost and going with discrete GPUs all around would save money, then I could see an argument for doing it.

But considering AMD is sticking with APUs, I'm guessing it wouldn't save much, if anything.
 

It's funny that you mention this, because they actually have CPUs with no on-die graphics.

Imagine if they took their i7-7800X, which lists for $389 - well within the range of a laptop CPU - and used that as the CPU die. Obviously, they'd have to do some serious power/thermal-based clock regulation to fit it into the envelope of a laptop CPU, but I think it's possible. It's not an obvious win, but it would provide one explanation for why they'd suddenly need an off-die GPU. Especially if they enabled 8 cores, not just 6.
 

Intel CPUs execute vector instructions via specialized engines inside each x86 core. They don't dispatch them to the iGPU, even though the iGPU has greater net throughput.
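
For example, here's a minimal sketch of a vector add using AVX intrinsics; it executes entirely on each core's own SIMD units, with the iGPU never involved (assumes an AVX-capable CPU and a length that's a multiple of 8).

```cpp
// Minimal sketch of CPU-side SIMD: the math runs on each core's own
// vector execution units (here via AVX intrinsics), not on the iGPU.
// Assumes an AVX-capable x86 CPU and that n is a multiple of 8.
#include <immintrin.h>

void add_arrays_avx(const float* a, const float* b, float* c, int n) {
    for (int i = 0; i < n; i += 8) {
        __m256 va = _mm256_loadu_ps(a + i);              // load 8 floats of a
        __m256 vb = _mm256_loadu_ps(b + i);              // load 8 floats of b
        _mm256_storeu_ps(c + i, _mm256_add_ps(va, vb));  // store a + b
    }
}
```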

If you wanted that, you could use OpenCL, DirectCompute, OpenMP, etc. But it's a classic throughput vs. latency tradeoff and needs to be taken into account & managed during application design and development.
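
For comparison, here's a minimal sketch of what opting into offload looks like with OpenMP target directives (OpenMP 4.5+). Whether a particular compiler/driver/iGPU combination actually supports this path is an assumption here; the point is that the programmer explicitly trades launch and data-transfer latency for throughput.

```cpp
// Minimal sketch of explicit offload via OpenMP target directives.
// Compiler/runtime/iGPU support for this path is an assumption here;
// without an offload device it simply falls back to running on the host.
void add_arrays_offload(const float* a, const float* b, float* c, int n) {
    #pragma omp target teams distribute parallel for \
        map(to: a[0:n], b[0:n]) map(from: c[0:n])
    for (int i = 0; i < n; ++i) {
        c[i] = a[i] + b[i];   // runs on the offload device when available
    }
}
```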

From a software perspective, I quite like their HD Graphics. In contrast to AMD/Nvidia, Intel chose to make their execution units more numerous but with narrow SIMD pipes. An easy way they could scale up performance would be simply to double their SIMD width, although they'd need more bandwidth, as they're already memory-bottlenecked. Since wider SIMD would also increase the number of stalls, they'd probably have to increase their SMT count, which is another area where they lag behind.
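
Here's a back-of-envelope sketch of why doubling SIMD width runs straight into the bandwidth wall. The EU count, clock, and arithmetic intensity are illustrative assumptions, not any specific part's real numbers.

```cpp
// Back-of-envelope sketch of why doubling SIMD width raises bandwidth
// needs. EU count, clock, lanes per EU, and arithmetic intensity are
// illustrative assumptions, not any specific Intel part's real numbers.
#include <cstdio>

int main() {
    const double eus            = 24;    // assumed execution units (GT2-class)
    const double clock_ghz      = 1.1;   // assumed GPU clock
    const double flops_per_lane = 2;     // FMA = 2 FLOPs per lane per cycle
    const double flops_per_byte = 4.0;   // assumed arithmetic intensity

    for (double lanes : {8.0, 16.0}) {   // current vs. doubled SIMD width
        double gflops = eus * lanes * flops_per_lane * clock_ghz;
        double gbps   = gflops / flops_per_byte;   // bandwidth to stay fed
        std::printf("%2.0f lanes/EU: %6.1f GFLOP/s peak -> needs ~%5.1f GB/s\n",
                    lanes, gflops, gbps);
    }
    return 0;
}
```

Even at the narrower width, the bandwidth needed to keep the EUs fed under those assumptions is already well beyond what a dual-channel DDR4 memory controller delivers, which is the memory bottleneck in question.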

Of course, these two changes would have a significant impact on their iGPU's die area and power requirements, which probably goes a long way toward explaining why they haven't done it. It's much more natural, if ultimately less efficient, for them to scale by adding more execution units.
 