News TSMC's N3 And Tiled Design Could Enable Monstrous Intel iGPUs

The problem with pushing higher-performing iGPUs, though, is that the memory subsystem has to keep up; otherwise VRAM has to be baked into the part. For example, the RTX 3050 has a memory bandwidth of 224 GB/sec. If we want something similar on an iGPU with DDR5-6400 (rated for 51.2 GB/sec per channel), you'd need at least a quad-channel memory subsystem.

Not to mention the additional power/heat/etc.
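A quick back-of-the-envelope in Python to show the channel math (assuming DDR5-6400 works out to 51.2 GB/s on a 64-bit channel, and taking the RTX 3050's 224 GB/s as the target):

# Rough channel count needed for an iGPU to match an RTX 3050's bandwidth.
import math

target_bw = 224.0                  # GB/s, RTX 3050 memory bandwidth
per_channel = 6400e6 * 8 / 1e9     # 51.2 GB/s per 64-bit DDR5-6400 channel

print(math.ceil(target_bw / per_channel))               # 5 channels to fully match
print(f"{4 * per_channel:.1f} GB/s from quad-channel")  # 204.8 GB/s, ~91% of target

Strictly matching 224 GB/s would take five channels; quad-channel gets about 91% of the way there, hence "at least" quad-channel.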
 
Not as simple as adding a 16GB DDR5 module on the side of the CPU socket, is it? Quad-channel memory in a desktop without a gigantic price premium is something we have not seen move into the mainstream yet. I honestly don't know why, but I would assume the cost of the motherboard and of the CPU's memory subsystem development is not easily added cost-effectively. Now, with the DDR5 generation having the memory controller on the sticks themselves, does that eliminate some of that issue?
 

InvalidError

Titan
Moderator
The problem with pushing higher-performing iGPUs, though, is that the memory subsystem has to keep up; otherwise VRAM has to be baked into the part.
Not a problem: Intel can simply reuse the same memory tiles they are using in Ponte Vecchio or some future iteration thereof. An easy 4-8GB of VRAM with 500+GB/s of bandwidth there.

Now with the DDR5 generation having the memory controller on the sticks themselves does that eliminate some of that issue?
Huh? No. The memory controller is still in the CPU. What has moved from the motherboard to the DIMMs is the memory VRM. The other new thing with DDR5 is that DIMMs also have a command buffer chip, so the CPU's memory controller only needs to drive one chip per DIMM for commands instead of driving the fan-out to every memory chip, reducing the bus load by up to 16X. The extra buffer chip is part of the reason for DDR5's increased latency.
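A toy sketch of that fan-out arithmetic in Python (assuming a hypothetical fully populated DIMM with 16 DRAM chips, which is where the "up to 16X" comes from):

# Command/address bus loads the memory controller has to drive per DIMM.
chips_per_dimm = 16          # assumption: a fully populated DIMM
unbuffered = chips_per_dimm  # unbuffered style: commands fan out to every chip
buffered = 1                 # DDR5 style: one on-DIMM command buffer chip
print(f"{unbuffered // buffered}X less load on the command bus")  # 16X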
 
Not a problem: Intel can simply reuse the same memory tiles they are using in Ponte Vecchio or some future iteration thereof. An easy 4-8GB of VRAM with 500+GB/s of bandwidth there.
Cue developers or somebody whining about how they can't access that for CPU-related stuff because they paired the part up with a high-end GPU and they want "super ultra fast RAM".
 

spongiemaster

Admirable
I long for the day that I can put in a CPU with integrated graphics that gives mid-tier or higher performance relative to the previous gen's discrete cards. That would surely light a fire under AMD's and Nvidia's butts on the low end.
Judging by the current market, the only thing such a move would persuade AMD and Nvidia to do is abandon that part of the market completely and focus on releasing even more expensive tiers at the top to see how far they can push it. They'd probably be happy to not have to release a stripped-down sub-$300 GPU and instead send all their silicon to high-end GPUs and enterprise products.
 
Judging by the current market, the only thing such a move would persuade AMD and Nvidia to do is abandon that part of the market completely and focus on releasing even more expensive tiers at the top to see how far they can push it. They'd probably be happy to not have to release a stripped-down sub-$300 GPU and instead send all their silicon to high-end GPUs and enterprise products.
No, I think they would have to unload their not-up-to-snuff silicon into the low-end market regardless; it would just see increased competition and thus lower pricing to match.
 

InvalidError

Titan
Moderator
Cue developers or somebody whining about how they can't access that for CPU-related stuff because they paired the part up with a high-end GPU and they want "super ultra fast RAM".
CPUs don't generally need all that much memory bandwidth since each core has a much larger amount of L2/L3 cache to work with than GPUs do. If your algorithm has no data locality (cannot work from cache) then you get screwed by memory latency no matter how much memory bandwidth you may have.
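A toy illustration of the locality point, sketched in Python (timings are machine-dependent and Python adds overhead; a compiled-language version would show the gap even more starkly):

# Sum the same buffer sequentially vs. in a shuffled order. Same bytes
# touched either way, but the random walk defeats caches and prefetching,
# so it is dominated by memory latency rather than bandwidth.
import random
import time
from array import array

n = 10_000_000
data = array('q', range(n))
order = list(range(n))
random.shuffle(order)

t0 = time.perf_counter()
total = sum(data[i] for i in range(n))   # sequential: cache/prefetch friendly
t_seq = time.perf_counter() - t0

t0 = time.perf_counter()
total = sum(data[i] for i in order)      # random: frequent cache misses
t_rand = time.perf_counter() - t0

print(f"sequential {t_seq:.2f}s vs random {t_rand:.2f}s")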
 
Not a problem: Intel can simply reuse the same memory tiles they are using in Ponte Vecchio or some future iteration thereof. An easy 4-8GB of VRAM with 500+GB/s of bandwidth there.
Waaaaay too expensive for normal desktop iGPUs.
Maybe they'll make a few mobile models to replace the one with the AMD GPU tacked on, but even then it's probably too expensive to make a lot of sense.
 
No, I think they would have to unload their not-up-to-snuff silicon into the low-end market regardless; it would just see increased competition and thus lower pricing to match.
It depends; packaging and the whole logistics of making SKUs out of failed silicon might not be worth it, especially if they have to go super low on price.
 

InvalidError

Titan
Moderator
Waaaaay too expensive for normal desktop iGPUs.
DRAM is DRAM. There isn't much intrinsic cost difference between memory interfaces beyond the volume available to amortize the up-front costs of a semi-custom chip, and Intel already eats those up-front costs for its high-end stuff like Ponte Vecchio this generation. If HBM achieved the same volumes as GDDR6, eliminating the BGA packaging costs would likely make HBM cheaper long-term.
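A quick sketch of that amortization argument in Python (all numbers are made up; only the shape of the curve matters):

# Effective unit cost = recurring per-unit cost + up-front (NRE) cost
# spread over volume. At high volume the NRE term stops mattering.
def unit_cost(nre, volume, recurring):
    return recurring + nre / volume

print(unit_cost(nre=50e6, volume=1e6, recurring=8.0))    # 58.0 at low volume
print(unit_cost(nre=50e6, volume=50e6, recurring=8.0))   # 9.0 at high volume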
 

watzupken

Reputable
I feel these high-end iGPU options will not be cost-effective for budget gamers, and not meaningful for high-end rigs. My opinion is that if you are going to spend that much on a high-end chip that comes with a high-end iGPU, chances are you will have a high-end dGPU in the first place. This applies to AMD as well. On the other hand, given the expensive TSMC N3, I doubt Intel will sell the top-end configuration cheap. Look at the mobile i9 processors with the top-end Xe graphics, and you know you are paying quite a substantial premium for it.

I feel Intel may be trying to impress Apple with this more than anything.
 

InvalidError

Titan
Moderator
I feel these high-end iGPU options will not be cost-effective for budget gamers, and not meaningful for high-end rigs.
All signs point toward next-gen GPUs' MSRPs going up 30+%, which means few if any options below $500. With AMD and Nvidia abandoning the sub-$400 market, paying $200 extra for a current-60/future-50-tier IGP won't sound half as bad when that is your only remotely affordable option for decent graphics.

Budget PC gaming by older pricing standards is likely dead.
 