Intel's Arrow Lake expected to offer serious graphics oomph.
TSMC's N3 And Tiled Design Could Enable Monstrous Intel iGPUs : Read more
Not as simple as adding a 16GB stick of DDR6 on the side of the CPU socket, is it? Quad-channel memory on the desktop without a gigantic price premium is something we still haven't seen move into the mainstream. I honestly don't know why, but I assume the extra cost on the motherboard and in the CPU's memory subsystem isn't easy to add cost-effectively. Now, with the DDR5 generation having the memory controller on the sticks themselves, does that eliminate some of that issue?

Quote: The problem with pushing higher performing iGPUs though is that the memory subsystem has to keep up, otherwise VRAM has to be baked into the part. For example, the RTX 3050 has a memory bandwidth of 224 GB/sec. If we want something similar on an iGPU, with DDR5-6400 (rated for 51.2 GB/sec), you'd need at least a quad-channel memory subsystem.
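A quick back-of-the-envelope sketch of the figures in that quote, assuming a 64-bit data path per DDR5 channel (real DDR5 channels are split into two 32-bit subchannels, but the total width is the same):

```python
import math

def ddr5_channel_bandwidth_gbs(transfer_rate_mt_s: int, bus_width_bits: int = 64) -> float:
    """Peak bandwidth of one DDR5 channel in GB/s."""
    # MT/s * bytes per transfer, converted from MB/s to GB/s
    return transfer_rate_mt_s * (bus_width_bits / 8) / 1000

target_gbs = 224.0                              # RTX 3050 memory bandwidth from the quote
per_channel = ddr5_channel_bandwidth_gbs(6400)  # 51.2 GB/s, matching the quoted figure
channels_needed = math.ceil(target_gbs / per_channel)

print(f"{per_channel:.1f} GB/s per channel -> {channels_needed} channels to reach {target_gbs} GB/s")
# 51.2 GB/s per channel -> 5 channels to reach 224.0 GB/s
```

Strictly matching the RTX 3050 would take five DDR5-6400 channels, which is why "at least quad-channel" is the floor, not the finish line.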
Not to mention the additional power/heat/etc.
Not a problem: Intel can simply reuse the same memory tiles they are using in Ponte Vecchio, or some future iteration thereof. An easy 4-8GB of VRAM with 500+ GB/s of bandwidth there.

Quote: The problem with pushing higher performing iGPUs though is that the memory subsystem has to keep up, otherwise VRAM has to be baked into the part.
Huh? No. The memory controller is still in the CPU. What has moved from the motherboard to the DIMMs is the memory VRM. The other new thing with DDR5 is that DIMMs also have a command buffer chip, so the CPU's memory controller only needs to drive one chip per DIMM for commands instead of driving the fan-out to every memory chip, reducing the bus load by up to 16X. The extra buffer chip is part of the reason for DDR5's increased latency.

Quote: Now, with the DDR5 generation having the memory controller on the sticks themselves, does that eliminate some of that issue?
Ahh, I was mistaken; the shiny new RAM version moved some stuff around and I must have misread that part.

Quote: Huh? No. The memory controller is still in the CPU. What has moved from the motherboard to the DIMMs is the memory VRM.
Cue developers or somebody whining about how they can't access that for CPU-related stuff because they paired the part up with a high-end GPU and they want "super ultra fast RAM".

Quote: Not a problem: Intel can simply reuse the same memory tiles they are using in Ponte Vecchio, or some future iteration thereof. An easy 4-8GB of VRAM with 500+ GB/s of bandwidth there.
Judging by the current market, the only thing such a move would persuade AMD and Nvidia to do is abandon that part of the market completely and focus on releasing even more expensive tiers at the top to see how far they can push it. They'd probably be happy not to have to release a stripped-down sub-$300 GPU and instead send all their silicon to high-end GPUs and enterprise products.

Quote: I long for the day that I can put in a CPU with integrated graphics that gives mid-tier or higher performance from the previous gen of discrete cards. That would surely light a fire under AMD and Nvidia's butts on the low end.
No, I think they would have to unload their not-up-to-snuff silicon into the low-end market regardless; it would just see increased competition and thus lower pricing to match.

Quote: Judging by the current market, the only thing such a move would persuade AMD and Nvidia to do is abandon that part of the market completely and focus on releasing even more expensive tiers at the top to see how far they can push it. They'd probably be happy not to have to release a stripped-down sub-$300 GPU and instead send all their silicon to high-end GPUs and enterprise products.
CPUs don't generally need all that much memory bandwidth, since each core has a much larger amount of L2/L3 cache to work with than GPUs do. If your algorithm has no data locality (cannot work from cache), then you get screwed by memory latency no matter how much memory bandwidth you may have.

Quote: Cue developers or somebody whining about how they can't access that for CPU-related stuff because they paired the part up with a high-end GPU and they want "super ultra fast RAM".
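A rough illustration of that data-locality point: summing the same elements is far slower when the access pattern defeats the caches and hardware prefetchers. Exact timings depend on the machine, and the random gather also pays for an extra temporary array, so treat this as a sketch rather than a rigorous benchmark.

```python
import time
import numpy as np

N = 20_000_000
data = np.arange(N, dtype=np.int64)
random_idx = np.random.permutation(N)    # cache-hostile access order

t0 = time.perf_counter()
seq_sum = data.sum()                     # contiguous, prefetch-friendly reads
t1 = time.perf_counter()
rnd_sum = data[random_idx].sum()         # same elements, scattered reads
t2 = time.perf_counter()

assert seq_sum == rnd_sum                # identical work, very different timings
print(f"sequential: {t1 - t0:.3f}s   random gather: {t2 - t1:.3f}s")
```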
Waaaaay too expensive for normal desktop iGPUs.

Quote: Not a problem: Intel can simply reuse the same memory tiles they are using in Ponte Vecchio, or some future iteration thereof. An easy 4-8GB of VRAM with 500+ GB/s of bandwidth there.
It depends; the packaging and the whole logistics of making SKUs out of failed silicon might not be worth it, especially if they have to go super low on price.

Quote: No, I think they would have to unload their not-up-to-snuff silicon into the low-end market regardless; it would just see increased competition and thus lower pricing to match.
DRAM is DRAM. There isn't much intrinsic cost difference between memory interfaces besides the volume needed to amortize the up-front costs of a semi-custom chip, and Intel already eats those up-front costs for its high-end stuff like Ponte Vecchio this generation. If HBM achieved the same volumes as GDDR6, eliminating the BGA packaging costs would likely make HBM cheaper long-term.

Quote: Waaaaay too expensive for normal desktop iGPUs.
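A toy sketch of that amortization argument, using made-up placeholder numbers (real NRE and per-unit DRAM costs aren't public): the fixed up-front cost nearly vanishes from the per-unit price once volumes approach GDDR6-class scale.

```python
def amortized_unit_cost(nre_dollars: float, unit_cost: float, units_shipped: int) -> float:
    """Effective per-unit cost once the fixed up-front (NRE) spend is spread over volume."""
    return unit_cost + nre_dollars / units_shipped

# Hypothetical figures purely for illustration: $50M NRE, $20 per-unit cost.
for volume in (100_000, 1_000_000, 10_000_000, 100_000_000):
    print(f"{volume:>11,} units -> ${amortized_unit_cost(50e6, 20.0, volume):.2f} each")
# At the largest volume, the $50M up-front cost adds only $0.50 per unit.
```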
Where have you been the last year and a half? Like I said, look at the current market. Nobody is unloading anything in the low end.

Quote: No, I think they would have to unload their not-up-to-snuff silicon into the low-end market regardless; it would just see increased competition and thus lower pricing to match.
The low-end pricing changed; "low end" refers to the performance, not the price.

Quote: Where have you been the last year and a half? Like I said, look at the current market. Nobody is unloading anything in the low end.
All signs point toward next-gen GPUs' MSRPs going up 30+%, which means few if any options below $500. With AMD and Nvidia abandoning the sub-$400 market, paying $200 extra for a current-60/future-50 tier IGP won't sound half as bad when that is your only remotely affordable option for decent graphics.

Quote: I feel these high-end iGPU options will not be cost-effective for budget gamers, and not meaningful for high-end rigs.