"And they said they want to monetize the division, which means consumer dGPUs."
It could also translate into bigger IGPs, especially once we get into tile-based CPUs where Intel could have a GPU tile with 4-8GB of HBM stacked directly onto it.
"...translate into bigger IGPs..."
Meteor Lake is initially getting a nice bump (1/3 more): either 128 or 64 EUs, compared to the 96/80/48/32 EU configurations on Alder Lake/Tiger Lake (Arrow Lake is rumored to have 192 EUs on its top iGPU, so a 50% bump generation to generation).
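As a quick sanity check on those percentages, here's a rough sketch using only the rumored EU counts mentioned above (nothing official):

```python
# Rough arithmetic on the rumored EU counts quoted above (not confirmed specs).
configs = [
    ("Tiger/Alder Lake top iGPU", 96),
    ("Meteor Lake top iGPU (rumored)", 128),
    ("Arrow Lake top iGPU (rumored)", 192),
]

# Compare each generation's top iGPU EU count with the previous one.
for (prev_name, prev_eu), (name, eu) in zip(configs, configs[1:]):
    bump = (eu / prev_eu - 1) * 100
    print(f"{prev_name} -> {name}: +{bump:.0f}% EUs")

# 96 -> 128 EUs: +33%  (the "1/3 more")
# 128 -> 192 EUs: +50%  (the "50% bump generation to generation")
```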
"Meteor Lake is initially getting a nice bump (1/3 more): either 128 or 64 EUs, compared to the 96/80/48/32 EU configurations on Alder Lake/Tiger Lake (Arrow Lake is rumored to have 192 EUs on its top iGPU, so a 50% bump generation to generation)."
192-256 EU is the range I had in mind: enough to put dGPUs in the $150-250 range to rest.
With it being a separate chip, hopefully it will be much easier to iterate on the iGPU.
"192-256 EU is the range I had in mind: enough to put dGPUs in the $150-250 range to rest."
I would love it if Intel went after that market with a line of CPUs similar to AMD's current desktop APUs. I'm hoping the tile design works out for them, because that seems like the key to having better IGPs for desktop.
"HBM is too expensive for dGPUs, never mind integrated. That's why it doesn't exist except for Nvidia workstation GPUs."
The main reason "HBM is expensive" is due to low volume. From a pure manufacturing standpoint, HBM is still fundamentally the same as normal memory from the last 20 years, apart from the TSV stack connections. If HBM were made in the same volume as DDR4/5, the cost per GB would end up being similar, especially once the base die gets eliminated by stacking HBM directly on top of an MCD/IOD/GCD/CCD.
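To put some toy numbers behind the volume argument (every figure below is made up purely for illustration; none come from this thread or any real cost data), here's a minimal amortization sketch: if the per-GB fab cost is commodity-like and only the fixed tooling/engineering spend differs, the per-GB premium is mostly a volume effect.

```python
# Toy cost model with entirely hypothetical numbers, just to show how
# fixed costs amortize with volume. Real HBM/DDR economics are more complex.
def cost_per_gb(variable_cost_per_gb, fixed_costs, gigabytes_shipped):
    """Per-GB cost = per-GB manufacturing cost + fixed costs spread over volume."""
    return variable_cost_per_gb + fixed_costs / gigabytes_shipped

VARIABLE = 2.00        # hypothetical $/GB to actually fab and stack the dies
FIXED = 500_000_000    # hypothetical fixed tooling/engineering cost, in $

for volume_gb in (50e6, 500e6, 5e9):  # niche volume vs. DDR-like shipping volume
    print(f"{volume_gb:>14,.0f} GB shipped -> ${cost_per_gb(VARIABLE, FIXED, volume_gb):.2f}/GB")

# As volume approaches commodity-DRAM levels, the per-GB cost converges
# toward the variable cost alone -- the "low volume" premium fades.
```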
Can you explain why you would argue that they are diverging?
"Also what their competitors are doing. AMD has a completely different accelerator line. Nvidia less so, but they still make specialized data center GPUs with their latest architectures."
No, Nvidia's 100-series specializes in the datacenter just as much as AMD's CDNA does.
"ML heavily favors tensor cores since it uses a lot of large matrix math between inputs and input weights, while gaming heavily favors conventional shaders to calculate the base color of stuff before feeding it into lighting/RT. Although the two may reuse many of the same fundamental building blocks, they scale in nearly opposite directions."
CDNA products omit the fixed-function hardware blocks needed for graphics.
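To make that contrast concrete, here's a minimal sketch (the shapes and the toy shade() function are purely illustrative, not from any real engine or framework): an ML layer is dominated by one large inputs-times-weights matrix multiply, which is exactly the shape tensor cores target, while shading is closer to a small independent computation repeated per pixel.

```python
import numpy as np

# ML-style work: one large matrix multiply between activations and weights --
# the dense math that tensor cores are built to accelerate.
inputs = np.random.rand(1024, 4096)    # batch of activations (illustrative shape)
weights = np.random.rand(4096, 4096)   # layer weights (illustrative shape)
layer_out = inputs @ weights           # one huge matmul dominates the runtime

# Gaming-style work: lots of small, independent per-pixel computations,
# which is what conventional shader ALUs spend most of their time on.
def shade(albedo, light_intensity):
    # Toy "shader": compute a base color per pixel before lighting/RT steps.
    return np.clip(albedo * light_intensity, 0.0, 1.0)

albedo = np.random.rand(1080, 1920, 3)     # per-pixel material color
lighting = np.random.rand(1080, 1920, 1)   # per-pixel light term
frame = shade(albedo, lighting)            # elementwise work, no big matmul
```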
"Intel will likely retain some common core between the two groups to avoid duplicating too much work, or have one side borrow from the other where applicable."
Intel has long shared the IP of their CPU cores (and certainly other blocks) between their client & datacenter groups. So, presumably they know how to manage that sort of thing.
"HBM is too expensive for dGPUs, never mind integrated. That's why it doesn't exist except for Nvidia workstation GPUs."
Radeon VII launched with 16 GB (4 stacks!) and was apparently profitable at an MSRP of $700. You could even buy them at that price for most of 2019 (they were phased out by the end of that year).
"Also, they literally failed with that, the Kaby Lake-G."
It was a weird product. Done partly to demonstrate EMIB, and perhaps to test the waters for an iGPU/tGPU above their GT3/GT4 tier?
"Intel is setting up a roadmap for selling off consumer graphics."
Makes no sense, since the article said the core GPU IP is going to stay in the DCAI group. Plus, who would even buy it, especially if it's failing to gain market traction? It'd take quite some self-confidence to think you can succeed where Intel failed.
The most likely buyer I can foresee is a Chinese company, just looking to gain access to GPU patents and a toe-hold in the dGPU market. However, I doubt such an acquisition will be allowed. I don't imagine any other Western company would do it. Maybe someone in India?
"Well, selling it would mean freeing up cash for things like tensor-like cores, which are much more profitable in AI accelerators."
Huh? Those already exist in both the Arc and Data Center Max (Ponte Vecchio) GPUs.
"The main reason 'HBM is expensive' is due to low volume. From a pure manufacturing standpoint, ..."
TSVs will be in just about everything in due time - putting holes in silicon to stack things vertically should eventually be cheaper than manufacturing a large substrate and a bunch of interposer dies to put things side by side. For some stuff like large caches, going vertical may also be the only way to go due to strict latency constraints, as is the case with AMD's V-cache.
"Due to low volume"
"Apart from TSVs"
And they are very, very important reasons. Many technologies fail due to the lack of volume. And yes it's a catch-22, but a very real world problem.
"AMD did terribly with Radeon VII. After that, they abandoned the use of HBM in anything."
LOL, no. Vega 20 wasn't primarily designed to be a gaming GPU. They only sold it as one because they saw a market opportunity to do so.
"TSVs will be in just about everything in due time - putting holes in silicon to stack things vertically should eventually be cheaper than manufacturing a large substrate and a bunch of interposer dies to put things side by side."
Do 3D NAND chips in SSDs use TSVs, or some other means of die-stacking?
"Do 3D NAND chips in SSDs use TSVs, or some other means of die-stacking?"
The ones I have seen simply put all the connections on one or two die edges, flip the dies pads-up with offset edges, and then wire-bond them.
BTW, pretty soon, DDR5 chips/DIMMs will even contain stacked dies.