News: Intel Re-Orgs AXG Graphics Group, Raja Koduri Moves Back to Chief Architect Role



husker

Distinguished
If Intel is in it for the long haul, then it could make sense to split the division so that they can clearly show where the money is being made and lost. The goal is not to hide losses from shareholders; although in some circumstances that might make sense, in this case the shareholders would see right through it. Instead, I think they are preparing to do battle and want to get all their pieces properly in place. Obviously they will accept certain losses in the short term in order to win market share in the long term, but they don't want to drag down an entire division to do it: cue the re-org. This industry is not for the faint of heart, and Intel knows that. Their slogan may be "Intel Inside," but we will now find out what's inside Intel.
 

DavidC1

Distinguished
According to the latest analyst report, Intel already has at least half the volume share AMD does. Yes, AMD's share is pathetic at 8%, but Intel took 4% from them and Nvidia took a similar percentage. So AMD's share prior to that was 15-16%.

So they do have mindshare, much better than AMD's. Or the whole market has been angry in general, and AMD having much less clout than Nvidia meant would-be AMD buyers went to Intel.

It could also translate into bigger IGPs, especially once we get into tile-based CPUs where Intel could have a GPU tile with 4-8GB of HBM stacked directly onto it.

HBM is too expensive for dGPUs, never mind integrated. That's why it doesn't exist except for Nvidia workstation GPUs.

Also, they literally failed with that: Kaby Lake-G.
 

InvalidError

Titan
Moderator
HBM is too expensive for dGPUs, never mind integrated. That's why it doesn't exist except for Nvidia workstation GPUs.
The main reason "HBM is expensive" is due to low volume. From a pure manufacturing standpoint, HBM is still fundamentally the same as normal memory from the last 20 years apart from the TSV stack connections. If HBM were made in the same volume as DDR4/5, the cost per GB would end up being similar, especially when the base die gets eliminated by stacking HBM directly on top of an MCD/IOD/GCD/CCD.

BTW, AMD also uses HBM in its MI series datacenter/HPC/ML GPUs, 128GB of it on the MI250.
 

bit_user

Polypheme
Ambassador
Intel is setting up a roadmap for selling off consumer graphics.
Makes no sense, since the article said the core GPU IP is going to stay in the DCAI group. Plus, who would even buy it, especially if it's failing to gain market traction? It'd take quite some self-confidence to think you can succeed where Intel failed.

The most likely buyer I can foresee is a Chinese company, just looking to gain access to GPU patents and a toe-hold in the dGPU market. However, I doubt such an acquisition will be allowed. I don't imagine any other Western company would do it. Maybe someone in India?
 

bit_user

Polypheme
Ambassador
Can you explain why you would argue that they are diverging?
Also what their competitors are doing. AMD has a completely different accelerator line. Nvidia less so, but they still make specialized data center GPUs with their latest architectures.
No, Nvidia's 100-series specializes in the datacenter just as much as AMD's CDNA does.

However, Nvidia also has a secondary server GPU business, where they sell passively-cooled boards with no video outputs that simply contain the same GPUs as the consumer products. These are used for things like cloud-based gaming and desktop hosting, as well as transcoding and AI inference.

ML heavily favors tensor cores since it uses a lot of large matrix math between inputs and input weights, while gaming heavily favors conventional shaders to calculate the base color of stuff before feeding it into lighting/RT. Although the two may reuse many of the same fundamental building blocks, they scale in nearly opposite directions.
CDNA products omit the following fixed-function hardware blocks needed for graphics:
  • Texture engines
  • ROPs
  • Tessellation engines
  • RT: ray-poly intersection + BVH construction & traversal
  • Display controller
  • Video encoder
That stuff adds up!

Instead, they have more capable matrix cores and high-throughput fp64. At the ISA level, they still use GCN-style Wave64, instead of Wave32. Not sure if it's still implemented as a 4-way barrel processor, like in GCN. CDNA also uses HBM, instead of GDDR.
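
To make the quoted tensor-vs-shader point a bit more concrete, here is a minimal NumPy sketch (layer sizes are hypothetical, not tied to any particular GPU) of the inputs-times-weights matrix product that dominates ML workloads and that matrix/tensor cores are built to accelerate:

```python
import numpy as np

# Hypothetical layer sizes, purely for illustration.
batch, in_features, out_features = 64, 1024, 4096

activations = np.random.rand(batch, in_features).astype(np.float32)     # inputs
weights = np.random.rand(in_features, out_features).astype(np.float32)  # input weights

# One dense layer is essentially one large matrix multiply: every output
# element is a long dot product of inputs against a column of weights.
# That regular, massively parallel multiply-accumulate pattern is what
# matrix/tensor cores target; per-pixel shader code has no such structure.
outputs = activations @ weights   # shape: (batch, out_features)
print(outputs.shape)
```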

Intel will likely retain some common core between the two groups to avoid duplicating too much work or have one side borrow from the other where applicable.
Intel has long shared the IP of their CPU cores (and certainly other blocks) between their client & datacenter groups. So, presumably they know how to manage that sort of thing.

Given that the consumer CPUs always seem to be the first to launch with any new core microarchitecture, I had assumed the cores were being designed in the client group. Not sure if that's true, but it seems clear that at least the GPU cores are now going to come from the datacenter group.
 

bit_user

Polypheme
Ambassador
HBM is too expensive for dGPUs, never mind integrated. That's why it doesn't exist except for Nvidia workstation GPUs.
Radeon VII launched with 16 GB (4 stacks!) and was apparently profitable at an MSRP of $700. You could even buy them at that price for most of 2019 (they were phased out by the end of that year).

Also, they literally failed with that: Kaby Lake-G.
It was a weird product, done partly to demonstrate EMIB, and perhaps to test the waters for an iGPU/tGPU above their GT3/GT4 tier?
 
Makes no sense, since the article said the core GPU IP is going to stay in the DCAI group. Plus, who would even buy it, especially if it's failing to gain market traction? It'd take quite some self-confidence to think you can succeed where Intel failed.

The most likely buyer I can foresee is a Chinese company, just looking to gain access to GPU patents and a toe-hold in the dGPU market. However, I doubt such an acquisition will be allowed. I don't imagine any other Western company would do it. Maybe someone in India?

Well, selling it would mean freeing up cash for things like tensor-like cores, which are much more profitable in AI accelerators.

I figure Intel will cross-license its GPU IP. And I agree with you that some Chinese firm will buy it up (provided sanctions are not in play). Look at what happened with S3/PowerVR. Their GPU tech is woefully behind.

BTW: thanks for the earlier explanation of the packed FMA ops. I thought the packed reduced-precision mode applied to all vector ops. You were right that it only affects the AI ops.
 

DavidC1

Distinguished
The main reason "HBM is expensive" is due to low volume. From a pure manufacturing standpoint,

"Due to low volume"
"Apart from TSVs"

And they are very, very important reasons. Many technologies fail due to a lack of volume. And yes, it's a catch-22, but a very real-world problem.

AMD did terribly with Radeon VII. After that, they abandoned the use of HBM in anything.
 

InvalidError

Titan
Moderator
"Due to low volume"
"Apart from TSVs"

And they are very, very important reasons. Many technologies fail due to a lack of volume. And yes, it's a catch-22, but a very real-world problem.
TSVs will be in just about everything in due time - putting holes in silicon to stack things vertically should eventually be cheaper than manufacturing a large substrate and a bunch of interposer dies to put things side by side. For some stuff like large caches, going vertical may also be the only way to go due to strict latency constraints, as is the case with AMD's V-Cache.
 

bit_user

Polypheme
Ambassador
AMD did terribly with Radeon VII. After that, they abandoned the use of HBM in anything.
LOL, no. Vega 20 wasn't primarily designed to be a gaming GPU. They only sold it as one because they saw a market opportunity to do so.

And no, AMD didn't stop using HBM. Both they and Nvidia have reserved it for their datacenter-grade products. In AMD's case, that means the CDNA-architecture compute accelerators, branded as the MI series. It just seems that, purely for interactive graphics rendering, GDDR6 is adequate and presumably more cost-effective.
 

bit_user

Polypheme
Ambassador
TSVs will be in just about everything in due time - putting holes in silicon to stack things vertically should eventually be cheaper than manufacturing a large substrate and a bunch of interposer dies to put things side by side.
Do 3D NAND chips in SSDs use TSVs, or some other means of die-stacking?

BTW, pretty soon, DDR5 chips/DIMMs will even contain stacked dies.
 
I don't recall seeing anything, and a search didn't come up with specifics, but I believe Intel is only using EMIB rather than an interposer with SPR. Presumably this means adding HBM to a tiled architecture would be a minimal issue from a design standpoint. There's also the potential to stack it on top of one of the tiles, assuming there's height available. HBM becomes a whole different story when the only extra cost involved is the memory itself.
 

InvalidError

Titan
Moderator
Do 3D NAND chips in SSDs use TSVs, or some other means of die-stacking?

BTW, pretty soon, DDR5 chips/DIMMs will even contain stacked dies.
The ones I have seen simply put all the connections on one or two die edges, flip the dies pads-up with offset edges, and then wire-bond them.

As for DRAM, Samsung announced high-capacity GDDR6X modules containing four chips split between base and mezzanine interposers. I'd imagine the cost overheads of doing that are encroaching on TSV territory.
 
