News Intel Will Adopt 3D Stacked Cache for CPUs, Says CEO Pat Gelsinger

Status
Not open for further replies.

bit_user

Intel has already demonstrated a 3d cache solution in PVC, where they have 288MB of L2 SRAM on the two base tiles.
I think that's different in kind, because the cache is contained within those tiles. What's more interesting about AMD 3D V-Cache is that it augments L3 cache in the base chiplet! That's a neat trick, IMO.

AMD admittedly couldn't break up the GPU compute tiles due to the routing limitations of organic substrates.
They pointed out that it just didn't make sense, efficiency-wise. Both for cell scaling (i.e. cost) and efficiency reasons, the right place to put the chiplet boundary was between the compute die and the L3 + memory controllers.

Where I think RDNA3 might've misstepped is in its approach of having multiple L3 cache dies, rather than a unified L3. I wonder how much cheaper the 7900 XTX is with 7 chiplets than it would be with 2, as well as how much more performance it could've achieved with 2 (i.e. due to a unified L3).
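For a rough sense of why several small cache dies can beat one big one on cost, here's a toy calculation using a simple Poisson yield model. Every number below (defect density, die areas, cost per mm²) is a made-up illustration, not an actual AMD or TSMC figure:

```python
# Back-of-envelope chiplet cost comparison with a Poisson yield model.
# All inputs are illustrative assumptions, not real foundry numbers.
import math

def die_yield(area_mm2: float, defect_density_per_mm2: float) -> float:
    """Poisson yield model: fraction of defect-free dies at a given area."""
    return math.exp(-area_mm2 * defect_density_per_mm2)

def cost_per_good_die(area_mm2: float, defect_density: float,
                      cost_per_mm2: float) -> float:
    """Silicon cost divided by yield: what a *working* die effectively costs."""
    return (area_mm2 * cost_per_mm2) / die_yield(area_mm2, defect_density)

D = 0.002  # assumed defects per mm^2
C = 0.10   # assumed cost per mm^2 (arbitrary units)

# Option A: six small ~37 mm^2 cache dies (roughly the 7900 XTX MCD layout)
a = 6 * cost_per_good_die(37, D, C)
# Option B: one monolithic ~222 mm^2 die of the same total area
b = 1 * cost_per_good_die(222, D, C)
print(f"six small dies: {a:.2f}  one big die: {b:.2f}")
```

The exponential in the yield model is the whole story: defects hurt a big die superlinearly, so splitting the same total area into small dies wins on cost — which has to be weighed against the latency and routing penalty of a partitioned L3.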
 

TJ Hooker

Took me 30 seconds to find this one, filed in Dec, 2012:
After a brief look, that patent doesn't seem to be about stacking memory dies on top of compute dies (or vice versa), though. It's about stacking memory dies on top of some sort of memory controller die. That patent would be applicable to HBM connected to the compute die via some other means, e.g. an interposer (which makes sense, given they were working on the first HBM GPUs around that time). It doesn't look like it'd be applicable to 3D V-Cache.

The person I responded to mentioned patents from 2019, so that's what I briefly searched for but didn't find anything.
 

richardvday

Let's not be overly partisan. Your point was adequately made through your corrections, rendering such generalizations unnecessary. Such attacks put people on the defensive and draw a line in the sand, often lowering the quality of the discourse (i.e. it's flame bait).

The core issue - and I don't think it's a partisan one - is people not fact-checking themselves, especially when making sweeping claims. If we simply make a point of trying to cite references or include specifics, then fact-checking tends to come as a byproduct (though, beware of confirmation bias).
I was directly comparing Intel to AMD, not every processor manufacturer, but I could have been clearer about that. He did bring up a few good points, regardless. I don't buy MIPS, HP, or IBM CPUs. Fanboy? I would dispute that. I only gave some observations from personal experience. I had a Pentium D; I don't just buy AMD, I buy whatever makes sense.
 
Die-stacked L3 cache is no gimmick, which is why Intel is treading down this same path.
The way AMD implemented it from the 5xxx series onward, it was and is a gimmick. They lose so much performance, cooling headroom, and ease of use (you need to disable a CCX), and they lost a lot of goodwill by releasing exploding CPUs, because their CPUs couldn't handle normal voltages anymore due to the cache.
It's a gimmick because it improves performance in a limited number of games, but the losses are very real.
 

DaveLTX

According to who? I can't find any reference to AMD designing chip on wafer. All the information points to TSMC:


"TSMC's CoW (Chip-on-Wafer) and WoW(Wafer-on-Wafer) technologies allow the stacking of both similar and dissimilar dies, greatly improving inter-chip interconnect density while reducing a product's form factor."

TSMC developed CoWoS in 2012. When did AMD develop their CoW?
https://www.youtube.com/watch?v=gNZfDtCcXNw


They were developing what turned into CoWoS all the way back in 2007(?) in partnership with TSMC...

The way AMD implemented it from the 5xxx series onward, it was and is a gimmick. They lose so much performance, cooling headroom, and ease of use (you need to disable a CCX), and they lost a lot of goodwill by releasing exploding CPUs, because their CPUs couldn't handle normal voltages anymore due to the cache.
It's a gimmick because it improves performance in a limited number of games, but the losses are very real.
Ok intel fanboy
 

TJ Hooker

So I started getting confused by the acronym soup that is packaging methods; there's a decent explanation of most of them here: https://www.anandtech.com/show/16051/3dfabric-the-home-for-tsmc-2-5d-and-3d-stacking-roadmap

A relevant bit is that CoWoS (Chip on Wafer on Substrate) is apparently not the same thing as CoW. CoWoS is typically a 2.5D technology, for connecting multiple chips that sit side by side. An example most of us are familiar with is on-package DRAM, where the memory is connected to the compute die through an interposer, with both bonded to the interposer via microbumps.

CoW seems to refer to direct, stacked, die to die interconnects using TSVs (through silicon vias). This is billed as more technically complex, but allowing for denser connections, lower thermal resistance, and higher efficiency.
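To put "denser connections" in perspective, here's a quick back-of-envelope using ballpark bond pitches. The pitch figures below are rough assumptions in line with publicly discussed numbers, not official TSMC specs:

```python
# Rough interconnect-density comparison: microbump attach (CoWoS-style)
# vs direct Cu-Cu hybrid bonding (CoW/SoIC-style). Pitches are assumed
# ballpark values, not official specifications.

def connections_per_mm2(pitch_um: float) -> float:
    """Connections per mm^2 for a square grid at the given bond pitch."""
    bonds_per_mm = 1000.0 / pitch_um  # bonds along 1 mm
    return bonds_per_mm ** 2

microbump = connections_per_mm2(40.0)  # assumed ~40 um microbump pitch
hybrid    = connections_per_mm2(9.0)   # assumed ~9 um hybrid-bond pitch
print(f"microbump: {microbump:.0f}/mm^2, "
      f"hybrid bond: {hybrid:.0f}/mm^2, "
      f"ratio ~{hybrid / microbump:.1f}x")
```

Since density scales with the inverse square of the pitch, even a modest pitch reduction buys an order-of-magnitude more die-to-die connections per unit area — which is presumably why TSVs plus hybrid bonding make something like V-Cache's wide, low-latency L3 link practical.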

One thing I'm not clear on is whether HBM memory stacks themselves would be considered CoW or CoWoS. It seems like the former? Maybe they treat memory on memory stacking differently than memory+logic stacking? Maybe they're making somewhat artificial distinctions between their packaging methods for the sake of marketing?
 
If it didn't provide tangible benefits, then you could call it a gimmick. The mere fact that it doesn't benefit all workloads doesn't make it a gimmick.
DLSS, Thread Director, X3D cache: these are all gimmicks to increase sales ("look at this shiny new thing"). So yes, they're gimmicks that actually do something... some of the time... instead of never doing anything, but they're still gimmicks to drive sales.
 