Die-stacked L3 cache is no gimmick, which is why Intel is treading down this same path.

Companies only care about having leading-edge technology if they need it as a gimmick to sell their product.
I think that's different in kind, because the cache is contained within those tiles. What's more interesting about AMD 3D V-Cache is that it augments L3 cache in the base chiplet! That's a neat trick, IMO.

Intel has already demonstrated a 3D cache solution in PVC, where they have 288 MB of L2 SRAM on the two base tiles.
They pointed out that it just didn't make sense, efficiency-wise. Both for cell scaling (i.e. cost) and efficiency reasons, the right place to put the chiplet boundary was between the compute die and the L3 + memory controllers.

AMD admittedly couldn't break down the GPU compute tiles due to the routing limitations of the organic substrates.
After a brief look, it doesn't seem that patent is about stacking memory dies on top of compute dies (or vice versa), though. It's about stacking memory dies on top of some sort of memory controller die. That patent would be applicable to HBM, connected to the compute die via some other means, e.g. an interposer (which makes sense, given they were working on the first HBM GPUs around that time). It doesn't look like it'd be applicable to 3D V-Cache.

Took me 30 seconds to find this one, filed in Dec. 2012:
US9170948B2 - Cache coherency using die-stacked memory device with logic die - Google Patents
A die-stacked memory device implements an integrated coherency manager to offload cache coherency protocol operations for the devices of a processing system. The die-stacked memory device includes a set of one or more stacked memory dies and a set of one or more logic dies.
I was directly comparing Intel to AMD, not every processor manufacturer, but I could have been clearer about that. He did bring up a few good points, regardless. I don't buy MIPS or HP or IBM CPUs. Fanboy? I would dispute that. I only gave some observations from personal experience. I had a Pentium D; I don't just buy AMD, I buy whatever makes sense.

Let's not be overly partisan. Your point was adequately made through your corrections, rendering such generalizations unnecessary. Such attacks put people on the defensive and draw a line in the sand, often lowering the quality of the discourse (i.e. it's flame bait).
The core issue - and I don't think it's a partisan one - is people not fact-checking themselves, especially when making sweeping claims. If we simply make a point of trying to cite references or include specifics, then fact-checking tends to come as a byproduct (though beware of confirmation bias).
The way AMD implemented it from the 5xxx series forward, it was and is a gimmick: they lose so much performance, cooling, and ease of use (need to disable a CCX), and they lost a lot of goodwill by releasing exploding CPUs because their CPUs couldn't handle normal voltages anymore due to the cache.

Die-stacked L3 cache is no gimmick, which is why Intel is treading down this same path.
According to whom? I can't find any reference to AMD designing Chip-on-Wafer. All the information points to TSMC:
"TSMC's CoW (Chip-on-Wafer) and WoW(Wafer-on-Wafer) technologies allow the stacking of both similar and dissimilar dies, greatly improving inter-chip interconnect density while reducing a product's form factor."
TSMC developed CoWoS in 2012. When did AMD develop their CoW?
Ok, Intel fanboy.

The way AMD implemented it from the 5xxx series forward, it was and is a gimmick: they lose so much performance, cooling, and ease of use (need to disable a CCX), and they lost a lot of goodwill by releasing exploding CPUs because their CPUs couldn't handle normal voltages anymore due to the cache.
It's a gimmick because it improves performance in a limited number of games, but the losses are very real.
If it didn't provide tangible benefits, then you could call it a gimmick. The mere fact that it doesn't benefit all workloads doesn't make it a gimmick.

The way AMD implemented it from the 5xxx series forward, it was and is a gimmick,
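As a rough illustration of why the extra L3 helps some workloads and not others: the win shows up when a hot working set spills out of the smaller cache but still fits in the bigger one. Below is a minimal pointer-chasing sketch in C I put together for this thread; the sizes, step counts, and names are arbitrary illustrative choices, not anything from AMD or Intel documentation.

/*
 * Hypothetical pointer-chasing sketch: average load latency vs. working-set
 * size. Nothing here is specific to any vendor's cache implementation.
 */
#define _POSIX_C_SOURCE 199309L
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Walk the permutation `steps` times; each load depends on the previous one,
 * so the loop is bound by memory latency rather than bandwidth. */
static double chase_ns_per_access(const size_t *ring, size_t steps)
{
    struct timespec t0, t1;
    size_t i = 0;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (size_t s = 0; s < steps; s++)
        i = ring[i];
    clock_gettime(CLOCK_MONOTONIC, &t1);

    if (i == (size_t)-1)            /* keep `i` live so the loop isn't elided */
        puts("unreachable");

    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
    return ns / (double)steps;
}

int main(void)
{
    /* Working sets from well inside a typical L3 to well outside it. */
    const size_t mib_sizes[] = { 8, 16, 32, 64, 128, 256 };

    for (size_t k = 0; k < sizeof(mib_sizes) / sizeof(mib_sizes[0]); k++) {
        size_t n = (mib_sizes[k] << 20) / sizeof(size_t);
        size_t *ring = malloc(n * sizeof(size_t));
        if (!ring)
            return 1;

        /* Random cyclic permutation (Sattolo's algorithm) so the hardware
         * prefetcher can't hide the latency of out-of-cache accesses. */
        for (size_t i = 0; i < n; i++)
            ring[i] = i;
        for (size_t i = n - 1; i > 0; i--) {
            size_t j = (size_t)rand() % i;
            size_t tmp = ring[i];
            ring[i] = ring[j];
            ring[j] = tmp;
        }

        printf("%4zu MiB: %.1f ns/access\n",
               mib_sizes[k], chase_ns_per_access(ring, 20u * 1000 * 1000));
        free(ring);
    }
    return 0;
}

The random cycle defeats the prefetcher, so the measured time per access roughly tracks cache/DRAM latency; it should step up once the working set no longer fits in L3. Workloads that sit near that threshold (some games included) are exactly the ones where a bigger stacked L3 pays off, and workloads that don't see little or nothing.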
That timeline still doesn't work. Fiji was released in 2015, 3 years after TSMC was already implementing CoW.

View: https://www.youtube.com/watch?v=gNZfDtCcXNw
They were developing what turned into CoWoS all the way back in 2007(?) in partnership with TSMC...
DLSS, Thread Director, X3D cache - these are all gimmicks to increase sales ("look at this shiny new thing"), so they are gimmicks that actually do something... some of the time... instead of never doing anything, but they are still gimmicks to drive sales.

If it didn't provide tangible benefits, then you could call it a gimmick. The mere fact that it doesn't benefit all workloads doesn't make it a gimmick.