News Intel Demos Meteor Lake CPU with On-Package LPDDR5X


abufrejoval

I stand by my statement "you give me GDDR or HBM at DRAM prices, and I'm ready to 'suffer' the consequences in terms of performance", if only because I'd sell those HBM and GDDR chips at a nice profit ;-)

I had seen and noted that Xbox sell-off board, and I remember there have been other "console PC" designs, e.g. in China, which used GDDR exclusively, evidently to simplify the design and because expandability wasn't part of it.

But this board has too many unknowns, right down to the number, specs, and interface of the chips, and then evidently they had to use a discrete GPU for the gaming tests.

And I'd hazard that even if GDDR latencies did indeed penalize code execution, a V-cache and the GPU's gains on the graphics side might largely compensate for that.

And then please remember that the original discussion was about HBM.

And there was a reason that Intel tried to fit the HMC alternative on their HPC designs and is now putting HBM on CPUs: they obviously don't do it to increase latencies.

You'd still find some HPC use case that won't fit that mold, but that's basically the hyperscaler credo: once a use case's scale is big enough, only a bespoke design will fit it!
 

bit_user

abufrejoval said:
I had seen and noted that Xbox sell-off board, and I remember there have been other "console PC" designs, e.g. in China, which used GDDR exclusively, evidently to simplify the design and because expandability wasn't part of it.

But this board has too many unknowns, right down to the number, specs, and interface of the chips, and then evidently they had to use a discrete GPU for the gaming tests.
It's very clearly a way to sell partially-defective Xbox chips, which is probably the main reason the iGPU is disabled.

abufrejoval said:
And I'd hazard that even if GDDR latencies did indeed penalize code execution, a V-cache and the GPU's gains on the graphics side might largely compensate for that.
Microsoft and Sony both seem to think so. Sony has used GDDR as unified system memory for going on 10 years now. Microsoft has done so since their Xbox One refresh in 2017. And that's even without 3D V-Cache.

abufrejoval said:
And then please remember that the original discussion was about HBM.
You broadened it to GDDR. I'm just responding to your point about GDDR. Please remember this article is about LPDDR5X, not HBM!

abufrejoval said:
And there was a reason that Intel tried to fit the HMC alternative on their HPC designs and is now putting HBM on CPUs: they obviously don't do it to increase latencies.
HBM has worse best-case latency than DDR5. The reason it's a win for them is similar to why GPUs prefer GDDR over regular DDR: the extra bandwidth becomes a lot more relevant under substantial loads.

People tend to fixate too much on best-case latency, but what matters is latency under load. If you have too little memory bandwidth for a given workload, then bandwidth becomes a bottleneck and the request queues fill up. The result is worse latency than if you had enough bandwidth to keep those queues mostly empty. This paper demonstrates the point quite well:
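As a rough back-of-the-envelope illustration of that queueing effect (every number below is made up for the sketch, not a measured value and not from the paper): if you model the memory controller as a simple M/M/1 queue with the unloaded best-case latency standing in for the service time, loaded latency grows roughly as idle_latency / (1 - utilization). So a wider, higher-latency interface running at low utilization can come out faster under load than a narrower, lower-latency one running near saturation.

Code:
# Toy M/M/1 illustration: effective memory latency vs. bandwidth utilization.
# All figures are assumptions for the sketch, not real DRAM/HBM specs.

def loaded_latency_ns(idle_latency_ns, utilization):
    """Average M/M/1 response time, scaled so an unloaded queue
    returns the idle (best-case) latency."""
    assert 0 <= utilization < 1, "queue is unstable at or above 100% utilization"
    return idle_latency_ns / (1.0 - utilization)

# Hypothetical best-case latencies: DDR-like vs. HBM-like (ns).
ddr_idle, hbm_idle = 80.0, 110.0

# Same request stream, but the wider HBM-like interface runs at lower utilization.
demand_gbs = 150.0                 # assumed workload bandwidth demand (GB/s)
ddr_bw, hbm_bw = 180.0, 800.0      # assumed peak bandwidths (GB/s)

print("DDR-like loaded latency:", loaded_latency_ns(ddr_idle, demand_gbs / ddr_bw), "ns")
print("HBM-like loaded latency:", loaded_latency_ns(hbm_idle, demand_gbs / hbm_bw), "ns")

With those assumed numbers, the DDR-like setup lands around 480 ns under load versus roughly 135 ns for the HBM-like one, even though the HBM-like interface has the worse best-case figure.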
 