News Intel HPC Roadmap: 800W Rialto Bridge GPU, Falcon Shores XPU, Ponte Vecchio with HBM Benchmarks

JamesJones44

Prominent
Jan 22, 2021
Interesting that the power per core is going up 0.5 W over the last gen despite the die shrink; I'm curious to see what frequencies they end up running these at.
 
Reactions: Rdslw

dehjomz

Distinguished
Dec 31, 2007
This all sounds cool, but the question I always have for Intel is: when will the silicon ship? I'm looking forward to the innovations, and I hope Intel can bring them to market sooner rather than later. I'm also looking forward to what AMD does with Xilinx and Pensando.
 
Reactions: Rdslw

escksu

Reputable
Aug 8, 2019
Interesting that the power per core is going up 0.5 W over the last gen despite the die shrink; I'm curious to see what frequencies they end up running these at.
OK, you have to note that these chips are not intended for consumer use. They are meant for high-performance computing (e.g. supercomputers). So the important thing isn't how much power the chip consumes but rather overall performance per watt (e.g. gigaflops per watt).

An example: say this chip does 100 GFLOPS at 800 W (just an example), while the older chip does 75 GFLOPS at 600 W. Per chip, both have the same performance/watt, which doesn't seem that impressive. But here is the catch: to reach 10,000 GFLOPS, you need 100 of the new chips instead of 133 of the older ones.

Those 33 extra chips mean more hardware needed to support them: more circuit boards, voltage regulators, interconnects, etc. All that additional hardware consumes and dissipates power, so cost and power consumption go up.
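The arithmetic above can be sketched in a few lines (using the made-up numbers from the example, not real chip specs):

```python
import math

def chips_needed(target_gflops, chip_gflops):
    """Number of chips required to reach the target throughput."""
    return math.ceil(target_gflops / chip_gflops)

# Hypothetical figures from the example above, not real specs.
new_gflops, new_watts = 100, 800
old_gflops, old_watts = 75, 600
target = 10_000

# Per-chip efficiency is identical either way: 0.125 GFLOPS/W.
print(new_gflops / new_watts)  # 0.125
print(old_gflops / old_watts)  # 0.125

# But the chip count (and hence boards, VRMs, interconnects) differs.
print(chips_needed(target, new_gflops))  # 100
print(chips_needed(target, old_gflops))  # 134 (the post rounds to 133)
```

So at identical per-chip efficiency, the denser part still wins at system level because every chip it eliminates also eliminates its share of support hardware and its losses.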
 
Reactions: Rdslw
Interesting that the power per core is going up 0.5 W over the last gen despite the die shrink; I'm curious to see what frequencies they end up running these at.
I am quite worried it will be an absurd fireball. I personally would be hesitant to use such a power-hungry part without long experience in the field. I expect they will have incidents, and with that kind of power they will be intense (stuff like a PCIe data line and a 300 W power line being a single water drop's distance from one another, like Apple did).

OK, you have to note that these chips are not intended for consumer use. They are meant for high-performance computing (e.g. supercomputers). So the important thing isn't how much power the chip consumes but rather overall performance per watt (e.g. gigaflops per watt).

An example: say this chip does 100 GFLOPS at 800 W (just an example), while the older chip does 75 GFLOPS at 600 W. Per chip, both have the same performance/watt, which doesn't seem that impressive. But here is the catch: to reach 10,000 GFLOPS, you need 100 of the new chips instead of 133 of the older ones.

Those 33 extra chips mean more hardware needed to support them: more circuit boards, voltage regulators, interconnects, etc. All that additional hardware consumes and dissipates power, so cost and power consumption go up.
You have about 25% fewer GPUs (33 fewer chips), which means correspondingly fewer racks to worry about. I expect the same petascale supercomputer would end up using ~5-10% less power with this, and you also have less switching and scheduling, so overall performance can gain a few percent as well.
My instincts tell me that something will be wrong, though. I feel like there was a reason why cards were not 1,000 W each before, and they will have incredible problems with those...
 

JamesJones44

Prominent
Jan 22, 2021
OK, you have to note that these chips are not intended for consumer use. They are meant for high-performance computing (e.g. supercomputers). So the important thing isn't how much power the chip consumes but rather overall performance per watt (e.g. gigaflops per watt).

An example: say this chip does 100 GFLOPS at 800 W (just an example), while the older chip does 75 GFLOPS at 600 W. Per chip, both have the same performance/watt, which doesn't seem that impressive. But here is the catch: to reach 10,000 GFLOPS, you need 100 of the new chips instead of 133 of the older ones.

Those 33 extra chips mean more hardware needed to support them: more circuit boards, voltage regulators, interconnects, etc. All that additional hardware consumes and dissipates power, so cost and power consumption go up.
Yes, I understand they are not for consumers. My point was that this chip seems to use 0.5 W per core more than its predecessor, and I'm curious as to why that would be given the die shrink. My thought was that maybe they are running them at much higher clocks, but that could be wrong. My primary curiosity is what the performance per watt ends up being if they are running at very high clock speeds.
 

escksu

Reputable
Aug 8, 2019
Yes, I understand they are not for consumers. My point was that this chip seems to use 0.5 W per core more than its predecessor, and I'm curious as to why that would be given the die shrink. My thought was that maybe they are running them at much higher clocks, but that could be wrong. My primary curiosity is what the performance per watt ends up being if they are running at very high clock speeds.
I am not sure, as they did not give more details on the cores. However, it could be numerous factors, so it's hard to tell: more cache, more execution units, etc. Like you said, it could be clock speed too.
 
Reactions: JamesJones44
