News Nvidia's Grace Hopper GH200 Powers 1 ExaFLOPS Jupiter Supercomputer


bit_user

That picture of a floor full of racks, arranged in neat rows, had me thinking... is there no value in a more compact arrangement? Maybe the switch network is too high-latency for a more physically compact layout to make much difference, but then what if you could connect all of the nodes in the same CXL topology? Could it ever make sense to pack all of the machines into more of a cube-type arrangement?
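A toy way to put numbers on that hunch: treat worst-case node-to-node distance as a rough proxy for cable length and latency, and compare a flat floor layout with a cube. Everything here (node counts, unit spacing) is a made-up assumption for illustration, not Jupiter's actual layout.

```python
# Worst-case Manhattan distance between two nodes when N nodes sit on a
# square 2D floor grid versus packed into a 3D cube, with 1 unit between
# neighbours. Purely illustrative numbers.
import math

def floor_diameter(n_nodes: int) -> int:
    side = math.ceil(math.sqrt(n_nodes))   # nodes per row on a square floor
    return 2 * (side - 1)                  # corner-to-corner distance

def cube_diameter(n_nodes: int) -> int:
    side = math.ceil(n_nodes ** (1 / 3))   # nodes per edge of the cube
    return 3 * (side - 1)                  # corner-to-corner distance

for n in (1_000, 10_000, 100_000):
    print(f"{n:>7} nodes: 2D floor ~{floor_diameter(n):>3} units, "
          f"3D cube ~{cube_diameter(n):>3} units")
```

The longest run shrinks from roughly O(√N) to O(N^(1/3)); whether that actually buys latency depends on whether switch hops and serialization already dominate over cable length, which is exactly the doubt above.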
 

SunMaster

bit_user said:
That picture of a floor full of racks, arranged in neat rows, had me thinking... is there no value in a more compact arrangement? Maybe the switch network is too high-latency for a more physically compact layout to make much difference, but then what if you could connect all of the nodes in the same CXL topology? Could it ever make sense to pack all of the machines into more of a cube-type arrangement?
I imagine heat from the cube's centre will be challenging, growing rapidly with the cube size (the power scales with the volume, but the surface available for cooling only with the area). Water cooling is great, but no matter how good a cooling solution you have, you never get rid of all the excess heat.

Just my wild guess.
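To put a rough number on that guess: the heat generated grows with the number of racks, i.e. with the cube's volume, while the outer surface you can reject it through only grows with the square of the edge length. The 40 kW per rack below is an assumed round figure, not a Jupiter spec.

```python
# Back-of-the-envelope scaling for a cube of racks with edge length n:
# heat sources grow as n^3, exposed rack faces only as 6*n^2.
RACK_POWER_KW = 40  # assumed per-rack heat load, not a real spec

for n in (2, 4, 8, 16):
    racks = n ** 3                 # heat scales with volume
    outer_faces = 6 * n ** 2       # rack positions on the cube's surface
    heat_mw = racks * RACK_POWER_KW / 1000
    print(f"edge {n:>2}: {racks:>5} racks, {heat_mw:>6.1f} MW total, "
          f"{heat_mw * 1000 / outer_faces:>6.1f} kW per exposed face")
```

So the load per unit of exposed surface grows linearly with the edge length rather than exponentially, but the conclusion is similar: the interior of a dense 3D pack has to be liquid-cooled, because room air can't reach it.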
 

bit_user

SunMaster said:
I imagine heat from the cube's centre will be challenging, growing rapidly with the cube size (the power scales with the volume, but the surface available for cooling only with the area). Water cooling is great, but no matter how good a cooling solution you have, you never get rid of all the excess heat.
I'm not saying you'd pack them together without any space in between. Also, think about the inside of those racks: they clearly dissipate enough heat that machines sandwiched between multiple other machines stay cool enough.
 
Sep 12, 2023
bit_user said:
I'm not saying you'd pack them together without any space in between. Also, think about the inside of those racks: they clearly dissipate enough heat that machines sandwiched between multiple other machines stay cool enough.
Apologies if I didn't quite get your idea and you are already aware of this. The racks are able to dissipate the heat because of hot rows and cold rows (hot and cold aisles): each row has the fronts of the racks facing each other to draw in the cold air, and each alternate row has the backs of the racks facing each other, where the warm air is drawn up and out of the DC.

There are other benefits to not cramming racks too close to each other.

5 Advantages of Row Cooling vs. Room Cooling for Edge and Data Center Environments

Edit: In some of the data centers I've been in, the rows are pretty narrow and there isn't much wasted space.
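For a sense of how much air those hot and cold rows have to move, here's the basic air-side heat balance, Q = ṁ·cp·ΔT, solved for airflow. The 30 kW rack and 12 °C exhaust rise are assumed figures for illustration, not specs for any particular machine.

```python
# Airflow a rack needs so its exhaust stays within a chosen temperature rise.
# Q = m_dot * cp * dT  ->  m_dot = Q / (cp * dT). Assumed inputs, not specs.
AIR_DENSITY = 1.2   # kg/m^3, air at roughly room temperature and sea level
AIR_CP = 1005       # J/(kg*K), specific heat of air

def required_airflow_m3_per_s(rack_power_w: float, delta_t_k: float) -> float:
    mass_flow = rack_power_w / (AIR_CP * delta_t_k)   # kg/s of air
    return mass_flow / AIR_DENSITY                    # volumetric flow, m^3/s

flow = required_airflow_m3_per_s(rack_power_w=30_000, delta_t_k=12)
print(f"{flow:.2f} m^3/s per rack, about {flow * 2118.9:.0f} CFM")
```

Any hot exhaust that recirculates back into the cold aisle effectively shrinks ΔT, which is why containment and row spacing matter so much.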
 

bit_user

Apologies if I didn't quite get your idea and you are already aware of this. The racks are able to dissipate the heat because of hot rows and cold rows (hot and cold aisles): each row has the fronts of the racks facing each other to draw in the cold air, and each alternate row has the backs of the racks facing each other, where the warm air is drawn up and out of the DC.
Thanks. I've heard such things. I just wonder if there wouldn't be some worthwhile benefits to a 3D topology of the racks, rather than 2D.

It seems like we're starting to move beyond air cooling, anyhow. Once there's water cooling, I think it opens up opportunities for 3D arrangements.

Say, is Google still using containers in their datacenters? The few pictures I've seen showed those stacked in ways that could be utilized by their network topology.
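To sketch the logical side of the question (purely illustrative, no claim about the real machine's switched interconnect): compare the worst-case hop count of a 2D torus and a 3D torus built from the same number of nodes.

```python
# Network diameter (worst-case hop count) for the same node count arranged
# as a 2D torus versus a 3D torus. Node counts are chosen as perfect
# squares/cubes purely for convenience; this is not any real fabric.
import math

def torus_diameter(side: int, dims: int) -> int:
    # Each dimension wraps around, so the worst case per dimension is side // 2.
    return dims * (side // 2)

for n in (4_096, 15_625, 46_656):
    side_2d = math.isqrt(n)            # e.g. 4096 = 64 x 64
    side_3d = round(n ** (1 / 3))      # e.g. 4096 = 16 x 16 x 16
    print(f"{n:>6} nodes: 2D torus diameter {torus_diameter(side_2d, 2):>3}, "
          f"3D torus diameter {torus_diameter(side_3d, 3):>3}")
```

If the fabric is a fat tree of switches rather than a mesh or torus, a 3D physical arrangement mostly buys shorter cables rather than fewer hops, so the benefit hinges on the topology that is actually wired up.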
 
Sep 12, 2023
bit_user said:
Thanks. I've heard such things. I just wonder if there wouldn't be some worthwhile benefits to a 3D topology of the racks, rather than 2D.

It seems like we're starting to move beyond air cooling, anyhow. Once there's water cooling, I think it opens up opportunities for 3D arrangements.

Say, is Google still using containers in their datacenters? The few pictures I've seen showed those stacked in ways that could be utilized by their network topology.
You're welcome. Performance and latency across a 3D topology aren't really my area, but logically I can see how it should improve things.

Row cooling is a common topology in colocation and similar DCs. I've never been in a hyperscaler DC, but as far as I know they can have fairly exotic setups. I'm not familiar with Google and containers; I'll look it up. A cooling setup that might better suit your idea is an immersion solution, but I don't know how practical that would be for a supercomputer, given how frequently they replace faulty parts.

Microsoft finds underwater datacenters are reliable, practical and use energy sustainably

Agreed regarding air cooling; DC designs are getting creative with managing cooling power and power usage in general.

World's largest data center being built in the Arctic
 