Nvidia's Grace Hopper now powers 40 AI supercomputers globally.
Nvidia's Grace Hopper GH200 Powers 1 ExaFLOPS Jupiter Supercomputer : Read more
That picture of a floor full of racks, arranged in neat rows, had me thinking... is there no value in a more compact arrangement? Maybe the switch network is too high-latency for a more physically compact layout to make much difference, but then what if you could connect all of the nodes in the same CXL topology? Could it ever make sense to pack all of the machines into more of a cube-type arrangement?
I imagine heat from the cube's centre will be challenging, growing much faster than the surface available to shed it as the cube gets bigger. Water cooling is great, but no matter how good a cooling solution you have, you never get rid of all the excess heat.
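To put rough numbers on that heat concern: the heat generated scales with the number of machines (the volume of the cube), while the outer surface it ultimately has to pass through only scales with the cube's area. A quick back-of-the-envelope sketch; the ~1 kW per node and 1 m spacing are made-up illustrative figures, not from the article:

```python
# Rough scaling sketch: heat generated vs. outer surface available to shed it
# for machines packed into an n x n x n cube.
# Assumed, illustrative numbers (not from the article): ~1 kW per node, 1 m pitch.

POWER_PER_NODE_KW = 1.0   # assumed average draw per machine
PITCH_M = 1.0             # assumed spacing between node centres

for n in (4, 8, 16, 32):
    nodes = n ** 3                            # heat sources grow with volume
    total_kw = nodes * POWER_PER_NODE_KW
    surface_m2 = 6 * (n * PITCH_M) ** 2       # outer faces grow with area
    flux = total_kw / surface_m2              # kW each square metre of surface must pass
    print(f"n={n:3d}  nodes={nodes:6d}  heat={total_kw:8.0f} kW  "
          f"surface={surface_m2:6.0f} m^2  flux={flux:5.2f} kW/m^2")
```

The flux column grows linearly with the cube's edge, so it's polynomial rather than exponential, but it still means a dense 3D block would need coolant piped all the way into its interior rather than relying on air reaching it from outside.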
I'm not saying you'd pack them together without any space in between. Also, think about the inside of those racks: they clearly dissipate enough heat that machines sandwiched between multiple other machines stay cool enough.
Apologies if I didn't quite get your idea and you're already aware of this. The racks are able to dissipate the heat because of hot rows and cold rows: in one row the fronts of the racks face each other and draw in the cold air, and in the alternate row the backs of the racks face each other and the warm air is drawn up and out of the DC.
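For a sense of scale on that hot-row/cold-row airflow: the temperature rise of the air crossing a rack is roughly the rack's power divided by the air mass flow times air's specific heat, ΔT ≈ P / (ṁ · cp). A small sketch with assumed figures (a 30 kW rack and a few airflow values; neither number comes from the article):

```python
# Rough estimate of the air temperature rise across a rack:
#   delta_T = P / (m_dot * c_p), where m_dot is the air mass flow through the rack.
# The 30 kW rack power and the airflow values are assumptions for illustration.

AIR_DENSITY = 1.2    # kg/m^3, air at roughly room temperature
AIR_CP = 1005.0      # J/(kg*K), specific heat of air

def delta_t(rack_power_w: float, airflow_m3_s: float) -> float:
    """Temperature rise (K) of the air passing through the rack."""
    m_dot = AIR_DENSITY * airflow_m3_s   # kg/s of air drawn from the cold row
    return rack_power_w / (m_dot * AIR_CP)

for flow in (1.0, 2.0, 4.0):
    print(f"30 kW rack, {flow:.0f} m^3/s airflow -> ~{delta_t(30_000, flow):.1f} K hotter in the hot row")
```

That is also why keeping the hot and cold rows separated matters: if the exhaust recirculated into the intakes, those temperature rises would stack.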
Thanks. I've heard such things. I just wonder if there wouldn't be some worthwhile benefits to a 3D topology of the racks, rather than 2D.
You're welcome. In terms of a 3D topology for performance and latency, it's not really my area, but logically I can see how it should improve things.
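On the latency side, the appeal of a 3D layout is that for the same number of racks the worst-case physical distance between two of them shrinks from roughly N^(1/2) to N^(1/3). A small illustrative sketch, assuming racks on a uniform 1 m grid with Manhattan-style cable routing (both are my assumptions, not details from the article):

```python
# Compare worst-case rack-to-rack distance for N racks laid out as a flat
# square grid vs. a cube, assuming 1 m pitch and Manhattan cable routing.
# Purely illustrative; real machine-room geometry and cabling differ.
import math

def worst_case_2d(n_racks: int, pitch_m: float = 1.0) -> float:
    side = math.ceil(math.sqrt(n_racks))
    return 2 * (side - 1) * pitch_m    # opposite corners of the square

def worst_case_3d(n_racks: int, pitch_m: float = 1.0) -> float:
    side = math.ceil(n_racks ** (1 / 3))
    return 3 * (side - 1) * pitch_m    # opposite corners of the cube

for n in (1_000, 10_000):
    print(f"{n:6d} racks: 2D worst case ~{worst_case_2d(n):4.0f} m, "
          f"3D worst case ~{worst_case_3d(n):3.0f} m")
```

At roughly 5 ns per metre in fibre, that difference is a few hundred nanoseconds of propagation at most, so switch hops and serialization usually dominate; the more tangible win of a compact layout is shorter cable runs.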
It seems like we're starting to move beyond air cooling, anyhow. Once there's water cooling, I think it opens up opportunities for 3D arrangements.
Say, is Google still using containers in their datacenters? The few pictures I've seen showed those stacked in ways that could be utilized by their network topology.
Not familiar with Google and containers, will look it up.
This is what I'm talking about: