I don't believe that. A popular misconception is that firms doing AI training have their entire GPU fleet working on a single model at any given time. In reality, they have multiple models in development and multiple experiments running concurrently. That's not to mention that any AI service provider will have a significant portion of its fleet handling inference workloads, which are very granular. So, the maximum number of GPUs you need in a single place is probably far fewer than the total. That's the case for xAI, at least.
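To make that concrete, here's a rough back-of-envelope sketch. Every number in it (fleet size, inference share, how the training pool is split) is a made-up assumption for illustration; we don't actually know xAI's allocation.

```python
# Back-of-envelope: how many GPUs does the *largest single job* need,
# given a fleet split across inference and concurrent training runs?
# All numbers below are hypothetical assumptions, not real figures.

total_gpus = 100_000          # assumed total fleet size
inference_share = 0.40        # assumed fraction serving inference
flagship_share = 0.60         # assumed share of the training pool on the
                              # flagship run (the rest goes to smaller
                              # models and experiments)

training_gpus = total_gpus * (1 - inference_share)
flagship_gpus = training_gpus * flagship_share

print(f"Training pool:        {training_gpus:,.0f} GPUs")
print(f"Largest single run:   {flagship_gpus:,.0f} GPUs")
print(f"Share of total fleet: {flagship_gpus / total_gpus:.0%}")
# -> 36,000 GPUs, i.e. ~36% of the fleet in one place, not 100%
```

The exact numbers don't matter; the point is that the single-site requirement scales with the biggest individual job, not with the size of the whole fleet.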
As for what @jp7189 is talking about, we know even less about that datacenter and who or what it's for. If it's for a multi-tenant cloud provider like Amazon or Google, then the need to pool all of the resources in a single location is weaker still.