This just leaves me with a couple of questions. With 4x 60TB SSDs that can keep up with such a network, it sounds cool, but what do you do after the first 5 minutes, when your 4 SSDs are full? Where does the data go?
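For scale, a rough back-of-envelope (assuming the NIC actually sustains full line rate and nothing else is the bottleneck):

```python
# How long does a 400 Gbps link take to fill 4x 60 TB of flash?
capacity_bytes = 4 * 60e12      # 4 drives x 60 TB
line_rate_bps = 400e9           # 400 Gbps
fill_seconds = capacity_bytes * 8 / line_rate_bps
print(f"{fill_seconds:.0f} s (~{fill_seconds / 60:.0f} min)")  # 4800 s (~80 min)
```

So closer to an hour and a half than 5 minutes at full blast, but the point stands: local flash fills up fast.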
You're doing it wrong. What you're supposed to do is have a dual-CPU server with a couple hundred cores and a bunch of guest VMs running on them. Those VMs all want to access some data, but because they can spin up on different hosts from time to time, the data won't be local. Instead, the data lives across a large array of storage servers. VMs not only need regular storage I/O, they also need to talk to other VMs in the datacenter, or even to clients over the internet. Another big consumer of I/O bandwidth is reading and writing VM snapshots, which you want to be fast, because the VM can't run until the snapshot finishes.
So the case for 400 Gbps is really about handling spikes in network traffic, and about moving lots of data (like VM snapshots) quickly, because time is latency and latency is $$$.
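To put numbers on "takes time": a quick sketch of how long moving a snapshot takes at different link speeds (the 1 TB snapshot size is a made-up example, and this assumes the link, not the storage, is the bottleneck):

```python
# Transfer time for a hypothetical 1 TB VM snapshot at various link speeds.
snapshot_bytes = 1e12  # 1 TB (illustrative)
for gbps in (10, 100, 400):
    seconds = snapshot_bytes * 8 / (gbps * 1e9)
    print(f"{gbps:>3} Gbps: {seconds:.0f} s")
# -> 10 Gbps: 800 s, 100 Gbps: 80 s, 400 Gbps: 20 s
```

Going from 10 to 400 Gbps turns a 13-minute stall into a 20-second one, which is exactly the latency you're paying to shave.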
Oh, and we didn't even get into AI, but multi-machine AI training is probably one of the main use cases.
Next question: given that a 60TB SSD drive is around $16,000 (I know they exist, but no one is providing a quote, so I'm estimating based on the price of 15TB SSDs).
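For what it's worth, back-solving from that $16,000 figure, the estimate is consistent with linear price scaling from the 15TB tier (the $4,000 15TB price below is implied by the estimate, not a real quote):

```python
# Estimating an unquoted 60 TB drive by scaling linearly from the 15 TB tier.
price_15tb = 4000                       # implied 15 TB price (assumption)
estimate_60tb = price_15tb * (60 / 15)  # 4x the capacity -> 4x the price
print(estimate_60tb)  # 16000.0
```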
Here's one.
I just saved you over $8k!
: D