Hmm... Musk has business relationships that pose some interesting questions.
- As one of OpenAI's founders, does he still have access to their technology, or does Microsoft now control too much of the company for him to have that kind of pull?
- Given that Tesla's AI hardware sounds pretty impressive, why not arrange to buy some of theirs? Are there architectural differences that put it at a significant disadvantage for running transformer networks?
Yes... but Nvidia is basically the sole supplier of the hardware everyone wants to use, which gives them quite a bit of leverage in any price negotiation.
10k GPUs is a lot of money for a company that (I think) is still losing money. If we assume about $20k each (including the servers to host them), that's a cool $200M. Only about half a percent of the $44B he paid for Twitter, but probably a multiple of Twitter's annual hardware spend.
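Back-of-envelope, in case anyone wants to poke at the assumptions (the $20k all-in per-GPU figure and the ~$44B Twitter price are my inputs, not hard data):

```python
# Rough cost estimate for a 10k-GPU buy; every input here is an assumption.
gpu_count = 10_000
cost_per_gpu = 20_000            # ~$20k each, including a share of the host server
twitter_price = 44_000_000_000   # roughly what Musk paid for Twitter

total = gpu_count * cost_per_gpu
print(f"total: ${total / 1e6:.0f}M")               # total: $200M
print(f"vs Twitter price: {total / twitter_price:.2%}")  # vs Twitter price: 0.45%
```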
Somehow, I had a figure of $18k in mind. Not sure if I'm misremembering that or if the street price has shot way up since then. shopping.google.com shows prices anywhere from $28.5k up to eBay listings of $43k or more.
Then, I thought I'd see what Dell's list price is, so I popped over to dell.com and looked at the price of adding one to a PowerEdge R750xa.
They want an absolutely astounding $86,250 per H100 PCIe card, and they make you add a minimum of 2 GPUs to the chassis!!! Having a decent amount of experience with Dell servers at my job, I know they like big markups for add-ons, but I'm still pretty stunned by that one.
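For fun, the implied markup against the street prices I saw (numbers from above; "markup" here is just list price over street price, nothing rigorous):

```python
# Dell's list price vs the street-price range mentioned above.
dell_list = 86_250
street_low, street_high = 28_500, 43_000   # google shopping / eBay range

print(f"vs low street price:  {dell_list / street_low:.1f}x")   # ~3.0x
print(f"vs high street price: {dell_list / street_high:.1f}x")  # ~2.0x
```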
If you know anything about these, you're probably aware that the PCIe cards aren't even the best type of H100. What you really want is the SXM version. And a further irony is that a pair of the current H100s can't even run GPT-3, which is why Nvidia recently announced a refresh of the H100 with more memory, due out in Q3.
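The memory math behind that last claim, as I understand it (weights only, ignoring activations and KV cache, which only make things worse; the 94GB-per-card figure for the refreshed part is my recollection of the announcement, so treat it as an assumption):

```python
# Why two 80GB H100s can't hold GPT-3's weights, and why the refresh might.
params = 175e9                 # GPT-3 parameter count
fp16_gb = params * 2 / 1e9     # 2 bytes/param -> 350 GB of weights
fp8_gb = params * 1 / 1e9      # 1 byte/param  -> 175 GB of weights

pair_now = 2 * 80              # current H100 pair: 160 GB total
pair_refresh = 2 * 94          # refreshed pair (assumed 94 GB/card): 188 GB

print(fp16_gb, fp8_gb)         # 350.0 175.0
print(pair_now < fp8_gb)       # True: even FP8 weights don't fit today
print(pair_refresh > fp8_gb)   # True: FP8 weights would fit on the refresh
```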