News: AI titans Microsoft and Nvidia reportedly had a standoff over use of Microsoft's B200 AI GPUs in its own server rooms

Admin

Administrator
Staff member

"...had a standoff over use of Microsoft's B200 AI GPUs"


Didn't know MS also made or had their own AI GPUs. :rolleyes:

Its latest AI GPU, the Blackwell B200, will be the fastest graphics processing unit in the world for AI workloads. A "single" GPU delivers up to a whopping 20 petaflops of compute performance (for FP8) and is four times faster than its H200 predecessor.

Not correct; that is 20 petaflops of FP4 horsepower. The 20 petaflops FP4 figure is for a single B200.

But if you want to stick with the FP8 format, then the B200 offers only 2.5x more theoretical FP8 compute than the H100 (with sparsity, of course), and even that comes from having two chips in total.

So the FP8 throughput is half the FP4 throughput at 10 petaflops, and the FP16/BF16 throughput is again half the FP8 value at 5 petaflops.
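As a quick back-of-the-envelope check of that halving pattern, here's a minimal Python sketch (assuming throughput simply halves with each doubling of precision width, per the figures above):

```python
# Back-of-the-envelope: per-precision throughput for a single B200,
# assuming it halves with each doubling of precision width
# (figures per the discussion above).
FP4_PFLOPS = 20.0  # sparse FP4 petaflops (the article's headline claim)

throughput = {"FP4": FP4_PFLOPS}
throughput["FP8"] = throughput["FP4"] / 2        # -> 10 petaflops
throughput["FP16/BF16"] = throughput["FP8"] / 2  # -> 5 petaflops

for fmt, pflops in throughput.items():
    print(f"{fmt}: {pflops:g} petaflops")
```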

Of course it's actually two large chips for such a GPU,

Then that would be the GB200, not the B200 you mentioned in the article, because the 20 petaflops FP4 claim is for a single B200: half of a GB200 superchip.

Of course, the Blackwell B200 is also not a single GPU in the traditional sense, but the petaflops figures match those of the GB200 superchip instead.
 
Didn't know MS also made or had their own AI GPUs. :rolleyes:
Just an error in word order. We were workshopping the title and accidentally put two words in the wrong spot. Instead of "over use of Microsoft's B200 AI GPUs in its own server rooms" it's supposed to be "over Microsoft's use of B200 AI GPUs in its own server rooms" — I've fixed it now.
Not correct; that is 20 petaflops of FP4 horsepower. The 20 petaflops FP4 figure is for a single B200.
Yeah, it depends on which "GPU" we're talking about. It's up to 40 petaflops of FP4 with sparsity, and up to 20 petaflops FP8 with sparsity... for GB200. But that's two B200s at higher clocks; a 'regular' B200 is only 18 petaflops FP4 and 9 petaflops FP8. I looked at the wrong column when editing, apparently. :\
But if you want to stick with the FP8 format, then the B200 offers only 2.5x more theoretical FP8 compute than the H100 (with sparsity, of course), and even that comes from having two chips in total.
Again, technically it's half of a GB200 that does 10 petaflops of sparse FP8, and if you want a 'normal' B200 then it's down to 2.25x. But Nvidia likes to use GB200 for comparisons because it shows a bigger delta, naturally. I've corrected that paragraph to make sure we clearly indicate what's being discussed.
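For anyone who wants to see where the 2.5x vs 2.25x figures come from, here's a minimal sketch, assuming H100's sparse FP8 rate is roughly 4 petaflops (the commonly cited ~3.96, rounded):

```python
# Where the 2.5x vs 2.25x deltas come from (sparse FP8, in petaflops).
# Assumption: H100 sparse FP8 is ~4 petaflops (3.96 rounded); the B200
# and GB200-half figures are the ones quoted in this thread.
H100_FP8 = 4.0         # H100, sparse FP8 (assumed, ~3.96 rounded)
B200_FP8 = 9.0         # 'regular' B200, sparse FP8
GB200_HALF_FP8 = 10.0  # one B200 at GB200 clocks, sparse FP8

print(f"GB200-half vs H100: {GB200_HALF_FP8 / H100_FP8:.2f}x")  # 2.50x
print(f"B200 vs H100:       {B200_FP8 / H100_FP8:.2f}x")        # 2.25x
```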
 
Just an error in word order. We were workshopping the title and accidentally put two words in the wrong spot. Instead of "over use of Microsoft's B200 AI GPUs in its own server rooms" it's supposed to be "over Microsoft's use of B200 AI GPUs in its own server rooms" — I've fixed it now.

Yeah, I know that. I was just being sarcastic! Thanks for the correction, btw! :)

Again, technically it's half of a GB200 that does 10 petaflops of sparse FP8, and if you want a 'normal' B200 then it's down to 2.25x.

I suppose that should be dense for GB200, no? 10 petaflops FP8. The sparse FP8 value should be 20 petaflops, imo. Or are you counting half of those values here, to compare with the B200?


Anyway, let me check the specs again. Will edit this post accordingly!
 

d0x360

Distinguished
Dec 15, 2016
123
49
18,620
Microsoft, angered that they funded a significant part of nVidia's meteoric rise in valuation, wants revenge for nVidia daring to temporarily pass them... They spent over a decade chasing God damn Apple and they won't stand for this again!

In other news..

Brought to you from the future!
The AI bubble has burst and Microsoft has acquired nVidia, who ran out of money due to all the expensed hypercars (we counted 184 Bugatti Veyron Super Sports parked at nVidia by 6am, and the office doesn't even open until 9!)

A ridiculous number of black leather jackets were also discovered buried under various offices; why does every employee need 400 jackets!? An odd obsession with using the same GPU control panel for 24 years was also unearthed, but wait, this is the future, so we meant 35 years.

Our last nVidia news item is a doozy... Oddly, one day every engineer at nVidia went to the Bahamas and forgot to come back, which allowed AMD to finally take the GPU performance crown.

In other news, Intel GPUs still suck and their CPU division is hurting because they pretended Arm didn't exist until 2032. Their fab gets little business as everyone (including Intel) says screw it and uses TSMC. Despite warnings from basically everyone that splitting the fab off to operate like a separate entity wouldn't work, Pat Gelsinger decided to do it anyways... He may be a Chinese spy working in the CPU design arm of Huawei, but eee.MarsTechnica.spaceX was unable to verify that, so pretend we didn't say it.

Future news brought to you by your friends at..

Pudding CO! ®1964
Serving the best future news since jello.
and...
Lack of Sleep LLC!
 

vanadiel007

Distinguished
Oct 21, 2015
237
233
18,960
Sounds to me like unfair competition practices: trying to exclude other companies by designing the equipment in such a way that it can only be used with the equipment of one specific vendor.

Could be a sign of things to come if, let's say, Nvidia decides all its video cards need a custom connection to the motherboard PCB that is "needed" to ensure the video cards' "AI" operates properly.
 
  • Like
Reactions: JarredWaltonGPU

TechyIT223

Prominent
Jun 30, 2023
230
51
660
Sounds to me like unfair competition practices: trying to exclude other companies by designing the equipment in such a way that it can only be used with the equipment of one specific vendor.

Could be a sign of things to come if, let's say, Nvidia decides all its video cards need a custom connection to the motherboard PCB that is "needed" to ensure the video cards' "AI" operates properly.


If Nvidia tries to monopolize here, then most probably they will lose market share, clientele, and goodwill as well.
 

kjfatl

Reputable
Apr 15, 2020
188
131
4,760
If Nvidia tries to monopolize here, then most probably they will lose market share, clientele, and goodwill as well.
There is no question that Nvidia is trying to effectively monopolize the market. Most companies TRY to be dominant and, if dominant, hold onto that position for as long as possible. Nvidia has already lost goodwill. Market share will follow. I can't see Amazon, Meta, Apple, Microsoft, Intel, Tesla, AMD, and Samsung letting Nvidia remain in a dominant position for more than a few years. They all view Nvidia as a threat, not a partner.
 

TechyIT223

Prominent
Jun 30, 2023
230
51
660
There is no question that Nvidia is trying to effectively monopolize the market. Most companies TRY to be dominant and, if dominant, hold onto that position for as long as possible. Nvidia has already lost goodwill. Market share will follow. I can't see Amazon, Meta, Apple, Microsoft, Intel, Tesla, AMD, and Samsung letting Nvidia remain in a dominant position for more than a few years. They all view Nvidia as a threat, not a partner.

Okay, maybe Nvidia isn't trying to gain a monopoly here, but the company certainly hasn't lost goodwill either, IMO.
 
