News Lenovo says demand for AMD's Instinct MI300 is record high — plans to offer AI solutions from all important hardware vendors

In particular, AMD's MI300X doesn't compare badly to Nvidia's H100 SXM5 on paper: 192 GB of HBM3 versus the H100's 80 GB, an 8192-bit memory bus versus 5120 bits, and similar advantages in other headline specs. And for cinema-style workloads, rendering large scenes appears to run better on the MI300X, according to Blender GPU benchmarks.
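The bus-width gap translates directly into a bandwidth gap. A back-of-the-envelope sketch; the per-pin HBM3 data rates below are approximate assumptions (they vary by SKU), so treat the outputs as rough figures:

```python
# Peak memory bandwidth ~= (bus width in bits / 8 bits per byte) * per-pin data rate
def peak_bandwidth_gbps(bus_width_bits: int, pin_rate_gtps: float) -> float:
    """Rough peak bandwidth in GB/s from bus width and per-pin transfer rate."""
    return bus_width_bits / 8 * pin_rate_gtps

# Assumed per-pin HBM3 rates (approximate):
mi300x = peak_bandwidth_gbps(8192, 5.2)      # roughly 5.3 TB/s
h100_sxm5 = peak_bandwidth_gbps(5120, 5.23)  # roughly 3.3 TB/s

print(f"MI300X: ~{mi300x:.0f} GB/s, H100 SXM5: ~{h100_sxm5:.0f} GB/s")
```

The wider bus is what lets the MI300X pull ahead on bandwidth even at a similar per-pin rate.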

For other workloads, the H100 may well do better. But that still leaves some market for AMD.

(Disclaimer: I hold some AMD stock, so I may be biased. But I hold it precisely because of technical details like the ones above: AMD is not light-years behind Nvidia, and it is giving Intel a run for its money in some segments.)
 
Data scientists in 200 cities working with other fabless AI hardware vendors, each seeding improvements to compilers and libraries, but 'only' holding stock via index funds: Oh yeah, Nvidia dominance can be OK sometimes. Seven other cities with Nvidia-powered self-driving ferries: pin compatibility would be great.
 
It's worth noting that the B200 and the MI300X have almost identical AI performance per watt. And since the MI300X costs roughly half as much, you can buy twice as many; for the same cost, and at lower power, that also works out to roughly 4x the FP64 throughput of a B200.

I mean, anyone looking to buy these should definitely run the numbers on what their needs are and how much value they can get out of these systems.
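"Running the numbers" can be sketched in a few lines. The FP64 throughputs below are publicly reported peak figures, and the prices are placeholder assumptions (not quotes), so the exact ratio will vary with actual pricing:

```python
# Hypothetical street prices -- assumptions for illustration, not quotes
PRICE_MI300X = 15_000   # USD, assumed
PRICE_B200 = 30_000     # USD, assumed (roughly 2x, per the comment above)

FP64_MI300X = 81.7      # peak FP64 TFLOPS, vendor-reported
FP64_B200 = 40.0        # peak dense FP64 TFLOPS, as reported

# For one B200's budget: how many MI300X cards, and how much FP64 do you get?
n_mi300x = PRICE_B200 // PRICE_MI300X      # 2 cards for the same budget
fp64_for_budget = n_mi300x * FP64_MI300X   # aggregate FP64 TFLOPS
speedup = fp64_for_budget / FP64_B200      # roughly 4x under these assumptions

print(f"{n_mi300x} x MI300X = {fp64_for_budget:.1f} TFLOPS FP64 vs "
      f"B200 {FP64_B200:.1f} -> ~{speedup:.1f}x")
```

Under these assumptions the arithmetic lands near the 4x figure claimed above: twice as many cards, each with roughly twice the FP64 throughput.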

Also, ZLUDA has pretty much proven there is nothing wrong with the HIP runtime either, since it can run CUDA applications on top of ROCm at roughly the expected native speeds.
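Part of why layers like ZLUDA and HIP can work at all is that the HIP runtime API mirrors the CUDA runtime API almost one-to-one; AMD's hipify tools translate CUDA source largely by renaming symbols. A toy sketch of that idea (the mapping table is a tiny illustrative subset, not the real tool):

```python
# Tiny illustrative subset of the CUDA -> HIP runtime symbol mapping
CUDA_TO_HIP = {
    "cudaMalloc": "hipMalloc",
    "cudaMemcpy": "hipMemcpy",
    "cudaFree": "hipFree",
    "cudaDeviceSynchronize": "hipDeviceSynchronize",
}

def hipify(source: str) -> str:
    """Rename CUDA runtime calls to their HIP equivalents (toy version)."""
    for cuda_name, hip_name in CUDA_TO_HIP.items():
        source = source.replace(cuda_name, hip_name)
    return source

print(hipify("cudaMalloc(&ptr, n); cudaDeviceSynchronize(); cudaFree(ptr);"))
```

Because the mapping is mostly mechanical like this, the hard part for a compatibility layer is matching kernel performance, not translating the API surface.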