News AMD unveils Instinct MI300X GPU and MI300A APU, claims up to 1.6X lead over Nvidia’s competing GPUs

This is literally "all the things" AMD has been working on & talking about for more than a decade.
  • Heterogeneous processing (i.e. CPU + GPU)
  • HBM
  • Chiplets
  • Infinity Fabric
  • 3D V-Cache

It's taken a while to come together, but it'll be interesting to see how it competes against Nvidia's Grace+Hopper superchips.

On that front, I can already point out one advantage in Nvidia's column: memory capacity. Their H200 supports almost as much HBM (141 GB vs. the MI300X's 192 GB), while Grace adds up to 480 GB of LPDDR5X. So, they have a net win on total capacity, even if the bulk of it is just LPDDR5X at roughly 500 GB/s. Perhaps the MI300 has a DDR5 controller, but I don't see it called out in the slides.
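
To make the capacity comparison concrete, here's a quick tally per package. The figures are publicly quoted launch specs and vary by SKU (GH200 in particular has shipped with different HBM configurations), so treat them as assumptions rather than definitive numbers:

```python
# Rough per-package memory tally for the accelerators discussed above.
# Capacities are assumed launch specs; actual SKUs may differ.
parts = {
    "AMD MI300X":        {"hbm_gb": 192, "offpkg_gb": 0},
    "Nvidia H200 (SXM)": {"hbm_gb": 141, "offpkg_gb": 0},
    "Nvidia GH200":      {"hbm_gb": 141, "offpkg_gb": 480},  # Grace's LPDDR5X
}

for name, m in parts.items():
    total = m["hbm_gb"] + m["offpkg_gb"]
    print(f"{name:>18}: {m['hbm_gb']} GB HBM + "
          f"{m['offpkg_gb']} GB off-package = {total} GB")
```

The point the totals make: MI300X wins on fast HBM alone, but once Grace's LPDDR5X is counted, the GH200 package holds several times more addressable memory overall.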

I guess the elephant in the room is pricing - both short-term and long-term. I can easily see AMD selling this at or below the current street price of Nvidia's solution, but at what margins? If Nvidia ever catches up to demand, or perhaps loses its "preferred" status, how do the price floors compare?

Looking in Intel's direction, I see a lot here reminding me of Ponte Vecchio. While the latter is even more technically impressive (i.e. from a fabrication standpoint), AMD took the more sane approach of gradually building up to this point over generations of products introducing each of these technologies. Hopefully, that means they have better yields and fewer issues with these products.
 
The market has only one destination, and this is it. Fully integrated SoC, zero support chips, just a board with a chip and connectors, and at this point it will probably be nothing but usb-c ports (or whatever may be the best solution at the time). Idiot proof! Just in the nick of time!
 
Well, this is probably meant to sit on an OCP Accelerator Module (OAM) daughter card, similar to Nvidia's SXM boards.

[Image: Nvidia H100 on an SXM5 board]
Note the relative size of the VRM to the H100 and its HBM!
:D

Here's an AMD MI250X on an OAM card that seems to be resting in place (screws missing), in a host system.

[Image: AMD MI250X on an OAM card in a host system]


In both cases, you'd obviously run them with heatsinks installed.

As for I/O, note the PCIe and Infinity Fabric links in this diagram:

[Diagram: MI300X PCIe and Infinity Fabric links]


Both are key for scaling up to handle large training workloads. I expect PCIe/CXL will be used to access storage and additional memory pools; the memory pools, at least, will need to reside in the same box.
 