News Radeon Instinct MI100 Arcturus Early Specifications Show Impressive 200W TDP

bit_user

Polypheme
Ambassador
Phoronix has long reported that open source driver patches indicate Arcturus will lack any 3D graphics hardware engines. This is to be a pure compute-accelerator die.


The Radeon Instinct MI60 and MI50 accelerators offer up to 58.9 TFLOPs and 53 TFLOPs of peak INT8 performance
Two issues here. Both nitpicks, but it's what I do.
  1. The 'S' in TFLOPS should be capitalized, because it's part of the unit: Trillion Floating Point Operations Per Second.
  2. When describing integer performance, you omit the "Floating Point" part, leaving just TOPS.
I noticed the table repeats these same errors. TFLOPs should be either TFLOPS or TOPS, depending on the row.
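For anyone curious where the 58.9 figure comes from, here's a quick back-of-the-envelope check (assuming AMD's published MI60 specs of 4096 stream processors at a ~1.8 GHz peak engine clock): 4096 SPs × 2 ops per clock (FMA) × 1.8 GHz ≈ 14.7 TFLOPS of FP32, and INT8 runs at 4× the FP32 rate, so ≈ 58.9 TOPS. Same number either way; it's just a matter of putting the right unit on it.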
 

alextheblue

Distinguished
Phoronix has long reported that open source driver patches indicate Arcturus will lack any 3D graphics hardware engines. This is to be a pure compute-accelerator die.
It could be built on Vega or Vega-derived CUs (as rumored), with anything unnecessary disabled. Only time will tell. Not sure how much I buy any of these rumored specs, though.
 

bit_user

Polypheme
Ambassador
It could be built on Vega or Vega-derived CUs (as rumored), with anything unnecessary disabled. Only time will tell. Not sure how much I buy any of these rumored specs, though.
Lisa Su has acknowledged that AMD will be pursuing a bifurcated strategy of HPC and consumer products. We've already seen the beginnings of this with Vega 20, but it makes sense that they could go further.

I'm not really sure what would be gained by disabling "anything unnecessary", as they already do clock gating that dynamically powers down parts of the chip that are idle. I've heard estimates that graphics hardware blocks consume up to 25% of their die space, which would only increase with things like ray tracing and some of their other recent additions (DSBR, mesh shaders, etc.). So, the incentive is there to reclaim that for general-purpose compute hardware.
 

alextheblue

Distinguished
Lisa Su has acknowledged that AMD will be pursuing a bifurcated strategy of HPC and consumer products. We've already seen the beginnings of this with Vega 20, but it makes sense that they could go further.
Yeah, that's been the case for them internally for a while now, I suspect, given RDNA's focus. It remains to be seen how many resources they will throw at each design.
I'm not really sure what would be gained by disabling "anything unnecessary", as they already do clock gating that dynamically powers down parts of the chip that are idle. I've heard estimates that graphics hardware blocks consume up to 25% of their die space, which would only increase with things like ray tracing and some of their other recent additions (DSBR, mesh shaders, etc.). So, the incentive is there to reclaim that for general-purpose compute hardware.
I'm not sure if there's anything else in the chip related to that which they could gate, that they aren't already. I mostly meant if they are using existing designs, and the graphics hardware is present, they aren't exposing it in the drivers. I know there's incentive to get rid of the superfluous blocks, but I don't know if we're going to see a redesign like that so soon. Especially since that piece of silicon couldn't also be used in professional graphics cards. What would they use for those? RDNA? Older Vega? Or would they end up with three designs? Who knows at this stage. :p
 

bit_user

Polypheme
Ambassador
Especially since that piece of silicon couldn't also be used in professional graphics cards. What would they use for those? RDNA? Older Vega?
Nvidia and AMD both offer workstation cards that mirror their consumer range, even reusing the same consumer chips, but with a few professional features enabled.

Or would they end up with three designs? Who knows at this stage. :p
Nvidia's P100 and V100 are good examples, here. I suspect neither saw much use as actual graphics cards. They were simply too expensive and didn't offer enough performance advantage vs. the top-end consumer GPUs.

I think the Titan V was mainly sold as a lower-cost deep learning accelerator. I'm betting most people who bought them weren't using them for gaming or other graphics tasks.
 

alextheblue

Distinguished
Nvidia and AMD both offer workstation cards that mirror their consumer range, even reusing the same consumer chips, but with a few professional features enabled.
I know. I was specifically referring to a hypothetical graphics-less piece of silicon. They wouldn't be able to use that silicon in a pro card, which puts their future workstation cards in an interesting position. Would they be using GCN, or some variant of RDNA? If GCN, a new generation, or a rehash?
Nvidia's P100 and V100 are good examples, here. I suspect neither saw much use as actual graphics cards. They were simply too expensive and didn't offer enough performance advantage vs. the top-end consumer GPUs.

I think the Titan V was mainly sold as a lower-cost deep learning accelerator. I'm betting most people who bought them weren't using them for gaming or other graphics tasks.
Those both had Quadro models obviously, but I'm not sure how big the market is for that level of graphical capability. I wouldn't have suggested anyone game on these, nor do I suspect many people bought a Titan V for gaming. My point is ditching the graphics would limit the market for that particular design, and having additional layouts costs them precious resources. Not sure if it's worth it. Guess we'll find out.
 

bit_user

Polypheme
Ambassador
I know. I was specifically referring to a hypothetical graphics-less piece of silicon. They wouldn't be able to use that silicon in a pro card,
Right, which is how we ended up with the example of Quadro P100 and V100.

Those both had Quadro models obviously, but I'm not sure how big the market is for that level of graphical capability. I wouldn't have suggested anyone game on these, nor do I suspect many people bought a Titan V for gaming.
Yeah, and I'm trying to say that even for professional graphics work, the performance per dollar is nearly as lousy. I just don't believe the majority of these get purchased for graphics. Aside from V100 being used to prototype their interactive ray tracing, I'm betting most Quadro P100 and V100 cards are purchased for deep learning or GPU compute. You can't justify their price any other way.

My point is ditching the graphics would limit the market for that particular design, and having additional layouts costs them precious resources. Not sure if it's worth it. Guess we'll find out.
If almost nobody is buying them for graphics tasks, then the size of the market you're foreclosing is almost zero.

Besides, consider this: AMD could still include the display engine and video codec engine (which you'd need for it to be competitive at video processing). Even if they drop the rest of the hardware assist, they could still emulate all of that stuff in software. So, one could still use it as a graphics card. I doubt they'll go to all that trouble, but it's possible.