The big chief is on its way.
Radeon Instinct MI100 Arcturus Early Specifications Show Impressive 200W TDP : Read more
> The Radeon Instinct MI60 and MI50 accelerators offer up to 58.9 TFLOPs and 53 TFLOPs of peak INT8 performance

Two issues, here. Both nit picks, but it's what I do.
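For reference, here is a rough back-of-the-envelope sketch (not from the article or the poster above) of where those peak figures come from, assuming AMD's published MI60 configuration of 64 CUs at a ~1800 MHz peak engine clock. Since INT8 operations are integer rather than floating-point, the conventional unit is TOPS rather than TFLOPs:

```python
# Rough peak-throughput math for the Radeon Instinct MI60, assuming AMD's
# published configuration: 64 CUs, 64 stream processors per CU, and a
# ~1800 MHz peak engine clock (clock taken from the spec sheet).

cus = 64
lanes_per_cu = 64
clock_ghz = 1.8

# FMA counts as 2 ops per lane per clock.
fp32_tflops = cus * lanes_per_cu * 2 * clock_ghz / 1000
fp16_tflops = fp32_tflops * 2   # packed FP16 runs at twice the FP32 rate
int8_tops   = fp32_tflops * 4   # INT8 runs at four times the FP32 rate

print(f"FP32: {fp32_tflops:.1f} TFLOPS")  # ~14.7
print(f"FP16: {fp16_tflops:.1f} TFLOPS")  # ~29.5
print(f"INT8: {int8_tops:.1f} TOPS")      # ~59; integer ops, so TOPS, not TFLOPs
```

The MI50 follows the same formula with 60 CUs and a slightly lower clock, which lands it at roughly 53 TOPS of INT8.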
> It could be built on Vega or Vega-derived CUs (as rumored), and they disable anything unnecessary. Only time will tell. Not sure how much I buy any of these rumored specs though.

Phoronix has long reported that open source driver patches indicate Arcturus will lack any 3D graphics hardware engines. This is to be a pure compute-accelerator die.
> It could be built on Vega or Vega-derived CUs (as rumored), and they disable anything unnecessary. Only time will tell. Not sure how much I buy any of these rumored specs though.

Lisa Su has acknowledged that AMD will be pursuing a bifurcated strategy of HPC and consumer products. We've already seen the beginnings of this, with Vega 20, however it makes sense that they could go further.
> Lisa Su has acknowledged that AMD will be pursuing a bifurcated strategy of HPC and consumer products. We've already seen the beginnings of this, with Vega 20, however it makes sense that they could go further.

Yeah, that's been the case for them internally for a while now I suspect, given RDNA's focus. It remains to be seen how many resources they will throw at each design.
> I'm not really sure what would be gained by disabling "anything unnecessary", as they already do clock gating that dynamically powers down parts of the chip that are idle. I've heard estimates that graphics hardware blocks consume up to 25% of their die space, which would only increase with things like ray tracing and some of their other recent additions (DSBR, mesh shaders, etc.). So, the incentive is there to reclaim that for general-purpose compute hardware.

I'm not sure if there's anything else in the chip related to that which they could gate, that they aren't already. I mostly meant if they are using existing designs, and the graphics hardware is present, they aren't exposing it in the drivers. I know there's incentive to get rid of the superfluous blocks, but I don't know if we're going to see a redesign like that so soon. Especially since that piece of silicon couldn't also be used in professional graphics cards. What would they use for those? RDNA? Older Vega? Or would they end up with three designs? Who knows at this stage.
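As a purely illustrative aside (neither poster ran these numbers), here is what the hedged "up to 25%" die-space estimate quoted above would imply if all of the reclaimed area went straight back into compute hardware. It is only an upper bound, since memory controllers, I/O, and other shared blocks don't go away:

```python
# Purely illustrative: if graphics-specific blocks take up to ~25% of die area
# (the hedged estimate quoted above) and all of that area were reclaimed for
# compute hardware, how much more compute area could the same die size hold?
# Upper bound only: memory controllers, I/O, and other shared blocks are ignored.

graphics_fraction = 0.25
compute_scale = 1 / (1 - graphics_fraction)

print(f"Same-size die could hold up to ~{compute_scale:.2f}x the compute area")  # ~1.33x
```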
> Especially since that piece of silicon couldn't also be used in professional graphics cards. What would they use for those? RDNA? Older Vega?

Nvidia and AMD both offer workstation cards that mirror their consumer range, even reusing the same consumer chips, but with a few professional features enabled.
> Or would they end up with three designs? Who knows at this stage.

Nvidia's P100 and V100 are good examples, here. I suspect neither saw much use as actual graphics cards. They were simply too expensive and didn't offer enough performance advantage vs. the top-end consumer GPUs.
> Nvidia and AMD both offer workstation cards that mirror their consumer range, even reusing the same consumer chips, but with a few professional features enabled.

I know. I was specifically referring to a hypothetical graphics-less piece of silicon. They wouldn't be able to use that silicon in a pro card, which puts their future workstation cards in an interesting position. Would they be using GCN, or some variant of RDNA? If GCN, a new generation, or a rehash?
> Nvidia's P100 and V100 are good examples, here. I suspect neither saw much use as actual graphics cards. They were simply too expensive and didn't offer enough performance advantage vs. the top-end consumer GPUs.

Those both had Quadro models obviously, but I'm not sure how big the market is for that level of graphical capability. I wouldn't have suggested anyone game on these, nor do I suspect many people bought a Titan V for gaming. My point is ditching the graphics would limit the market for that particular design, and having additional layouts costs them precious resources. Not sure if it's worth it. Guess we'll find out.
I think the Titan V was mainly sold as a lower-cost deep learning accelerator. I'm betting most people who bought them weren't using them for gaming or other graphics tasks.
> I know. I was specifically referring to a hypothetical graphics-less piece of silicon. They wouldn't be able to use that silicon in a pro card,

Right, which is how we ended up with the example of Quadro P100 and V100.
> Those both had Quadro models obviously, but I'm not sure how big the market is for that level of graphical capability. I wouldn't have suggested anyone game on these, nor do I suspect many people bought a Titan V for gaming.

Yeah, and I'm trying to say that even for professional graphics work, the performance per $ is nearly as lousy. I just don't believe the majority of these get purchased for graphics. Aside from V100 being used to prototype their interactive ray tracing, I'm betting most Quadro P100 and V100 cards are purchased for deep learning or GPU compute. You can't justify their price any other way.
> My point is ditching the graphics would limit the market for that particular design, and having additional layouts costs them precious resources. Not sure if it's worth it. Guess we'll find out.

If almost nobody is buying them for graphics tasks, then the size of the market you're foreclosing is almost zero.