AMD Claims First 7nm GPUs With Radeon Instinct MI60, MI50

CaptainTom

Honorable
Hmm, 60 compute units in the cut-down model? That's barely cut down at all for a new process.

Here's hoping they do a V56 version for a limited "Frontier" release. 56 7nm compute units clocked at 1.8GHz+ with that much bandwidth would likely beat the 2080...
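For rough context (my arithmetic, not AMD's): GCN has 64 stream processors per CU, each doing 2 fp32 FLOPs per clock via FMA, so a hypothetical 56-CU part at 1.8 GHz would come out to

$$56 \times 64 \times 2 \times 1.8\,\text{GHz} \approx 12.9\ \text{fp32 TFLOPS},$$

comfortably above the RTX 2080's roughly 10 TFLOPS on paper - for whatever raw TFLOPS are worth across architectures.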
 

Gillerer

Distinguished
The memory being cut in half is more significant and works to effectively segment the two products.

But I agree - it seems TSMC's 7 nm node is in good shape if AMD can get good enough yields for the MI50. Of course, there are a lot of numbers left between MI25 and MI50 to introduce a yet-lower-tier model in a couple of months. Maybe AMD is building up stock for those?
 

bit_user

Polypheme
Ambassador
The main news seems to be fp64 and PCIe4.

Deep learning performance is almost a footnote. It does seem that AMD was blindsided by Nvidia's Tensor cores. That said, at least they have respectable fp64 performance to fall back on (for those HPC applications requiring it).

Anyway, I really suspect 4096 is some kind of architectural ceiling on the shader count, imposed by GCN. They first reached that with 28 nm Fury, back in 2015, and have never gone beyond it. This has really got to hurt, since there's only so much you can do with clock speed. That said, on a process as new as 7 nm, perhaps it wouldn't make a lot of sense to try to go bigger.
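For the arithmetic behind that ceiling - assuming the commonly cited GCN limit of 4 shader engines with 16 CUs each, at 64 stream processors per CU:

$$4\ \text{SE} \times 16\ \text{CU/SE} \times 64\ \text{SP/CU} = 4096\ \text{shaders}$$

which is exactly where Fury, Vega 10, and now Vega 20 all sit.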

Eh, color me disappointed. I knew it wasn't going to take back the crown, but I was hoping for a little more improvement over first-gen Vega. Maybe something that could challenge a GTX 1080 Ti.
 
If performance is really close to the Tesla V100 with a chip that is 33% of the size of the latter, then Nvidia has some work to do.

The V100 is Nvidia's biggest die, while this is barely around 325 mm².

It is also PCIe 4, and the scaling is just amazing if it's anything close to the slide. 99% scaling for multi-GPU setups? Just wow. It is not gaming, but Infinity Fabric will obviously get there at some point with some kind of trick.

Also, AMD's more open-source approach is a winner in the long term. FreeSync is the best example, with recent Samsung 4K TVs using the technology. G-Sync will never make it back. Nvidia should just use FreeSync tech and move on.
 


I am surprised they haven't looked into a possible MCM solution like they have with their CPUs. It's working well there, but it would take a lot of work to make multiple dies act as one GPU, I guess.



You really seem to believe these things, don't you? Samsung launched a whole set of G-Sync monitors this year. I doubt they would have if they thought it was unprofitable. FreeSync is nice, but until the second generation comes and AMD imposes some sort of QA like Nvidia does, it won't be as good. That, and they need to grab a much larger share of the market. As long as Nvidia controls the market, companies will continue to make G-Sync monitors to get those sales.

Hell, Asus launched a 65" G-Sync TV this year. I doubt G-Sync will go anywhere until AMD really holds the performance and price crown again.
 

hannibal

Distinguished
The problem with multi-GPU solutions is that the OS does not support them as well as it supports multiple CPUs. The operating system more or less automatically handles multi-CPU configurations, but if you put together a multi-GPU configuration, you have to do all the heavy lifting of utilising those GPUs yourself (i.e. the SLI and CrossFire driver solutions).
If operating systems supported multi-GPU setups directly, it would make them a more viable option.
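To make the point concrete, here's a minimal sketch of that heavy lifting, written in CUDA for familiarity (AMD's HIP is nearly line-for-line the same). The kernel, sizes, and names are invented for illustration - the takeaway is that the runtime only exposes the devices; partitioning the work across them is entirely the application's job:

[code]
// Illustrative only: double every element of a buffer, split across all GPUs.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void scale(float *data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= 2.0f;
}

int main() {
    int deviceCount = 0;
    cudaGetDeviceCount(&deviceCount);
    if (deviceCount < 1) { printf("no GPUs found\n"); return 1; }

    const int n = 1 << 20;                 // total elements (made up)
    const int chunk = n / deviceCount;     // manual partitioning: the app's job
    float **bufs = new float *[deviceCount];

    // Launch one slice of the work on each GPU, explicitly.
    for (int d = 0; d < deviceCount; ++d) {
        cudaSetDevice(d);                  // explicit per-device selection
        cudaMalloc(&bufs[d], chunk * sizeof(float));
        scale<<<(chunk + 255) / 256, 256>>>(bufs[d], chunk);
    }
    // Synchronize and clean up each GPU separately, too.
    for (int d = 0; d < deviceCount; ++d) {
        cudaSetDevice(d);
        cudaDeviceSynchronize();
        cudaFree(bufs[d]);
    }
    delete[] bufs;
    printf("work split across %d GPU(s)\n", deviceCount);
    return 0;
}
[/code]

SLI and CrossFire were attempts to hide exactly that loop inside the graphics driver, which is why they only ever worked well for the cases the driver writers anticipated.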
 

bit_user

Polypheme
Ambassador

I don't disagree, but they are positioning it for server/cloud applications, which do support it.

Yes, we all wish multi-GPU were better supported for gaming/graphics, but that's not really what this is about.
 

bit_user

Polypheme
Ambassador

No, passive is totally standard for server GPUs, even for 250 and 300 Watt cards. They have forced air induction through the chassis, making any onboard cooling fairly redundant. Combined with the lack of any display connectors to obstruct airflow out the back, you can get pretty decent cooling that way.
 

bit_user

Polypheme
Ambassador
I want to know what genius thought it'd be a good idea to name the 64-CU model MI60 and the 60-CU model MI50. It wouldn't really be a problem, if not for their very logically-named Vega 64 and Vega 56.

I mean, they could've avoided the whole mess by calling them MI75 and MI70, or MI150 and MI160.
 


Maybe the same guy who decided to name the chipsets similarly to Intel's, but one generation ahead?
 

alextheblue

Distinguished

I figured this was more of a snag-HPC-dollars-while-acting-as-a-test-vehicle-for-7nm kind of chip, in prep for Navi. I don't even know that we'll see a consumer version of this 7nm Vega.
 

bit_user

Polypheme
Ambassador

Agreed, but I was hoping to see a larger increase in things like fp32 TFLOPS, which would suggest that it could take on a GTX 1080 Ti, if they'd wanted. ...just to have some idea of what to expect from Navi, you understand.
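For what it's worth, the quoted numbers back that up: with the shader count pinned at 4096, the fp32 gain is almost pure clock speed (back-of-envelope, using the published peak clocks):

$$\text{Vega 64: } 4096 \times 2 \times 1.546\,\text{GHz} \approx 12.7\ \text{TFLOPS}$$
$$\text{MI60: } 4096 \times 2 \times 1.8\,\text{GHz} \approx 14.7\ \text{TFLOPS}$$

That's only about a 16% uplift from a full node shrink - hence the disappointment.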
 

hannibal

Distinguished
Big increases would mean a big chip and a high price. I personally hope for moderate improvement and a reasonable price... not unlike some other new GPUs...
HBM is still silly expensive, as is the interposer, so we're not likely to see a Vega refresh... though it is not impossible. Hopefully Navi will use GDDR6 and be much cheaper to produce than Vega or Nvidia's RTX series...