AMD Unveils Vega Radeon Instinct Details, Shipping Soon

It's interesting that AMD can focus its silicon strategy on a single goal, while Nvidia or Intel needs to form partnerships. I am not even sure whether Nvidia looked at AMD's CPU architecture when developing its products.

AMD believes in the multi-GPU environment, while Nvidia is dropping multi-GPU support more and more.
 

MASOUTH

Distinguished
Jun 3, 2008
53
14
18,535
Completely OT, and I know it's correct, but... I'm not sure I will ever be comfortable with the plural of axis.
 

withoutfences

Commendable
Nov 28, 2016
4
0
1,510
I doubt that cryptocurrency miners will pick these up; CC mining is entirely an ASIC market now for any real return on investment, given the current difficulty.

Even for speculative CC mining, ASICs are still the ticket, since you need to mine hard for the first hour, and beyond that, who cares.

I see these in the low-cost server market, be it mobile big-data computational systems, filling the gap between the low-power single-board server market and institutional systems (which these will find a home in).

Now I just need to work for a company that is interested in upgrading their hardware.

Our Dell PowerEdge 2850s and 2950s are getting long in the tooth.
 

HideOut

Distinguished
Dec 24, 2005
560
83
19,070
It won't take years to get it back, because most of us can't get them. The crypto purchases are made by huge companies, mostly in China. They buy pallets of these at a time :(
 

Josh_killaknott27

Prominent
Mar 22, 2017
18
0
510
People keep telling me to sell one of the MSI Gaming RX 480s I have in a CrossFire build; apparently they are going for $350 nowadays.
I just really want Vega to be a really competitive performer, so I have been holding out.
 


GPUs have been worthless for crypto mining since ASICs hit the market and blew GPUs out of the water. That was after the R9 390 era. Once the ASICs hit, the old R9 390/390Xs flooded the used market, causing prices to drop and making them nearly worthless.



What partnership do Intel and NVidia have to have and with whom?

Last I checked, Intel works on its process tech and uArchs itself. In fact, if you look at it, Intel currently has five consumer uArchs in the works that should be out in the next year, while AMD has Ryzen and plans Zen 2, just an enhanced Zen, in a year or so.

I also have not seen anything Intel has done in the server space through a "partnership." They have their own CPUs and their own accelerator (Knights Landing) that they work on and develop with their own technologies.
 

bit_user

Polypheme
Ambassador

Nvidia has now designed two generations of custom ARM cores. So far, they've targeted mobile. In the future... ? It's true that they recently partnered with IBM.

Intel has Xeon Phi and the FPGA accelerator blocks in some of their other Xeon CPUs. I think they're trying to tackle parts of the HPC and AI market in their own way. They don't exactly preclude customers using their CPUs with other GPUs, but I haven't heard of any partnerships...


Have you heard of NVLink? What's that all about, then?

You might be interested in checking out their DGX line of multi-GPU boxes.
 

bit_user

Polypheme
Ambassador

Yeah, but more recently:

http://wccftech.com/amd-gpu-supply-exhausted-by-cryptocurrency/

As has already been said, these Instinct cards are server-grade products that are probably too expensive to be attractive to miners. Plus, by the time they're available (Q3), perhaps the latest mining bubble will have popped.



IBM, which integrated NVLink directly into a version of their POWER CPUs.
 

bit_user

Polypheme
Ambassador

By the time Vega launches, the mining bubble might've popped. In that case, you'd probably be lucky to get $150 for it.

Depending on how much your electricity costs, you might be better off just using your system to do mining, while you're not gaming. Ideally, put it somewhere that it won't require air conditioning - like a garage.
 

PaulAlcorn

Managing Editor: News and Emerging Technology
Editor
Feb 24, 2015
858
315
19,360


I'm with you on that. I originally wrote it as "axis's" just because axes make me think of, well, axes that you chop things with. Our sharp-eyed editing staff caught it, though.
 


Ahh, the "ASIC-resistant" ones. Just like the others were supposed to be. That resistance got beaten, and now they need ASICs too.
You are correct, though. The cost-to-return on these would be insane.

And I didn't think of that as a partnership. I mean, it makes sense for Nvidia, since they are a GPU manufacturer with some mobile CPUs. Intel, on the other hand, has had an HPC design for a very long time with Xeon and Xeon Phi, but their systems will still work with AMD or Nvidia GPUs.

Honestly AMD is basically trying to get part of that 99% back from Intel because it is a big, fat juicy steak of a market and Intel has been stuffing itself with it in the absence of a decent competitive chip.

Guess we will see, but Purley looks to be pretty damn powerful, and that's the wall AMD will hit and have to jump over to truly make a profit for once. The consumer market is a drop in the bucket compared to this.
 

alextheblue

Distinguished


They use consumer cards. Professional cards are too expensive. This is why it's hard to find affordable Polaris cards right now.
 

msroadkill612

Distinguished
Jan 31, 2009
202
29
18,710
Multi-GPU has a pretty bad rap, and I suspect many of the problems with CrossFire etc. relate to frame pacing, or syncing GPUs linked via the vagaries of the system bus.

Multi-GPU on the fabric of an EPYC-type MCM, for example, sounds like quite another thing: blisteringly fast, dedicated, direct links to the other CPU or GPU processors on either socket of a 2P system.

Separately, it seems noteworthy that a dual-slot, dual-GPU rig as in the article actually has 32 dedicated lanes available on EPYC/Threadripper motherboards. Even traditional CrossFire should fare well on TR/EPYC boards; the norm has been 8 lanes per CrossFired GPU.
 


Intel's high end has almost always had 32 lanes for PCIe x16 slots; however, the benefit of going from x8 to x16 is almost unnoticeable (a few percent, normally) and probably won't even be saturated until well after Vega/Volta hit the floor.
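For rough context, here is a back-of-envelope sketch of the raw link rates involved (illustrative numbers only, ignoring protocol overhead beyond the 128b/130b encoding):

```python
# Back-of-envelope PCIe 3.0 throughput per direction (illustrative; ignores
# packet/protocol overhead beyond the 128b/130b line encoding).
GT_PER_SEC = 8e9        # PCIe 3.0 signals at 8 GT/s per lane
ENCODING = 128 / 130    # 128b/130b line-encoding efficiency

def pcie3_gbytes_per_sec(lanes):
    """Approximate usable bandwidth per direction, in GB/s."""
    return GT_PER_SEC * ENCODING * lanes / 8 / 1e9  # bits -> bytes

print(f"x8 : ~{pcie3_gbytes_per_sec(8):.1f} GB/s")   # ~7.9 GB/s
print(f"x16: ~{pcie3_gbytes_per_sec(16):.1f} GB/s")  # ~15.8 GB/s
```

A game rarely needs to stream anything close to 8 GB/s across the bus mid-frame, which is why the measured x8-vs-x16 difference usually lands in the low single digits of percent.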
 

bit_user

Polypheme
Ambassador

You're thinking of games, which keep most of their geometry and textures resident on the GPU.

For big data & deep learning, bus speed absolutely matters. That's why Nvidia created NVLink and NVLink2, and it's why AMD put 128 lanes of PCIe on EPYC. I mean seriously, that socket is > 4000 pins - this was surely neither an inexpensive nor whimsical decision.
 


He specifically mentions CrossFire, and CrossFire is not used in that scenario.

And I am sure it depends on the scenario. For example, in cryptocurrency mining, even though you are maxing out a GPU beyond what any game will do, you can still run a high-powered GPU on a x1 PCIe link: all it needs is to be able to transfer data across the bus, but the data is not large and the cores are just crunching.
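Rough numbers on that, just to illustrate (the per-job sizes below are ballpark guesses, not any particular protocol spec):

```python
# Back-of-envelope: typical mining bus traffic vs. a PCIe 3.0 x1 link.
# The job/share sizes are ballpark guesses for illustration only.
X1_BYTES_PER_SEC = 985e6   # ~985 MB/s per direction for PCIe 3.0 x1
JOB_BYTES = 512            # a new work item (header + target) is tiny
SHARE_BYTES = 64           # so is a submitted result/nonce
UPDATES_PER_SEC = 100      # generous guess at jobs + shares per second

traffic = (JOB_BYTES + SHARE_BYTES) * UPDATES_PER_SEC
print(f"~{traffic / 1e3:.0f} KB/s, i.e. {traffic / X1_BYTES_PER_SEC:.5%} of the link")
```

The one bulky transfer is the initial upload of any large dataset (a DAG, for instance) into GPU memory, and even that is a few gigabytes once, not per second.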
 

bit_user

Polypheme
Ambassador

Yes, and that is off-topic, since this thread is about their server/HPC GPUs.


Yes, it depends. But then you can't go and make blanket statements that x8 vs. x16 doesn't matter. That's why I pointed out some key applications for which these cards were designed and are being marketed where your statement is incorrect.

And cryptocurrency miners are not buying server/HPC GPUs.
 


As I said, it depends, but there is a lot of bandwidth in PCIe 3.0:

https://www.theregister.co.uk/2012/06/20/mellanox_infiniband_ethernet_isc/

That said, it also depends on the cryptocurrency. Bitcoin? Of course not; those farms use low-power CPUs and ASICs designed for the task. However, there are some ASIC-resistant currencies that still get the best bang for the buck out of GPUs, and HPC farms would work well for them. Some companies take advantage of that and rent out their servers to mine coins, for a price of course.
 

Rexer

Distinguished


I was waiting for Vega for 7 months when I clocked my 390X up one last time before it said goodbye. So I thought, "Well, no problem, I'll just get a new 580 8GB OC card till Vega's released." The following week, I checked the prices and was surprised to find them all gone. Originally, I'd hoped to get an Asus 8GB OC card, but there were none to be had. This was less than two months after the 580 was released. Panic set in; I went bonkers searching the retailers till I managed to find a Sapphire 8GB card still at the retail price. That panic buy saved me, because two days later only the 4GB cards were left. A week earlier, cheap 8GBs were all over the place: Frys, Newegg, Amazon, MicroCenter, etc. It was pretty crazy.
A friend of mine who watches financial reports said economists were telling people to put their money into cryptocurrencies and Bitcoin. I thought that didn't sound good, but I never gave it a second thought. It surprises me who pays attention to that stuff.
 

bit_user

Polypheme
Ambassador

What kind of response is that? There's even more bandwidth in NVLink, PCIe 4.0, and PCIe 5.0.

Look, Nvidia judged it was worth adding 480 Gbps of additional connectivity to its P100 GPU in 2016: 4.75x what x16 lanes of PCIe 3.0 alone would provide. That should tell you something about the bandwidth needs in AI and HPC. And apparently even that's not fast enough, because NVLink 2.0, featured in the V100 GPU, is yet 25% faster.

https://en.wikipedia.org/wiki/NVLink

And it's not just Nvidia. Without even any PCIe 4.0 products on the market, why is the industry already finalizing PCIe 5.0?

https://en.wikipedia.org/wiki/PCI_Express#PCI_Express_5.0
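For reference, the generational jumps being finalized look roughly like this for an x16 slot (approximate per-direction figures):

```python
# Approximate x16 bandwidth per direction across PCIe generations.
# 3.0/4.0/5.0 signal at 8/16/32 GT/s per lane, all with 128b/130b encoding.
for gen, gt_per_s in {"PCIe 3.0": 8, "PCIe 4.0": 16, "PCIe 5.0": 32}.items():
    gbytes = gt_per_s * 1e9 * (128 / 130) * 16 / 8 / 1e9   # 16 lanes, bits -> bytes
    print(f"{gen} x16: ~{gbytes:.0f} GB/s per direction")
```

Each generation doubles the per-lane signaling rate, and the fact that the spec work keeps getting pulled forward before the previous generation even ships says plenty about where the demand is.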

You're out of your depth. It's understandable that you reflexively thought about multi-GPU gaming, and commented on that. But those lessons don't apply here.


As said above, four different times, by four different people, HPC products are not used for cryptocurrency mining. It doesn't make economic sense. For ASIC-resistant currencies, miners can get by just fine with cheap, commodity hardware (or further cost-reduced "mining" SKUs), so they do.

The reason for this is simple. In crypto-mining, if a GPU fails, you just swap it out and go on about your merry way. For many GPU-accelerated simulations and cloud applications, failures are more costly and reliability is at more of a premium, so they're willing to pay more for server-grade hardware and the corresponding levels of support.
 


You didn't read the article I linked....

If you had, you would see it has nothing to do with "gaming" but rather with what it takes to actually choke a PCIe 3.0 slot. And it is a lot.

I never said it was enough for AI or all HPC products.

As well, bandwidth increases do not always mean more was needed, just that it was possible. It will eventually be utilized by HPC products, but just because they increased it does not mean it will be choked up immediately.

And again, it was a response to a specific person about gaming. I in no way meant anything beyond that.

The funny thing is that AI will end up just like Bitcoin, using FPGAs and ASICs, as they can be much more powerful than GPUs. When the first ASICs came out for Bitcoin, they did to GPUs what GPUs had done to CPUs: made them pointless. Even HPC will probably eventually head there.
 