Nvidia Unveils PCI-Express Variant Of Tesla P100 At ISC 2016

Status
Not open for further replies.

AdviserKulikov

Honorable
Jan 13, 2015
The "TDP restrictions" sound like an excuse to push their proprietary tech and require users to get an Nvidia-only motherboard. PCIe has had 500 W GPUs running on it; the option for additional power connectors is always available.
 

jimmysmitty

Champion
Moderator


It is more about the environment. Sure, they could throw a 300 W GPU in there. The problem is that it would limit the TDP of other parts. Servers and HPC, where this will be going, require a precise design for the best cooling and functionality. They don't have custom liquid cooling or cases but a set design and air cooling.

It is also not stating a TDP limitation of PCIe itself, but rather "Due to TDP restrictions in PCI-Express environments," which again refers to the limitations in cooling when you have tons of these in a single room doing HPC work.

The interesting thing is that this might be close to what the Titan for Pascal will be. I guess we will have to wait and see, though; the current Titan already has 12GB of VRAM, and I would expect the new GPU to have more, maybe 16GB, but that is just me.
 

bit_user

Splendid
Ambassador
This is all a bit silly. There's no reason a server can't dissipate this much power and more. In fact, 4 of the 6 current Intel Xeon Phi SKUs (the PCIe cards, released a couple years ago) are 300 W.

Plus, why do you assume these will only be used in servers? The aforementioned Xeon Phis come in two variants: actively and passively air-cooled. The passive ones are for use in servers, while those with integrated blowers are aimed at workstations.

I've read that the GP100 has no graphics-specific blocks, meaning it cannot be used on a graphics card. We'll have to await a completely different chip.
 

SpAwNtoHell

Reputable
Dec 5, 2015
The future looks promising... but somehow I doubt this chip will end up at some point in a mainstream enthusiast card; probably a GP102 will take that spot, but there will be many factors to consider...

Overall, what bothers me is that Nvidia fights a battle with itself on both fields, mainstream high-end and professional... and this is no good...
 

jimmysmitty

Champion
Moderator


It may be silly, but I assume that Nvidia does it for a reason. Do you think they don't want to be able to sell people a 300 W GPU/coprocessor? I think they would if they could. People assume it is because they want people to go to the next step; that is probably true, and would be true for any company, but I also think they wouldn't leave a possible market open for Intel to scoop up. I can't find anything confirming any restriction of PCIe to a 250 W TDP, but again, there has to be a reason why Nvidia wouldn't try to sell to a market they could easily sell to.

And I am not saying only servers, but the majority of Tesla and Xeon Phi cards are put into HPC farms and not workstations.

I also said this is probably close to what the next Titan will be, not that it is. Nothing is stopping Nvidia from having a version of GP100 with the graphics blocks.
 

bit_user

Splendid
Ambassador
Rumors are that their high-end gaming GPU will be the GP102. I hope and expect it'll have HBM2, but if costs remain too high, I could see them going with 384-bit GDDR5X.
 

DotNetMaster777

Reputable
Jan 22, 2016
Welcome to the world of high-performance computing!

This will be useful for AI and machine learning; it will improve the machine-learning infrastructure!
 

Jarmund

Commendable
Jun 4, 2016
Wow, things look pretty sexy in the HPC world! I wish there was a way to use this marvel and turn it into a gaming card. I mean, it shares the same GPU as the GTX 1080 (if I'm not mistaken... correct me please if I'm wrong), but it's packed with the better-performing HBM VRAM, and 12 gigs of it! Why, Nvidia, why? Why won't you let me burn my credit card?!
 

bit_user

Splendid
Ambassador
No, this uses the GP100, which has no ROPs or other graphics-specific engines. And 12 or 16 GB of HBM2. You need to wait for the GP102, which sounds like it's going to use GDDR5X (probably 384-bit, I'd guess). The GTX 1080 & 1070 both use the GP104.

Because there's not a big enough market of people willing to pay $5k or $10k for a gaming card. So they went for a less price-sensitive market, until they can bring down all the costs that make this thing so expensive.

A more cynical explanation would be that the GTX 1080 already beats all of AMD's existing and soon-to-be-released cards, allowing them to hold back their big weapon.
 

sbandyk

Distinguished
Jan 19, 2012
The claim of power limitations on PCIe does seem odd. The PCIe bus itself can't provide nearly enough power to run any GPGPU card; x16 slots can only provide a max of 75 W to graphics cards. Any Tesla card or high-end video card requires one or more external power connectors to provide extra current.
I'm wondering if they're worried about power consumption in order to make the PCIe cards functional in more server chassis. The thing about servers: they tend to be purpose-built. If you buy a server that's been designed specifically for GPGPUs, it'll have extra-beefy power supplies and extra power leads in the case for the cards.
But there are actually cases where people want to add a GPU co-processor to a chassis that wasn't specifically designed for a GPU, or to one that wasn't designed for a 300 W card (or cards).
Believe it or not, sometimes [ahem, Higher Ed] cost is a huge driver of hardware selection. I've got A LOT of generic cases with Supermicro server boards in them [occasionally desktop boards] in my data center, because compute needs are effectively infinite in Higher Ed research but funding is anything but.

Another thing to consider: to the best of my knowledge, NVLink has still only been announced for POWER systems. I could be wrong, but I'm not aware of NVLink being available for any Intel systems. Because why would Intel do the R&D to adopt a proprietary bus for a niche market, especially when Intel is trying to compete in that market with Xeon Phi?
So, with NVLink systems, they're ALL custom-designed to run whatever Nvidia spec'ed for these cards. With PCIe, you have to deal with what's out there.

Oh yeah, 16-lane PCIe slots are supposed to support up to 75 W. The spec also only officially supports the use of one 6-pin and one 8-pin aux power feed, and guess what that totals up to? 300 W. [Some graphics cards have dual 8-pin, but that's apparently not supported in the PCIe spec.]
Another thing about servers, especially HPC servers: you don't push the spec. You don't overclock CPUs, and you don't run your co-processor cards at 100% of the available power on the bus. We don't like our servers crashing when they're 10 seconds into a simulation, and we REALLY don't like them crashing a week into a simulation.
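The power-budget arithmetic above can be sketched in a few lines. This is just the poster's math restated as code (the per-connector wattages are the commonly cited PCIe spec values: 75 W from the x16 slot, 75 W per 6-pin, 150 W per 8-pin):

```python
# Board power budget for a PCIe card, per the numbers cited in the post above.
SLOT_W = 75       # max power delivered through an x16 slot connector
AUX_6PIN_W = 75   # one 6-pin auxiliary power connector
AUX_8PIN_W = 150  # one 8-pin auxiliary power connector

def max_card_power(six_pin=0, eight_pin=0):
    """Total power budget for a card using the given number of aux connectors."""
    return SLOT_W + six_pin * AUX_6PIN_W + eight_pin * AUX_8PIN_W

print(max_card_power())                        # slot only: 75 W
print(max_card_power(six_pin=1, eight_pin=1))  # slot + 6-pin + 8-pin: 300 W
```

Which is exactly why 300 W is the ceiling for an in-spec card with one 6-pin and one 8-pin feed.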
 

bit_user

Splendid
Ambassador
Yes, of course.

I can understand not wanting to max out your PSU right to the hairy edge, but server gear should be built with enough margin to run it at spec.

I thought any serious HPC effort made liberal use of snapshotting to deal with the inevitable failures. I mean, as soon as you try to do anything at scale, the failures are virtually guaranteed. Hoping stuff doesn't crash is pretty much a plan for failure.
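The snapshotting idea amounts to periodically persisting simulation state so a crashed job resumes from the last checkpoint instead of restarting. A minimal sketch (file name, state layout, and the trivial "work" loop are all hypothetical, purely for illustration; real HPC codes use domain-specific checkpoint formats):

```python
import os
import pickle

CHECKPOINT = "sim_state.pkl"  # hypothetical checkpoint file name

def save_checkpoint(state, path=CHECKPOINT):
    # Write to a temp file first, then atomically rename, so a crash
    # mid-write never corrupts the last good checkpoint.
    tmp = path + ".tmp"
    with open(tmp, "wb") as f:
        pickle.dump(state, f)
    os.replace(tmp, path)

def load_checkpoint(path=CHECKPOINT):
    # Resume from the last checkpoint if one exists, else start fresh.
    if os.path.exists(path):
        with open(path, "rb") as f:
            return pickle.load(f)
    return {"step": 0, "value": 0.0}

state = load_checkpoint()
while state["step"] < 5:
    state["value"] += state["step"]  # stand-in for real simulation work
    state["step"] += 1
    save_checkpoint(state)           # a crash here loses at most one step
```

If the process dies at any point in the loop, rerunning the same script picks up from the last completed step rather than step zero.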
 

TJ Hooker

Glorious
Ambassador

Not sure if this post is serious or not, but if it is:
This is not a gaming card. It doesn't even have video outputs.
 
