Nvidia Announces Pascal-Based Tesla P100 GPU With HBM2


FritzEiv

Honorable
Dec 9, 2013
253
0
10,780
How about the premiums you will have to pay for new motherboards that have this proprietary NVLink, when PCIe 4.0 specs are already out, with backwards compatibility of course? I can't stand companies that block other companies by strong-arming developers, like Intel did; Nvidia is guilty of this as well. I guess it is an Oregon way of thinking, as their HQs are both located in Oregon.

What about the premiums you will have to pay for motherboards that support PCIe 4.0? Besides, PCIe 4.0 is far slower than NVLINK and in workloads that care about the interface's performance, that's a big deal. Also, in the high performance sector, you tend to have to replace the board (and often many other things, if not the entire system) every time it's time for an upgrade.
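For rough context on that speed gap, here's a minimal Python sketch using commonly cited figures; the numbers assumed here are PCIe 3.0/4.0 x16 with 128b/130b encoding and first-generation NVLink at 20 GB/s per link per direction, with the four links the P100 exposes:

```python
# Rough per-direction bandwidth comparison (illustrative figures only).
# PCIe: (transfer rate in GT/s) * lanes * (encoding efficiency) / 8 bits -> GB/s
pcie_links = {
    "PCIe 3.0 x16": 8.0 * 16 * (128 / 130) / 8,   # ~15.75 GB/s per direction
    "PCIe 4.0 x16": 16.0 * 16 * (128 / 130) / 8,  # ~31.5 GB/s per direction
}

# First-gen NVLink is usually quoted at 20 GB/s per link per direction;
# the Tesla P100 exposes four links.
nvlink_per_link = 20.0
nvlink_links = 4
nvlink_total = nvlink_per_link * nvlink_links      # 80 GB/s per direction

for name, bw in pcie_links.items():
    print(f"{name}: {bw:.1f} GB/s per direction")
print(f"NVLink (4 links): {nvlink_total:.1f} GB/s per direction")
```

Even PCIe 4.0 x16 tops out around 31.5 GB/s per direction, versus roughly 80 GB/s per direction across the P100's four NVLink connections.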

AMD and Intel have advanced interface technologies they can use to compete with NVLINK if they want to.

I think that's a fair question, and it's one we asked Nvidia today at GTC. A few answers: First, as others have pointed out, the Tesla P100 version of Pascal is for HPC/data center applications. NVLink is mostly a GPU-to-GPU interconnect, although IBM has announced NVLink support for its CPUs as well. When we pointed out that NVLink on a consumer-class card would pose some of the problems others have raised, it was clear Nvidia realizes this; they essentially acknowledged it would be a little short-sighted to expect people to be OK with buying a motherboard that has a proprietary interconnect. They also seemed to indicate that NVLink can coexist with PCIe, that is, remain isolated for GPU-to-GPU communications (how much this could actually benefit typical gaming scenarios, though, is a big question, as someone else here pointed out). Hopefully we'll see what happens on the enthusiast side of Pascal soon enough, but Nvidia made it clear that they aren't going to talk about that this week, no matter how many ninja interrogation techniques I tried.
 


I don't know why I said "unexplainable"; it was a very poor word choice. What I really meant was "unbelievable" (well, it is believable, but you know what I mean).
 

InvalidError

Titan
Moderator

That makes more sense. When the high-efficiency server market has been a one-horse race for so long, though, having the market leader achieve a monopoly through technical superiority is expected. Qualcomm is in a similar position in the mobile space, with most mid-range to high-end devices using Snapdragon chips.
 
This is big Pascal, ladies and gents... this is the true successor to the Maxwell 980 Ti/Titan. To all those who were holding out on buying a 980 Ti, waiting for Pascal to drop: this is it... so keep holding your breath.
 

southernshark

Distinguished
Nov 7, 2009
1,015
6
19,295
It doesn't really seem like it would help me run World of Tanks any better; it might not even run it at all, so it's pretty much a failure. I don't see any use for it.
 

alextheblue

Distinguished
I doubt PCIe 4.0 will carry any more of a premium than 3.0 did on the motherboard side of things, which was $0: many PCIe 2.0 boards gained 3.0 support with nothing more than a BIOS update and an upgrade to an Ivy Bridge i5/i7.
Same with FM2+: drop in a Kaveri-based chip and you get PCIe 3.0 support. Many FM2+ boards required a BIOS update to work with (or be stable with) these chips as well.
 

TJ Hooker

Titan
Ambassador


I feel like this might be sarcasm, but I can't tell. Poe's Law strikes again?
 

From Nvidia's blog:
Availability
General availability for the Pascal-based NVIDIA Tesla P100 GPU accelerator in the new NVIDIA DGX-1™ deep learning system is in June. It is also expected to be available beginning in early 2017 from leading server manufacturers.
Nvidia already has many P100 GPUs ready; see ComputerBase's coverage of the DGX-1. So I don't know where you got the information about no HBM2 until fall, but it's not true.
 

Realist9

Reputable
May 31, 2014
97
0
4,630
So companies can buy the "DGX-1 Deep Learning system" with Pascal-based GPUs in June.

Companies that want to set up their own system with these GPUs in it can buy them separately in Q1 2017.

So are we guessing that availability of the consumer version of Pascal (for individual PC owners) won't come until Q1 2017 or later?
 


It does matter whether it is a single GPU or dual, because what is to stop nVidia from launching a dual Pascal part with this performance?

You really seem to be raging against nVidia and seem to know that HBM2 won't be available until later. Not sure why. You should hope nVidia keeps challenging AMD, because if they don't, AMD will become just like nVidia in pricing; it has happened before and it will happen again.

https://news.samsung.com/global/samsung-begins-mass-producing-worlds-fastest-dram-based-on-newest-high-bandwidth-memory-hbm-interface

Samsung began mass production of HBM2 at the beginning of this year. Unless you know something we do not (and please, if you do, post your sources), I don't think nVidia would be paper-launching a product a year out, nor would they say businesses will be able to actually buy the product in June.

Guess we can only wait and see.

@turkey3_scratch Tesla is for supercomputing. Quadro is their workstation GPU. So this is not replacing the Quadro, but the current Tesla.
 


So you know what the TDP of a dual Pascal GPU would be? You have information that is not out yet?

I didn't say win, I said challenge. Big difference. I wouldn't consider nVidia a monopoly either. AMD had some meh GPUs up until the Fury line. Or do you think the R9 290X launch was the best ever? It wasn't. Most people with an HD 7970 GHz Edition stayed with it because, at launch, the R9 290X was not worth it, especially not until aftermarket cooling launched and fixed the horrible throttling issues Hawaii XT had.

Mass production means mass volume, which means customer availability is not far off. SK Hynix is a bit behind Samsung, with its 4GB HBM2 chips not in mass production until at least Q3 2016.

As for when, they can't buy it yet, but per the other article on Tom's, data centers will be able to buy systems with this GPU in June. Most products are "paper launched" to start, and then they become available.

Either way, this is the Tesla part, which is how nVidia always starts their new GPUs: in the data center market, then they trickle down to workstation and consumer.

Besides, no one would pay for a Tesla for gaming. Not worth the cost.

Oh, and both are very costly. The FirePro W9100, when not on sale, is normally $4,000. The Quadro equivalent, the K6000, is also around $4,000 normally. Of course, on both sides it depends on what you do, because in some cases AMD is better and in others nVidia is better.
 

Johnpombrio

Distinguished
Nov 20, 2006
252
73
18,870
NOTHING in this NVidia presentation at the GPU Technology Conference has anything to do with consumer products. This was a talk aimed at large corporations, universities, and high-level engineers and programmers.
Wait until NVidia announces its Pascal consumer products later this spring.
 


On sale, the FirePro W9100 is cheaper, yes. I was looking at pre-sale prices.

And if you look at reviews, the W9100 and K6000 go back and forth depending on the application and use.

They are priced close to each other, and it's pretty normal to see them jump up and down with sales, but the original price of the W9100 was the same as the K6000's.
 


The PCIe 2.0 boards that got those BIOS updates were mostly premium boards to begin with, and only on one chipset. It's even worse in the data center, where that doesn't happen as much: OEM server boards often lock out such upgrades and force you to get a new revision of the same board to use updated features.
 

InvalidError

Titan
Moderator

Mostly premium? Asus added Ivy Bridge/PCIe 3.0 support to 30 of their entry-level H61 motherboards:
http://event.asus.com/2011/mb/PCIe3_Ready/
That's more entry-level models than all of their higher-end and business-oriented models combined.
 


I hadn't seen that; my mistake. However, that still doesn't apply to data center systems.
 

InvalidError

Titan
Moderator

In servers, there are often PCIe switch ICs sitting between the CPU and peripherals to increase the usable PCIe lane count, and upgrading the CPU won't upgrade those chips to support the increased signaling rate. This is similar to how many high-end desktop motherboards with SLI/CF support could not be upgraded to PCIe 3.0, because their analog SLI/CF PCIe switches lacked sufficient bandwidth. Low-end motherboards wire the CPU directly to the x16 slot, so the only thing possibly preventing higher speeds on them is poor board layout.
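As a simplified model of that point (illustrative only; the per-lane rates are the usual approximate figures and the scenarios are hypothetical, not any specific board), the link behind a switch trains to the lowest generation in the path, so a faster CPU alone changes nothing:

```python
# Simplified model: a PCIe link trains to the highest generation
# that *all* devices in the path support, so a switch IC caps
# everything behind it regardless of the CPU's capabilities.
PER_LANE_GBPS = {1: 0.25, 2: 0.5, 3: 0.985, 4: 1.969}  # approx GB/s per lane, per direction

def effective_bandwidth(cpu_gen, switch_gen, device_gen, lanes=16):
    gen = min(cpu_gen, switch_gen, device_gen)
    return gen, PER_LANE_GBPS[gen] * lanes

# Gen3-capable CPU behind a Gen2-only switch: still stuck at Gen2 speeds.
print(effective_bandwidth(cpu_gen=3, switch_gen=2, device_gen=3))  # -> (2, 8.0)
# Direct CPU-to-slot wiring on a low-end board: full Gen3.
print(effective_bandwidth(cpu_gen=3, switch_gen=3, device_gen=3))  # -> (3, 15.76)
```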
 

Dave7423

Reputable
Dec 12, 2015
1
0
4,510
This is a card for compute only. The last Tesla didn't have a video output at all: no HDMI, no DisplayPort, no VGA. Compute only, and it's a monster. You cannot compare this card to any Nvidia graphics card; it compares more closely to a CPU than a GPU.
 


In that same respect, it can't be compared to a FirePro, which is a workstation GPU.

The closest comparison would be an Intel Xeon Phi.
 

manleysteele

Reputable
Jun 21, 2015
286
0
4,810

I wouldn't be surprised to see the x70 and x80 parts launched without HBM at the next available window. NVidia needs to start paying for the tooling as soon as possible.

By now, they should have yield failures starting to stack up. I don't know how closely the Pascal architecture is tied to HBM and NVLink; maybe too closely for that suggestion to work. They may nurse Maxwell along for a while yet. It's all guessing until they announce.
 

InvalidError

Titan
Moderator

I doubt Nvidia is having much trouble with HBM or NVLink. HBM is little more than eight 128-bit-wide DDR4-like channels per stack, and NVLink is something like PCIe on steroids. Since Nvidia (and AMD) are used to scaling their GPU cores and adding/removing/shuffling blocks around their ring bus (or whatever architecture they use to move data around the chip) to accommodate architecture tweaks and applications, they should have very little trouble swapping out the HBM controller for a GDDR5(X) one and removing NVLink from chips that aren't intended to have it.
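For a back-of-the-envelope check on that "eight 128-bit channels per stack" description, assuming the roughly 1.4 Gb/s per-pin rate and four HBM2 stacks quoted for the P100:

```python
# Back-of-the-envelope HBM2 bandwidth from the "8 x 128-bit channels" description.
channels_per_stack = 8
bits_per_channel = 128
pin_rate_gbps = 1.4          # assumed per-pin data rate for the P100's HBM2 (Gb/s)
stacks = 4                   # the P100 carries four HBM2 stacks

stack_width = channels_per_stack * bits_per_channel   # 1024 bits per stack
per_stack = stack_width * pin_rate_gbps / 8           # ~179 GB/s per stack
total = per_stack * stacks                            # ~717 GB/s across four stacks
print(f"{per_stack:.0f} GB/s per stack, {total:.0f} GB/s total")
```

That lands right around the ~720 GB/s figure Nvidia quoted for the P100 at the announcement.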
 

manleysteele

Reputable
Jun 21, 2015
286
0
4,810


LOL. I'm just throwing stuff at the wall to see what sticks at this point. If I'm right in any small measure, I'll say I predicted it, and if I'm wrong, you won't hear a peep about it from me.
 

marvin83

Commendable
Apr 11, 2016
4
0
1,510



Either way, AMD is winning in the end, no matter who's using HBM (Intel, nVidia, Samsung, AMD) as they own the patent (alongside Hynix).

Go underdog AMD! :)
 