Nvidia Details Volta GV100 GPU, Tesla V100 Accelerator

Status
Not open for further replies.
The GV100 contains eight Tensor cores per SM, and each core delivers up to 120 TFLOPS
That can't be right.

Each tensor core performs 64 FMAs per clock, translating to 128 FLOPs. At 8 tensor cores per SM, you get 1024 FLOPs per SM per clock, which works out to 1 TFLOPS @ 1 GHz. Now, if we assume it was 120 TFLOPS for the entire GPU, then that would yield a very reasonable clock speed of about 1.4 GHz, assuming all 84 SMs were enabled, or about 1.46 GHz assuming 80 SMs.
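A quick sanity check of the arithmetic above, as a sketch (the FMA rate, tensor core count, and SM counts are the ones quoted in this thread, not independently verified):

```python
# Per-SM tensor throughput and the boost clock implied by a
# 120 TFLOPS whole-GPU rating, using the figures from the post above.

FMAS_PER_TENSOR_CORE = 64        # one 4x4x4 matrix FMA batch per clock
FLOPS_PER_FMA = 2                # each FMA counts as a multiply + an add
TENSOR_CORES_PER_SM = 8

flops_per_sm_per_clock = FMAS_PER_TENSOR_CORE * FLOPS_PER_FMA * TENSOR_CORES_PER_SM
print(flops_per_sm_per_clock)    # 1024 -> 1 TFLOPS per SM at 1 GHz

def implied_clock_ghz(total_tflops, sm_count):
    """Clock (GHz) needed to reach total_tflops with sm_count active SMs."""
    return total_tflops * 1e12 / (flops_per_sm_per_clock * sm_count) / 1e9

print(round(implied_clock_ghz(120, 84), 2))  # ~1.4 GHz with all 84 SMs enabled
print(round(implied_clock_ghz(120, 80), 2))  # ~1.46 GHz with 80 SMs enabled
```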

That's awfully impressive. I'd bet we're not talking about IEEE 754-compliant floating point, here. They must've cut some things besides denormals to get that much speed up over their normal SIMD units.

second generation of NVLink ... allows for up to 300 GB/s over six 25GB/s NVLinks per GPU.
Because it's bidirectional, and they're counting each direction.
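The arithmetic behind that, as a sketch (link count and per-direction rate taken from the quote above):

```python
# NVLink2 aggregate bandwidth as quoted: six links per GPU,
# 25 GB/s per link per direction, counted in both directions.
LINKS_PER_GPU = 6
GB_PER_S_PER_DIRECTION = 25
DIRECTIONS = 2                   # the headline figure is bidirectional

aggregate_gb_per_s = LINKS_PER_GPU * GB_PER_S_PER_DIRECTION * DIRECTIONS
print(aggregate_gb_per_s)        # 300 GB/s, matching Nvidia's number
```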
 


You're right, it's 120 peak Tensor core TFLOPs for the total card. With the caveat that Peak TFLOP/s rates are based on GPU Boost clock. Fixed :)

 
BTW, it sounds like there are some interesting efficiency enhancements, in the form of sub-warp granularity scheduling. Anandtech hinted at this (pretty explicitly, I might add), but said that details would have to wait.

I'm guessing there are probably specific synchronization points at which warps can get broken up & reconstituted. It's not going to be fully-independent scheduling of each lane, or else it'd no longer be "Single Instruction", with its attendant benefits.
 


Ahhh ... you beat me to it :0

This comes on the back of a video I just watched entitled "if this is what the new iPhone looks like, Samsung is in trouble". With the younger generations allegedly being "tech savvy", this obvious bent towards form over function proves that "tech savvy" assumption blatantly wrong. How about a smartphone for which I don't have to say "Let me call you back from a land-line"? How about a phone that doesn't focus on form, giving me the thickness of a slice of cheese and battery life of less than a day, and instead gives me a week of battery life, as my Treo 650 did more than 12 years ago? Give it 3 months ... then the fad that says "I'm in the know, I'm not a bling bling guy" will be the total absence of RGB. I have watched my kids go through ...

I wanna see Britney Spears concert ... I would not be caught dead at a Britney Spears concert
I gotta have an iPhone ... I gotta have anything but an iPhone
Today, my kids and their peers have aged out of the fad thing (oldest approaching 30), but for those peers with younger siblings it's ... I gotta have bling bling up the gazoo ... mark my words :)

 
That math only works by coincidence. The 120 TFLOPS number is derived as in my post, above. On a related note, the tensor cores operate on fp16 inputs. Since they didn't quote another fp16 number, they might've replaced the double-rate fp16 support in P100.

BTW, I was surprised not to see anything about int8 performance.
 
Your "old man" is showing. Seriously, what does that have to do with this?

And yes, I think significantly more kids in this generation are meaningfully tech-savvy than previous generations. I'd actually find it pretty intimidating if the majority were actually nerds like us.
 
So how much is this card going to cost us consumers? A whopping $1,000 USD, or a more reasonable price like $350 USD? And will Nvidia be true to its word, which isn't worth much today, since they have lied to their customers in the past? Frankly, I am getting pretty damn tired of Nvidia boasting a lot and not delivering on their so-called promises. They told us SLI was the way of the future, and that failed. They lied about the memory specs of the 900 series and got caught doing it. Now they are coming out with another so-called super graphics card? I can't help but wonder how long they had this sitting on their shelf, holding off its release just to sucker us consumers yet again.

Sorry all for the nasty response. But I think NVidia has lost their focus on doing right by us consumers and are focusing more on making a quick buck and damn us all again.
 
Sorry all for my nasty response. It's just that I have been a loyal Nvidia user for so long that it has been sad to read about the underhanded practices the company has engaged in and how it has been treating its customers like crap lately. I feel they have failed the customers who used to trust them.
 


You also forgot drivers that cripple previous generation cards to help sell newer cards (Kepler/Maxwell).

No apologies at all needed. This has been Nvidia's mantra for a long time now. I still won't forgive them for my $2500 laptop turning into a paperweight due to the defective 8600Ms it used, which Nvidia got sued over and I was never reimbursed for. Whenever I consider buying an Nvidia card now, I always check to see what the other options are (only AMD left, unfortunately) and determine if the price/performance delta is worth the risk.

 
@Stormfire962 this isn't meant for consumers. It's meant for workstations, HPC, scientific/engineering applications, etc. Based on the rest of your post it kinda seems like you think this is meant for gaming, but it's clearly not; it doesn't even have a display output.

It will cost several thousand USD minimum.
 
"So how much is this card going to cost us consumers? A whopping $1,000 USD, or a more reasonable price like $350 USD? And will Nvidia be true to its word, which isn't worth much today, since they have lied to their customers in the past?"

Another article I read on this said $18,000. That is not a typo. This is a high-end server GPU, not a gaming GPU. It's an 815 mm² chip; it's utterly huge, and extremely expensive to manufacture.
 
This particular GPU is too big ever to be offered at a consumer price point.

As for their other Volta GPUs, I'm skeptical they have PCIe 4.0. They probably taped out (meaning the design was sent off for initial fabrication - originally on a magnetic tape, back in the day) at least 8 months ago. They won't be adding PCIe 4.0 to any Volta GPUs, if it's not already there. With their proprietary NVLink, they have less incentive than others to add it.

I'm a bit puzzled why you seized on this particular aspect, BTW. PCIe isn't generally a bottleneck for games. Even single-GPU deep learning setups probably aren't bottlenecked by it. And multi-GPU deep learning - that's what NVLink2 is for.
 