Inside The World's Largest GPU: Nvidia Details NVSwitch


bit_user


I don't expect they will. Turing is really just Volta 2.0. For AI, the only substantial gain is the new integer (INT8) inference mode added to the Turing-generation tensor cores.
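For anyone curious what that integer path looks like in practice, it's exposed through CUDA's warp-level WMMA API on sm_75-class parts: INT8 matrix fragments with INT32 accumulation. Here's a minimal sketch of my own (the kernel and pointer names are just placeholders, not anything from Nvidia's materials):

// Hedged illustration: one warp computes a 16x16 INT8 tile product with
// INT32 accumulation on the tensor cores (requires compute capability 7.5+).
#include <mma.h>
using namespace nvcuda;

__global__ void int8_wmma_tile(const signed char* a, const signed char* b, int* c)
{
    wmma::fragment<wmma::matrix_a, 16, 16, 16, signed char, wmma::row_major> a_frag;
    wmma::fragment<wmma::matrix_b, 16, 16, 16, signed char, wmma::col_major> b_frag;
    wmma::fragment<wmma::accumulator, 16, 16, 16, int> c_frag;

    wmma::fill_fragment(c_frag, 0);                   // zero the INT32 accumulator
    wmma::load_matrix_sync(a_frag, a, 16);            // load a 16x16 INT8 tile of A
    wmma::load_matrix_sync(b_frag, b, 16);            // load a 16x16 INT8 tile of B
    wmma::mma_sync(c_frag, a_frag, b_frag, c_frag);   // tensor-core INT8 multiply-accumulate
    wmma::store_matrix_sync(c, c_frag, 16, wmma::mem_row_major);  // write INT32 results
}

Launch it with a single warp (e.g. int8_wmma_tile<<<1, 32>>>(...)) over 16x16 device buffers and you're exercising exactly the inference-oriented datapath I'm talking about.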

The only other thing they'd gain by releasing a new server GPU at 12 nm would be an AI-focused version that cuts back on fp64 throughput. But I don't know whether that would save enough die space to be worthwhile, nor whether there's a big enough HPC market for a Volta without tensor cores.

An argument against anything Turing-based is that most HPC and AI customers probably have no interest in the RT cores. So I'm not sure we'll see them in whatever succeeds the V100, or maybe only enough of them to provide API-level compatibility.
 