Nvidia Pits Tesla P40 Inference GPU Against Google’s TPU

Status: Not open for further replies.
Good information. I liked the pointer to the Google paper. Do you have a reference you can share for the Nvidia paper that describes the test environment behind "Tesla P40 even seems to be twice as fast as Google's TPU"? It would be neat to see a general-purpose GPU outperform an ASIC, but it seems unlikely. The section noting that memory bandwidth was the root cause was interesting.
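A back-of-the-envelope roofline model shows why bandwidth can cap an accelerator well below its peak rating. This is just a sketch, not anything from either vendor's material: the `ops_per_byte` arithmetic intensities are made-up illustrative values, and the TPU figures (~92 peak INT8 TOPS, ~34 GB/s weight bandwidth) are as I recall them from the Jouppi et al. paper.

```python
def roofline_tops(peak_tops, bandwidth_gbs, ops_per_byte):
    """Attainable throughput = min(compute roof, bandwidth * intensity)."""
    # bandwidth_gbs * ops_per_byte gives ops/sec in units of 1e9 (Gops/s);
    # divide by 1000 to express it in TOPS (1e12 ops/sec).
    return min(peak_tops, bandwidth_gbs * ops_per_byte / 1000.0)

# Low arithmetic intensity: the chip is starved by memory bandwidth
# and delivers a small fraction of its peak rating.
print(roofline_tops(peak_tops=92, bandwidth_gbs=34, ops_per_byte=100))   # 3.4 TOPS, memory-bound

# High arithmetic intensity: the same chip reaches its compute roof.
print(roofline_tops(peak_tops=92, bandwidth_gbs=34, ops_per_byte=5000))  # 92 TOPS, compute-bound
```

Under assumptions like these, a bandwidth-starved design can lose to a nominally slower chip on real, memory-bound layers, which would be consistent with the article's "bandwidth was the root cause" point.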

Your "power/performance" and "cost/perforamance" analysis seems right. Consider also the google claim of more predictable (lower variance) response time from inferencing using the TPU. Not sure why this would be true vs. GPU, but google says it is. Consistent, fast performance is valuable.
 

Petaflox · Prominent · Mar 30, 2017
Are you saying that Google designed and produced its own TPU? If so, it will have cost far more than a Tesla P40. And yes, the TDP will be much lower, because it is designed to do exactly what they need.
 

t1gran · Prominent · Mar 6, 2017
"In the Nvidia-provided chart below, the Tesla P40 even seems to be twice as fast as Google’s TPU for inferencing" - isn't it about inferences/sec WITH <10 ms latency only? Because the total inferencing performance of Google's TPU seems to be twice as fast as Nvidia’s Tesla P40 - 90 INT8 vs 48 INT8.
 

B Salita · Reputable · Jun 1, 2015
Looks like Nvidia will have to bow out of the NN market. The world needs a third-party TPU clone. Where's Kickstarter when you need 'em?
 

Petaflox · Prominent · Mar 30, 2017
Google made a custom chip; there is no point in comparing it to a commercial chip. Also, does Google have its own patents on it? Does it use other companies' patents? Are there any patents at all? This article doesn't reflect the professional standard I'm used to reading on Tom's website.
 