Chip Fights: Nvidia Takes Issue With Intel's Deep Learning Benchmarks

Status
Not open for further replies.

bit_user

Champion
Ambassador
[quotemsg=18472257,0,1425755]FP16. Google it. It's even discussed here before & by other people.[/quotemsg]So, you're telling me that Nvidia lied, on a major performance metric & feature, in their own press release? No, I'm not going to just "Google it"; you need to do better than that. Give me a good source saying that the DGX-1 - which is what the news article referenced, and the only Nvidia product I've been talking about - doesn't actually support half-precision floating point. Or else, drop it.
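For anyone unclear on what "half-precision" actually buys (and costs), here's a minimal sketch using only Python's standard library - the `struct` module's `'e'` format code packs IEEE 754 binary16, the same FP16 format at issue here. The specific values are my own illustration, not anything from Nvidia's materials:

```python
import struct

def to_fp16(x: float) -> float:
    """Round-trip a Python float through IEEE 754 half precision (binary16)."""
    return struct.unpack('e', struct.pack('e', x))[0]

# FP16 has an 11-bit significand (10 stored bits + 1 implicit), so every
# integer up to 2048 is exact, but above that the spacing between
# representable values is 2:
assert to_fp16(2048.0) == 2048.0
assert to_fp16(2049.0) == 2048.0   # rounds to nearest even representable value
assert to_fp16(2050.0) == 2050.0
```

That loss of precision is exactly why FP16 is attractive for deep-learning workloads (half the memory traffic, potentially double the throughput) while being useless for most general-purpose numerics.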

[quotemsg=18472257,0,1425755]Remember you asked me how I know performance (with open source compilers) would be half of what's achievable with Intel's proprietary (top) ones.[/quotemsg]Yes, and you still haven't answered that question. Please provide some real-world compiler benchmarks supporting that claim, or let's move on.

[quotemsg=18472257,0,1425755]well 72x2=144; 246-144=102
So you actually did say 100W. On the contrary 72x3=216 which leaves 30W for everything else which is pretty much (more than) enough & if it's not then they have a serious design flaw with it.[/quotemsg]I know how you're arriving at 100 W, but I never said exactly 100 W.

If you look up the definition of the word "vicinity", it means an approximate location. I was estimating around 100 W - not exactly. And 101 W is probably closer than your estimate of 29 W (the actual TDP figure I quoted was 245 W - not 246 W).

Do you actually understand what it means for the interconnect to be cache coherent? To avoid bottlenecks, its aggregate bandwidth must be far higher than even what MCDRAM provides. I doubt the on-chip interconnect alone can be powered in just 29 W.
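To make the two competing estimates explicit, here's the arithmetic we're each doing, with a 245 W TDP and 72 cores. The per-core figures (2 W vs. 3 W) are both rough assumptions, not vendor-published numbers:

```python
# Back-of-envelope power budget for a 72-core part with a 245 W TDP,
# under two assumed per-core power draws.
TDP_W = 245
CORES = 72

for per_core_w in (2, 3):
    core_total = CORES * per_core_w
    uncore_budget = TDP_W - core_total   # left for interconnect, MCDRAM, I/O
    print(f"{per_core_w} W/core -> cores use {core_total} W, "
          f"uncore budget {uncore_budget} W")

# 2 W/core leaves 101 W for the uncore; 3 W/core leaves only 29 W.
```

The disagreement is entirely in the per-core assumption - the subtraction itself isn't in dispute.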

[quotemsg=18472257,0,1425755]Main thing for processing deep learning algorithms is 8-bit & 4-bit precision. Doing them in 16-bit is bad & inefficient; doing them in 32-bit is doubly that.[/quotemsg]
1. Please provide a link to anyone using 4-bit, for this. I'm not aware of that practice, but I'd like to know if people are doing it.
2. When did I ever say anything about 32-bit? You're the only one talking about 32-bit! Don't put words in my mouth!
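For context on what "8-bit & 4-bit precision" means in practice, here's a minimal sketch of symmetric integer quantization - the kind of reduced-precision scheme used for inference. The function names, scale choice, and weight values are my own illustration, not any framework's API:

```python
# Symmetric quantization sketch: map floats to signed integers of a given
# bit width, using a single scale derived from the largest magnitude.
def quantize(values, bits=8):
    qmax = 2 ** (bits - 1) - 1               # 127 for int8, 7 for int4
    scale = max(abs(v) for v in values) / qmax
    return [round(v / scale) for v in values], scale

def dequantize(quantized, scale):
    return [q * scale for q in quantized]

weights = [0.42, -0.17, 0.91, -0.66]         # illustrative "weights"
q8, s8 = quantize(weights, bits=8)
q4, s4 = quantize(weights, bits=4)

# int4 has only 15 usable levels vs. int8's 255, so reconstruction
# error grows noticeably:
err8 = max(abs(a - b) for a, b in zip(weights, dequantize(q8, s8)))
err4 = max(abs(a - b) for a, b in zip(weights, dequantize(q4, s4)))
assert err4 > err8
```

Whether 4-bit is actually in production use (point 1 above) is a separate question - this only shows the accuracy/width trade-off being argued about.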

[quotemsg=18472257,0,1425755]I don't have a problem with opinions; you have problems with basic math.[/quotemsg]Every time you've accused me of that, it was actually your misunderstanding. Which makes these accusations doubly bad.

I honestly don't know how you've managed not to get banned, but a lot of folks on here aren't as patient as I've been.
 
