Chip Fights: Nvidia Takes Issue With Intel's Deep Learning Benchmarks


bit_user

So, you're telling me that Nvidia lied, on a major performance metric and feature, in their own press release? No, I'm not going to just "Google it"; you need to do better than that. Give me a good source saying that the DGX-1 (which is what the news article referenced, and the only Nvidia product I've been talking about) doesn't actually support half-precision floating point. Or else, drop it.
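For anyone following along, here's a quick sketch of what half-precision actually means in practice: half the storage of FP32, a 10-bit significand, and a much smaller exponent range. This is plain NumPy on made-up values, purely to illustrate the format; it says nothing about any particular GPU's FP16 throughput.

```python
import numpy as np

# Illustration only: made-up values showing what FP16 trades away vs. FP32.
weights_fp32 = np.array([1.0001, 3.14159265, 65504.0, 1e-8], dtype=np.float32)
weights_fp16 = weights_fp32.astype(np.float16)

print("bytes per element:", weights_fp32.itemsize, "->", weights_fp16.itemsize)  # 4 -> 2

# Rounding error from the 10-bit significand; tiny values underflow to zero.
for w32, w16 in zip(weights_fp32, weights_fp16):
    print(f"fp32={w32:.8g}  fp16={float(w16):.8g}  abs_err={abs(float(w32) - float(w16)):.3g}")

# Anything above ~65504 overflows to infinity in FP16.
print(np.float16(70000.0))  # inf
```

Storing values in FP16 is the trivial part; the argument here is about whether the hardware actually does its arithmetic at that precision, which is exactly why I'm asking for a real source either way.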

Yes, and you still haven't answered that question. Please provide some real-world compiler benchmarks supporting that claim, or let's move on.

I know you keep saying 100 W, but I never said exactly 100 W.

If you look up the definition of the word "vicinity", it means an approximate location. I was estimating near 100 W - not exactly. And 100 W is probably closer than your estimate of 29 W (the actual TDP figure I quoted was 245 W - not 246 W).

Do you actually understand what it means for the interconnect to be cache coherent? In order to avoid bottlenecks, the aggregate bandwidth has to be far higher than even what MCDRAM provides. I doubt the on-chip interconnect alone can run on just 29 W.
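To put a rough number on "far higher", here's a back-of-envelope sketch. Every figure in it (the ~400 GB/s memory stream, the coherence overhead, the hop count) is an assumed placeholder I'm plugging in for illustration, not a measured spec for any particular part.

```python
# Back-of-envelope only: every figure below is an assumed placeholder,
# not a spec for any particular chip.
mcdram_stream_gbs = 400.0   # assumed sustainable memory stream, GB/s
coherence_overhead = 0.5    # assumed extra mesh traffic from snoops/acks/writebacks
avg_hops = 4                # assumed average number of mesh links a cache line crosses

# A transfer occupies one link per hop, so the bandwidth summed over all
# on-die links has to cover the memory stream, inflated by coherence traffic
# and by however many hops each cache line travels.
aggregate_gbs = mcdram_stream_gbs * (1.0 + coherence_overhead) * avg_hops
print(f"Aggregate on-die link bandwidth needed: ~{aggregate_gbs:.0f} GB/s "
      f"(vs ~{mcdram_stream_gbs:.0f} GB/s at the memory itself)")
```

However you tweak those assumptions, the interconnect ends up moving several times more bytes per second than the memory it feeds, and moving bytes across a die costs real power.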


1. Please provide a link to anyone using 4-bit for this. I'm not aware of that practice, but I'd like to know if people are doing it (see the sketch after this list for what going that low actually costs).
2. When did I ever say anything about 32-bit? You're the only one talking about 32-bit! Don't put words in my mouth!
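Since low-bit inference keeps coming up, here's a rough sketch of what linear quantization of FP32 weights down to n bits looks like, and why 4-bit is a much bigger ask than 8-bit. The weight distribution and the scheme (plain symmetric linear quantization) are my own placeholder choices for illustration, not anything from Intel's or Nvidia's materials.

```python
import numpy as np

def quantize_dequantize(weights: np.ndarray, n_bits: int) -> np.ndarray:
    """Textbook symmetric linear quantization to signed n-bit integers, then back to float."""
    qmax = 2 ** (n_bits - 1) - 1                # 127 for 8-bit, 7 for 4-bit
    scale = np.abs(weights).max() / qmax        # map the largest |weight| to qmax
    q = np.clip(np.round(weights / scale), -qmax, qmax)
    return (q * scale).astype(np.float32)

rng = np.random.default_rng(0)
weights = rng.normal(0.0, 0.05, size=10_000).astype(np.float32)  # made-up layer weights

for bits in (8, 4):
    restored = quantize_dequantize(weights, bits)
    print(f"{bits}-bit: mean abs error = {np.abs(weights - restored).mean():.6f}")
```

With only 15 representable levels, the 4-bit error comes out roughly an order of magnitude worse than 8-bit on the same weights, which is why I'd like an actual citation for anyone deploying it.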

Every time you've accused me of that, it was actually your own misunderstanding, which makes these accusations doubly bad.

I honestly don't know how you've managed not to get banned, but a lot of folks on here aren't as patient as I've been.
 