Nvidia Announces Tesla T4 GPUs With Turing Architecture


PaulAlcorn

Managing Editor: News and Emerging Technology
Editor


It's only 75 W and will live in a server, so the linear airflow will keep it cool. Servers are like tornadoes inside, usually at least 200 LFM. I'll add something to the article to explain that.
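If anyone wants to put numbers on that, here's a quick back-of-the-envelope sketch. The duct cross-section and air properties are my own assumptions, not anything from Nvidia's spec sheet, but it shows why ~200 LFM of linear airflow is plenty for a 75 W card:

```python
# Back-of-the-envelope check: how much does the air heat up when 75 W is dumped
# into a 200 LFM stream? The duct area and air properties are assumed values.

FT_PER_MIN_TO_M_PER_S = 0.3048 / 60

air_speed = 200 * FT_PER_MIN_TO_M_PER_S    # 200 LFM ~ 1.02 m/s
duct_area = 0.010                          # m^2, assumed free area around the heatsink
rho_air   = 1.2                            # kg/m^3, air density at ~20 C
cp_air    = 1005.0                         # J/(kg*K), specific heat of air
power     = 75.0                           # W, the T4's power envelope

mass_flow = rho_air * air_speed * duct_area      # kg/s of air moving over the card
delta_t   = power / (mass_flow * cp_air)         # steady-state air temperature rise

print(f"air temperature rise ~= {delta_t:.1f} K")  # roughly 6 K with these assumptions
```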
 

bit_user

Polypheme
Ambassador

Indeed, TU104 is the same silicon used in the RTX 2070 and RTX 2080. All they did was down-clock it and scale it back to fit a 75 W power envelope. It's then fitted with double the RAM (ECC, too), a passive heatsink, and a several-$k price tag.
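To put the down-clocking in perspective: dynamic power scales roughly with frequency times voltage squared, so a modest cut in clocks and voltage buys a big cut in power. The baseline and scaling factors below are made-up illustrative numbers, not Nvidia's actual figures:

```python
# Rough illustration of f * V^2 scaling; the baseline and scale factors are
# assumptions for the sake of the example, not measured or published values.

def scaled_dynamic_power(base_power_w, f_scale, v_scale):
    """Scale a baseline dynamic power by relative frequency and voltage."""
    return base_power_w * f_scale * v_scale ** 2

base_power = 215.0   # W, desktop TU104 board power (RTX 2080 class), assumed mostly dynamic
target     = 75.0    # W, the T4's power envelope

# e.g. run at ~75% of the clock and ~70% of the voltage
estimate = scaled_dynamic_power(base_power, f_scale=0.75, v_scale=0.70)
print(f"estimated power ~= {estimate:.0f} W (target {target:.0f} W)")  # ~79 W
```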



It's a stretch to call it a graphics card. While it can do desktop virtualization, note the lack of any display outputs.


This seems like wishful thinking.


Well, Nvidia is clearly winning. The question is whether anyone building AI-specific chips can unseat them. We already know Vega 7 nm won't.
 

bit_user

Polypheme
Ambassador

Eh, it doesn't really have anything to do with being only 75 W, as their 250 W Tesla V100 PCIe cards are also passively cooled.

https://www.nvidia.com/en-us/data-center/tesla-v100/
 


I wouldn't be surprised, actually, since there are not only server CPUs but also the Knights-series Xeons, which are pretty much what Tesla competes with, although Intel is said not to be continuing those in the future.


Eh, it doesn't really have anything to do with being only 75 W, as their 250 W Tesla V100 PCIe cards are also passively cooled.

https://www.nvidia.com/en-us/data-center/tesla-v100/

The card has no fans, but it's designed to go into a server rack, which has fans spinning at full speed pushing air through the heatsink and out the back. Even right now, with the door closed, I can hear my servers' fans spinning in my office.

The design is probably due to the way servers are built. I doubt you could throw a V100 into a mid-tower or full-tower case and run it the way it runs in a server chassis without hitting thermal problems.



Except it has more VRAM that's also ECC, and the drivers are much more refined for what they do. Most GPUs start off as HPC-based chips that slowly trickle down to the consumer end after being cut down.

It's the same with CPUs. Most CPUs have a server variant that costs quite a bit more than the desktop counterpart does.
 

bit_user

Polypheme
Ambassador

Yeah, I think that's what Paul was saying, and I agree. Nvidia specifies how much airflow (in CFM or m³/s) is required for their passively-cooled Tesla cards.

It's not just Nvidia, either. AMD makes passively-cooled server cards, as did Intel, when they offered Xeon Phi on a PCIe card.


That's twisting it, somewhat. I don't think it's really true to say it's a server chip before gaming, or vice versa. The past few generations have had the consumer cards released (or, in this case, simply announced) first. But Nvidia obviously collects requirements for each new chip. Some of those are for server applications, while others are for gaming and workstation uses. Then, all their chips (except for GP100 and GV100) are built to fill niches in all of these markets and sold under the appropriate vehicle (Tesla for servers, Quadro for workstations, GeForce for consumers).


No, not in the same way as Nvidia is doing with GPUs. Intel's actual server chips are LGA 3647, and use different silicon than their workstation or desktop chips. AMD happened to use the same Zeppelin die in first-gen Ryzen, Threadripper, and Epyc. But that's a first for them, and I'm not sure if the dies from 7 nm Epyc will trickle down to desktop, or if they are going to bifurcate their silicon.
 

bit_user

Polypheme
Ambassador

Don't be fooled by the size. As I said above, it uses the same chip as the RTX 2070 and RTX 2080, but with double the RAM.

Nvidia did the same thing with the P4, which was also a passively-cooled, low-profile card with the GP104 chip used on the GTX 1070 and GTX 1080.

Probably one of the ways they squeeze it onto such a small board is that the VRM needed for 75 W is just a lot smaller than what the desktop versions require. Not having a fan should also save a little area, since you don't have the fan header & controller, plus perhaps some accommodations for the airflow, etc.
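As a rough sanity check (the desktop board power here is my assumption, not a measured figure), the 12 V input current alone shows why the T4's VRM can be so much smaller:

```python
# Input current from the 12 V rail scales directly with board power; the
# desktop figure below is an assumed ballpark for comparison.

def input_current_a(power_w, rail_v=12.0):
    return power_w / rail_v

for name, watts in [("Tesla T4", 75.0), ("desktop TU104 card (assumed)", 215.0)]:
    print(f"{name}: ~{input_current_a(watts):.1f} A from the 12 V rail")

# Roughly 6 A vs. 18 A: fewer and smaller VRM phases, and 75 W also fits within
# what the PCIe slot itself can deliver, so no auxiliary power connector.
```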
 


https://ark.intel.com/products/93794/Intel-Xeon-Processor-E7-4809-v4-20M-Cache-2_10-GHz

Intel does have server CPU variants; the one you specified is HPC, though. My point is that technologies and ideas typically push to HPC/server first, where the money will be spent, and then come to consumers. Turing, for example, has some ideas from Volta that consumers never saw.
 