kyotokid :
...well not sure if NVLink follows the same configuration as PCIe. The board apparently has four native NVLink expansion slots
I'm pretty sure these use PCIe to communicate with the host CPU (and, by extension, its memory). The NVLink communication is probably over-the-top, as you can see from the pictures.
kyotokid :
This is the first application of NVLink I have seen outside of clusters being made for large supercomputers like Summit. Each node in the Summit supercomputer has two IBM Power9 CPUs (each with 48 lanes of what is called "Bluelink" connectivity) and 6 Tesla V100s all on NVLink with 8 ports. What this means is direct connectivity between each of the GPUs and CPUs as well as full interconnectivity between all six GPUs with no need for switching.
Each V100 has 6x NVLink2 lanes. So, I don't know what you mean by "with 8 ports", but you could have each V100 directly connected to the other 5 + 1 CPU. Whether this is optimal depends on your needs. If most communication is GPU <-> GPU (as in deep learning), then yes. But if the GPUs are mostly talking to the CPUs, then having a link to only 1 CPU would probably create a bottleneck between the two CPUs as GPUs try to fetch data from memory attached to the other CPU.
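To make the port arithmetic concrete, here's a minimal sketch (node names are purely illustrative, not any real topology file) showing that 6 links per GPU are exactly enough for a full mesh among 6 GPUs plus one CPU link each:

```python
from itertools import combinations

# Hypothetical 6-GPU, 2-CPU node; each GPU has 6 NVLink ports to spend.
gpus = [f"gpu{i}" for i in range(6)]
cpus = ["cpu0", "cpu1"]

links = []
# One link between every pair of GPUs: a full mesh costs 5 ports per GPU.
links += list(combinations(gpus, 2))
# The one remaining port on each GPU goes to a CPU (3 GPUs per CPU here).
links += [(g, cpus[i // 3]) for i, g in enumerate(gpus)]

def degree(node):
    return sum(node in link for link in links)

for g in gpus:
    assert degree(g) == 6  # all 6 NVLink ports used, none to spare
print("full GPU mesh + 1 CPU link each fits in 6 ports per GPU")
```

Note that in this layout each CPU only gets 3 GPU links, which is exactly why GPUs hanging off the "wrong" CPU would have to cross the inter-CPU link to reach the other socket's memory.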
NVLink has routing capabilities. So, for larger configs, they just reduce the connectivity and traffic can hop through one or more intermediate nodes. I've not heard of a centralized crossbar.
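A toy illustration of that hop-through behaviour (this is a generic graph sketch, not NVLink's actual routing logic): with reduced connectivity, any destination is still reachable, just in more hops.

```python
from collections import deque

# Hypothetical reduced topology: 8 GPUs wired in a ring, each with
# direct links only to its two neighbours.
n = 8
neighbours = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}

def hops(src, dst):
    """Minimum number of link traversals from src to dst (BFS)."""
    seen, queue = {src}, deque([(src, 0)])
    while queue:
        node, d = queue.popleft()
        if node == dst:
            return d
        for nb in neighbours[node]:
            if nb not in seen:
                seen.add(nb)
                queue.append((nb, d + 1))

print(hops(0, 1))  # adjacent: 1 hop
print(hops(0, 4))  # opposite side of the ring: 4 hops
```

The trade-off is the obvious one: you save links, but worst-case traffic crosses several intermediate GPUs instead of one direct link.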
kyotokid :
I have never heard of Tesla cards being used for graphics output; rather, they are computational accelerators that can crunch numbers quickly
I think it's clear these aren't Tesla cards. They never said they were; you're just assuming that. The PCIe version of the Tesla V100 appears not to have NVLink2 and doesn't have a graphics port. Also, Teslas are passively cooled, whereas these clearly aren't.
http://images.nvidia.com/content/tesla/pdf/Tesla-V100-PCIe-Product-Brief.pdf
I think the DGX Station probably uses some variant of the Titan V, but with fully-enabled GV100s and maybe augmented over-the-top connectivity.
kyotokid :
True, Vega can make use of system memory, but that will be at a cost in speed due to having to share processing among more channels as well as having fewer stream processors/cores than a dedicated GPU card. The Tesla V100 and Quadro GP100 have 5120 cores while the Vega in the Ryzen 2400G has only 704. Also Vega only supports OpenCL GPU rendering.
I don't understand why you're comparing big, dedicated Nvidia GPUs to AMD APUs. I was talking about Vega 64, which has 4096 stream processors and up to 16 GB of HBM2. Again, look up "Vega HBCC".