I just tried uploading that slide image to Google Translate, which seems to work surprisingly well. It revealed basically the same information as the article, so maybe the author already did exactly that. One funny detail: they seem to refer to their multi-node link as "Dragon Chain technology".
The article does contain a glaring error:
Nvidia's H100 delivers 67 FP64 TFLOPS.
No, that's the FP32 rate. The H100's FP64 throughput is exactly half of that, at 33.5 TFLOPS. That's still a lot, and a huge leap over even the previous generation's A100 (which managed only 9.7 TFLOPS). My guess is Nvidia got embarrassed by AMD leap-frogging them on this metric a generation earlier.
Also:
Loongson calls its LG200 accelerator a "GPGPU," which certainly implies that the part not only supports AI, HPC, and graphics workloads but can also perform general-purpose computing. But we can only wonder what the company means by general-purpose computing.
No, "GPGPU" just means it's more general than graphics. That extends as far as HPC and AI, but usually not much beyond. Anything GPU-like is still practically limited to workloads that are number-crunching, highly parallel, and SIMD-friendly.
GPGPU is an old term, going back at least 20 years. It just referred to using a GPU for computations other than graphics.
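To make that concrete, the archetypal GPGPU workload is an elementwise, data-parallel operation like SAXPY (y = a*x + y), where every output element is independent of the others. This is a plain-Python sketch of the shape of such a computation, not actual GPU code:

```python
# SAXPY (y = a*x + y): the classic number-crunching, SIMD-friendly
# workload that GPGPU programming targets. Each output element depends
# only on the inputs at the same index, so a GPU can compute all of
# them in parallel. Plain Python here, purely for illustration.

def saxpy(a, x, y):
    # On a GPU, each loop "iteration" would map to one thread/lane.
    return [a * xi + yi for xi, yi in zip(x, y)]

x = [1.0, 2.0, 3.0, 4.0]
y = [10.0, 20.0, 30.0, 40.0]
print(saxpy(2.0, x, y))  # [12.0, 24.0, 36.0, 48.0]
```

Workloads with heavy branching, pointer chasing, or serial dependencies don't fit this pattern, which is why "general-purpose" in GPGPU doesn't stretch much past HPC and AI.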
Sorry to bother you, but I'm sure @JarredWaltonGPU can set Anton straight on these points.