That explains where the 7nm Threadripper dies went...
What's curious here is the choice of GPUs. Instinct has excellent FP throughput, but it's no match for NVIDIA's GV100 on AI metrics, where matrix math is what counts.
That suggests they're not going to use it for AI-type math, so no learning algorithms. More likely it's for cell-based algorithms, where each node takes more time to process the finer you break the grid down.
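To show what I mean by cell-based scaling, here's a minimal sketch (plain NumPy, purely illustrative; the function and grid sizes are mine, not from any actual code running on this machine): a 3D FP64 stencil sweep where every halving of the cell size multiplies the node count by eight, before you even account for the smaller time steps finer grids usually need.

```python
import numpy as np

def jacobi_step(grid: np.ndarray) -> np.ndarray:
    """One 7-point stencil sweep over the interior of a 3D FP64 grid."""
    out = grid.copy()
    out[1:-1, 1:-1, 1:-1] = (
        grid[:-2, 1:-1, 1:-1] + grid[2:, 1:-1, 1:-1] +
        grid[1:-1, :-2, 1:-1] + grid[1:-1, 2:, 1:-1] +
        grid[1:-1, 1:-1, :-2] + grid[1:-1, 1:-1, 2:]
    ) / 6.0
    return out

for n in (64, 128, 256):  # each refinement step: 8x the nodes
    grid = np.zeros((n, n, n), dtype=np.float64)
    grid = jacobi_step(grid)
    print(f"{n}^3 grid: {grid.size:,} nodes per sweep")
```

This kind of workload is straight FP64 adds and multiplies, which is exactly where Instinct's raw FP throughput pays off and tensor cores don't.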
| | Tesla V100 (SXM2) | Tesla V100 (PCIe) |
| --- | --- | --- |
| GPU | GV100 | GV100 |
| Double Precision | 7.5 TFLOPS | 7 TFLOPS |
| Tensor Performance (Deep Learning) | 120 TFLOPS | 112 TFLOPS |
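Back-of-the-envelope from that table: the SXM2 part's tensor rate is 120 / 7.5 = 16x its own double-precision rate. That gap only matters if the workload is actually dense matrix math the tensor cores can chew through; for FP64 simulation code the comparison looks very different.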
MI25
The MI25 is a Vega-based card utilizing HBM2 memory. Its performance is expected to be 12.3 TFLOPS using FP32 numbers. In contrast to the MI6 and MI8, the MI25 is able to increase performance when using lower-precision numbers, and accordingly is expected to reach 24.6 TFLOPS when using FP16 numbers. The MI25 is rated at <300 W TDP with passive cooling. It also provides 768 GFLOPS peak double precision (FP64) at 1/16th rate. [5]
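Those MI25 numbers hang together if you assume the usual Vega 10 configuration (4096 stream processors at roughly a 1.5 GHz peak clock; both inputs are my assumption, not from the quote above):

```python
# Sanity check of the quoted MI25 rates, assuming Vega 10's 4096 stream
# processors and a ~1.5 GHz peak clock (my assumptions, not from the quote).
stream_processors = 4096
peak_clock_ghz = 1.5

# An FMA counts as 2 FLOPs per lane per cycle.
fp32_tflops = stream_processors * 2 * peak_clock_ghz / 1000

print(f"FP32: {fp32_tflops:.1f} TFLOPS")                          # ~12.3
print(f"FP16: {2 * fp32_tflops:.1f} TFLOPS (2x packed math)")     # ~24.6
print(f"FP64: {fp32_tflops / 16 * 1000:.0f} GFLOPS (1/16 rate)")  # ~768
```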