News Intel Cancels Omni-Path 200 Fabric and Stops Development

bit_user

Splendid
Ambassador
I wonder how much the rise of EPYC had to do with the demise of OmniPath. If you have any reason to think you won't be running a 100% Intel stack, then it must call into question the idea of buying into Intel's interconnect technology.
 

DavidC1

Distinguished
May 18, 2006
I think OmniPath died because it debuted with the Knights Landing processors; when that lineup got cancelled, the roadmap became blurry, which delayed future OmniPath generations.

Also, OmniPath 200 needs a PCIe 4 interconnect to deliver that kind of bandwidth, and no Intel processors have it.
 

bit_user

Splendid
Ambassador
Also, OmniPath 200 needs PCIe 4 interconnect to allow such bandwidth. There are no such Intel processors.
Technically, PCIe 3.0 can bond up to 32 lanes. That probably explains a lot about why they had some CPUs with the fabric integrated in-package.

But that's a really good point about connectivity; I hadn't considered it. However, here's a PCIe 3.0 x8 Omni-Path 100 host adapter (there's also a x16 dual-port card):

https://ark.intel.com/content/www/us/en/ark/products/92004/intel-omni-path-host-fabric-interface-adapter-100-series-1-port-pcie-x8.html

The dual-port card makes more sense, since you might not have a full load on both ports, in the same direction, at the same time. But their willingness to offer an x8 single-port adapter shows they're not too concerned about being bus-bottlenecked, even in a simple uni-directional sense.

Thinking about it some more: if InfiniBand can already offer 200 Gbps per direction, that puts Intel in a really bad spot, given their lack of PCIe 4. POWER has had PCIe 4 for over a year, and now AMD is out there too.
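
For anyone who wants to check the host-bus math, here's a quick back-of-envelope sketch (my own numbers, not from this thread) using the published PCIe per-lane rates: 8 GT/s for PCIe 3.0 and 16 GT/s for PCIe 4.0, both with 128b/130b encoding:

```python
# Rough usable PCIe bandwidth per direction vs Omni-Path link speeds.
# Assumes only 128b/130b line-coding overhead; real-world protocol
# overhead (TLP headers, flow control) shaves off a bit more.

def pcie_gbps(lanes: int, gt_per_s: float) -> float:
    """Usable bandwidth per direction in Gbit/s after 128b/130b encoding."""
    return lanes * gt_per_s * 128 / 130

gen3_x8  = pcie_gbps(8, 8)    # PCIe 3.0 x8
gen3_x16 = pcie_gbps(16, 8)   # PCIe 3.0 x16
gen4_x16 = pcie_gbps(16, 16)  # PCIe 4.0 x16

print(f"PCIe 3.0 x8:  {gen3_x8:6.1f} Gbps vs 100 Gbps Omni-Path 100")
print(f"PCIe 3.0 x16: {gen3_x16:6.1f} Gbps vs 200 Gbps Omni-Path 200")
print(f"PCIe 4.0 x16: {gen4_x16:6.1f} Gbps vs 200 Gbps Omni-Path 200")
```

The takeaway matches the point above: a PCIe 3.0 x16 slot tops out around 126 Gbps per direction, enough for Omni-Path 100 but well short of 200 Gbps, while PCIe 4.0 x16 (~252 Gbps) clears it comfortably.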
 
