jimmysmitty :
bit_user :
jimmysmitty :
And I am sure it depends on the case scenario. For example, in Cryptocurrency mining
Yes, it depends. But then you can't go and make blanket statements that x8 vs. x16 doesn't matter. That's why I pointed out some key applications for which these cards were designed and are being marketed where your statement is incorrect.
And cryptocurrency miners are
not buying server/HPC GPUs.
As I said, it depends, but there is a lot of bandwidth in PCIe 3.0:
What kind of response is
that? There's even more bandwidth in NVLink, PCIe 4.0, and PCIe 5.0.
Look, Nvidia judged it was worth adding 480 Gbps of
additional connectivity to their P100 GPU, back in 2016. That's 4.75x what x16 lanes of PCIe 3.0 alone would provide. It should tell you something about the bandwidth needs of AI and HPC. And apparently even that wasn't fast enough, because NVLink 2.0, featured in the V100 GPU, is another 25% faster per link.
https://en.wikipedia.org/wiki/NVLink
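For anyone who wants to sanity-check the bandwidth comparison, here's a quick back-of-envelope sketch (my own numbers, not from Nvidia's marketing): per-lane PCIe rates are quoted in GT/s, and usable bandwidth depends on the line encoding, 8b/10b for PCIe 1.x/2.x and 128b/130b for 3.0 onward.

```python
# Rough per-direction PCIe bandwidth math. These are the standard signaling
# rates and encodings; real-world throughput is a bit lower due to protocol
# overhead (headers, flow control), so treat the results as upper bounds.

PCIE = {
    "1.0": (2.5, 8 / 10),     # (raw GT/s per lane, encoding efficiency)
    "2.0": (5.0, 8 / 10),
    "3.0": (8.0, 128 / 130),
    "4.0": (16.0, 128 / 130),
    "5.0": (32.0, 128 / 130),
}

def lane_gbps(gen: str) -> float:
    """Usable bandwidth of one lane, one direction, in Gbit/s."""
    rate_gt, efficiency = PCIE[gen]
    return rate_gt * efficiency

def link_gbs(gen: str, lanes: int) -> float:
    """Usable bandwidth of a link, one direction, in GB/s."""
    return lane_gbps(gen) * lanes / 8

for gen in PCIE:
    print(f"PCIe {gen} x16: {link_gbs(gen, 16):.2f} GB/s per direction")
```

PCIe 3.0 x16 comes out to roughly 15.75 GB/s per direction (about 126 Gbit/s), which is the baseline the NVLink figures above are being compared against, and each PCIe generation from 3.0 to 5.0 doubles it.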
And it's not just Nvidia. With no PCIe 4.0 products even on the market yet, why is the industry already finalizing PCIe 5.0?
https://en.wikipedia.org/wiki/PCI_Express#PCI_Express_5.0
You're out of your depth. It's understandable that you reflexively thought about multi-GPU gaming, and commented on that. But those lessons don't apply here.
jimmysmitty :
That said, it also depends on the cryptocurrency. Bitcoin, of course not. They have server farms using low-power CPUs and ASICs designed for the task. However, there are some ASIC-resistant currencies that still get the best bang for the buck out of GPUs, and HPC farms would benefit them. Some companies take advantage of that and rent out their servers to mine coins, for a price of course.
As said above, four different times, by four different people, HPC products are not used for cryptocurrency mining. It doesn't make economic sense. For ASIC-resistant currencies, miners can get by just fine with cheap, commodity hardware (or further cost-reduced "mining" SKUs), so that's what they use.
The reason for this is simple. In crypto-mining, if a GPU fails, you just swap it out and go on about your merry way. For many GPU-accelerated simulations and cloud applications, failures are more costly and reliability is at more of a premium, so they're willing to pay more for server-grade hardware and the corresponding levels of support.