Xilinx Pairs AMD's EPYC With Four FPGAs At Supercomputing 2017

AMD knew that PCIe lanes were a big deal for data centers using GPUs or other compute accelerators. I wonder if AMD can take a good share of the market for these systems from Intel.
 

crazygerbil

Prominent
Bit_User, Yeah, I don't know what the heck they are doing there. The chip itself draws <15W maximum, so either the on-board peripherals draw way too much power, or (more likely) they massively over-specified the power supply for it. 225W is actually the maximum power that can be drawn through the power connectors it has (75W from the PCIe slot and 150W from the 8-pin PCIe power connector).
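For reference, a minimal sketch of that connector arithmetic, using the standard PCIe power limits (75 W from the slot, 75 W per 6-pin, 150 W per 8-pin); the connector mix is the one described above:

```python
# Worst-case power available to a card, per the PCIe spec limits.
PCIE_LIMITS_W = {"slot": 75, "6-pin": 75, "8-pin": 150}

def max_board_power(connectors):
    """Sum the maximum power deliverable through the given connectors."""
    return sum(PCIE_LIMITS_W[c] for c in connectors)

# The card discussed here: slot power plus one 8-pin connector.
print(max_board_power(["slot", "8-pin"]))  # -> 225
```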
 

alextheblue

Distinguished
First, you're comparing the TDP of a reference card to peak board power. Second, latency: there's a reason they're using DDR4 and not GDDR.

More importantly, it's not fair to compare a chip with fully programmable logic against a GPU. If GPUs are well suited to your workload, fantastic. If they're not, the only competition FPGAs have left is CPUs. It's sort of like comparing a semi truck to a train for hauling goods. If you ignore the fact that freight trains are bound to rails, trains look like a clear winner; why would anyone ship goods by truck?

Slightly off topic: If you're limited to fewer lanes per CPU, you have to buy more CPUs to support your accelerator cards. That simultaneously boosts CPU sales and the relative competitiveness of CPUs against said accelerator cards. Unfortunately for Intel, AMD has spoiled their fun a little bit.
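A rough sketch of that lane math: how many CPUs you need just to give every accelerator a full x16 link. The lane counts below are the commonly quoted 2017 figures (EPYC: 128 lanes per socket, Xeon Scalable: 48); swap in your own if they differ.

```python
import math

def cpus_needed(num_cards, lanes_per_card=16, lanes_per_cpu=48):
    """CPUs required to supply full-width links to all accelerator cards."""
    return math.ceil(num_cards * lanes_per_card / lanes_per_cpu)

# Four x16 cards, as in the system described in the article:
print(cpus_needed(4, lanes_per_cpu=48))   # Xeon: 2 CPUs needed
print(cpus_needed(4, lanes_per_cpu=128))  # EPYC: 1 CPU suffices
```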
 

bit_user

Polypheme
Ambassador

Wow, looks like I touched a nerve.

Dude, I didn't say this product makes no sense. I was just putting it in perspective, which I felt was relevant after Paul went on about how FPGAs are so much faster per W.
 

alextheblue

Distinguished
You didn't touch a nerve and I really have no stake here. I am not defending Paul's statement - though I actually assumed he was referring to their power consumption in workloads where GPUs are not ideal (thus pitting them against less efficient CPUs).

I just felt it wasn't a fair comparison.
 

bit_user

Polypheme
Ambassador

Okay, go ahead and down-vote Paul - he made the comparison. I just posted facts, of which you contested none.

I think your down-vote mostly has to do with the fact that this concerns an AMD-based system and I quoted figures for an Nvidia GPU.
 

alextheblue

Distinguished
I contest the validity of the comparison itself. FPGAs are better than GPUs for some workloads. GPUs are better than FPGAs for some workloads. Making best use of FPGAs is more challenging, so a generalized comparison is very difficult. Paul should have been clearer about exactly what he meant. Also, I can't downvote him.
Now you've touched a nerve, as you've ascribed false motive.

Nvidia has the most powerful and efficient graphics cards. Period. Your choice of graphics card brand wasn't the issue. Here's another one: right now Intel makes the most powerful, best-overclocking consumer chips for desktops. Full stop.
 

bit_user

Polypheme
Ambassador

Sorry, I'm not buying it. Moreover, the comparison is warranted because, as you point out, FPGAs are better for certain things, and these do host more RAM. So it's completely valid to compare them, just not on only one aspect, and the comparison can't be completely divorced from the specific workload.

Now, unlike what Paul seemed to imply about FPGAs, I never said that Nvidia's GP104 is fundamentally superior to FPGAs just because it can do more int8 TOPS/W. I just made a specific counter-point to demonstrate that in certain circumstances it can still beat FPGAs in both computational performance and efficiency - his chosen metrics.
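To make the metric being argued over concrete, here's a minimal sketch of the TOPS/W calculation. The numbers below are hypothetical placeholders for illustration, not measured figures for GP104 or any Xilinx part:

```python
def tops_per_watt(int8_tops: float, board_power_w: float) -> float:
    """Efficiency in int8 tera-operations per second per watt."""
    return int8_tops / board_power_w

# Hypothetical example: a 40 TOPS card at 180 W vs a 20 TOPS card at 75 W.
print(round(tops_per_watt(40, 180), 3))  # ~0.222 TOPS/W
print(round(tops_per_watt(20, 75), 3))   # ~0.267 TOPS/W
```

The point of the metric is that raw throughput and efficiency can rank two parts differently, which is why the workload matters.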

Honestly, I was surprised that the FPGAs weren't faster or even more efficient. I really only wanted to share that tidbit - not get into some pitched battle about FPGAs vs. GPUs.
 