News AMD deploys its first Ultra Ethernet ready network card — Pensando Pollara provides up to 400 Gbps performance

Petascale, zettascale, clusters of 100,000 processors... OK, there exist two- and four-processor motherboards where all the processors work in parallel on the same task. Can anyone in the entire country actually connect just one more motherboard to an existing one so the two work in parallel as a minimal cluster, instead of reading the tales and salivating over Elon's future one-million-GPU clusters? Show that anyone here, at least one out of 300 million, or one out of 8 billion, can do anything with their own hands.
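For what it's worth, hooking two boxes together as a minimal cluster is mostly a software exercise once they share an Ethernet link. A rough sketch of the usual route, assuming OpenMPI and mpi4py are installed on both machines, passwordless SSH works between them, and node1/node2 are placeholder hostnames:

```python
# two_node_hello.py -- smallest possible "two motherboards, one job" check.
# Assumes OpenMPI + mpi4py on both machines and passwordless SSH between
# them; 'node1' and 'node2' below are placeholder hostnames.
#
# hosts file:
#   node1 slots=1
#   node2 slots=1
#
# launch:
#   mpirun --hostfile hosts -np 2 python3 two_node_hello.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()            # 0 on one machine, 1 on the other
node = MPI.Get_processor_name()   # hostname of the machine running this rank

# Every rank reports in; rank 0 collects and prints, which only works if
# both machines are actually cooperating on the same MPI job.
reports = comm.gather(f"rank {rank} running on {node}", root=0)
if rank == 0:
    for line in reports:
        print(line)
```

If both hostnames show up in the output, the two boards are already a working (if tiny) cluster.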
 
Second-hand 10G SFP+ server cards can be picked up for a few tens of dollars on eBay.
I have two of these, but they have PCIe 2.0 x8 interfaces. All of the cheap ones do. So, you'd better have either an x8 or x16 slot to spare, or an open-ended x4. Be aware that consumer motherboards with multiple x16 slots will typically reduce the primary x16 slot to just x8 if you populate the others.

I've seen PCIe 3.0 x4 versions, but they're in much higher demand and priced correspondingly.
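If one of these does end up in a narrower or shared slot, it's easy on Linux to confirm what link the card actually negotiated, straight from sysfs. A quick sketch; the PCI address is a placeholder for whatever lspci reports for your NIC:

```python
# pcie_link.py -- report negotiated vs. maximum PCIe link for one device (Linux).
# The PCI address below is a placeholder; substitute your NIC's address
# as reported by lspci (e.g. "03:00.0" becomes "0000:03:00.0").
from pathlib import Path

dev = Path("/sys/bus/pci/devices/0000:03:00.0")

def attr(name: str) -> str:
    return (dev / name).read_text().strip()

# current_* is what the slot actually negotiated; max_* is what the card supports.
print("speed:", attr("current_link_speed"), "| card max:", attr("max_link_speed"))
print("width: x" + attr("current_link_width"), "| card max: x" + attr("max_link_width"))
```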
 
I have two of these, but they have PCIe 2.0 x8 interfaces.
I can only speak for Intel 520s, but they work perfectly fine in an x4 slot so long as you're only using one port (just need to be willing to dremel an x4 slot if needed!). I did move on to 710s when I built my new server and primary machine though because I still wanted full bandwidth and client platform lanes <insert rant here>.
 
I can only speak for Intel 520s, but they work perfectly fine in an x4 slot so long as you're only using one port
x4 would work, if it's open-ended. The cards I'm talking about are physically x8, so they need either an open-ended slot or one that's at least x8 in length.

As for how much bandwidth you actually need, PCIe 2.0 has roughly 500 MB/s (4 Gb/s) of throughput per lane, so four lanes should be plenty for running a single port @ 10 Gb/s. I'm not aware of a PCIe card that won't work, at all, in fewer lanes than it was designed for. So, put it in a slot that has just one lane connected and 4 Gb/s is about what I'd expect to achieve.
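Spelling that arithmetic out as a quick sketch (the per-lane figure comes straight from PCIe 2.0's 5 GT/s signalling rate and 8b/10b encoding):

```python
# pcie_envelope.py -- back-of-the-envelope PCIe 2.0 throughput per lane count.
GT_PER_SEC = 5.0          # PCIe 2.0 signalling rate per lane (GT/s)
ENCODING = 8 / 10         # 8b/10b line coding: 20% overhead
GBPS_PER_LANE = GT_PER_SEC * ENCODING   # ~4 Gb/s (~500 MB/s) usable per lane

for lanes in (1, 4, 8):
    gbps = lanes * GBPS_PER_LANE
    print(f"x{lanes}: ~{gbps:.0f} Gb/s (~{gbps / 8:.1f} GB/s)")

# x1 -> ~4 Gb/s  : a single 10 GbE port would be link-limited
# x4 -> ~16 Gb/s : plenty for one 10 GbE port
# x8 -> ~32 Gb/s : covers both ports of a dual-port card
```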

(just need to be willing to dremel an x4 slot if needed!).
Sometimes, there are other components in the way, once you go past the end of a shorter slot. I have a B650 board with quite the opposite situation: it has 3 slots that are physically PCIe 4.0 x16, but each has only one lane active!

I did move on to 710s when I built my new server and primary machine though because I still wanted full bandwidth and client platform lanes <insert rant here>.
I got a server board with built-in 10 Gb/s NICs, because it's a micro-ATX board and has just 3 slots. If you put a 2.5-slot-wide graphics card in the x16 slot, that'd be your only PCIe card! I don't have any plans to do that, but the weird decision they made was to put the primary slot at the top. So, even a 2-slot-wide card would block the middle x4 slot, forcing you to either run it at x8 or forgo the 3rd slot.

So, I reasoned that it's not a good move to plan on using a network card in there.
 
I got a server board with built-in 10 Gb/s NICs
That would have been my ideal, but LGA 1700 boards all had pretty big compromises, so it wasn't a realistic option.
Sometimes, there are other components in the way, once you go past the end of a shorter slot. I have a B650 board with quite the opposite situation: it has 3 slots that are physically PCIe 4.0 x16, but each has only one lane active!
That would drive me nuts!
 