Ultimate, price-no-object performance


Jan 11, 2010
I am a long-time tech enthusiast who hasn't bothered to keep up with hardware advances in a while (it got a lot harder around 1995 or so, and I'm lazy). So I have a decent idea of what the different technologies accomplish and why, but I know very few specifics (for example, when and where SSD vs. SAS vs. SATA is best).

I am in a position to build a set of servers with a single goal: ridiculous performance. Because it's relevant, the use case is high frequency trading. That should give you an idea of the kind of budget I can have and what the potential rewards are. I don't know what flavor of *nix yet, but it will be 64-bit with a realtime kernel.

I will source W5590 chips if I can get them, so my first question is: which mobo? It seems that SuperMicro and Intel would be the safe bets. I need the X58 chipset obviously, but beyond that, it seems my biggest concern will be PCI-E 2.0 capability; we can easily end up dealing with data loads that push 10GbE close to max, and latency is the biggest issue we face. I'll probably be looking at 2-4 10GbE ports per box. (No interest in InfiniBand; we'll accept a small performance hit to avoid vendor lock-in.)

Beyond that, if these boxes won't be used for massive storage loads, how can I minimize the effect of disk activity? Is it just a matter of speed, and so the answer is SSD?

For those who know all the things I have questions about: I'm well aware that I'm in way over my head. But I have to start somewhere.

Thanks in advance for all help.
Well, if you need some density for processing power, you could go dual LGA 1366 with the Intel 5520 chipset.

For 10GbE you only need PCIe 2.0 x4 per port (so a dual-port card needs an x8 link, and two dual-port cards need two PCIe x8-or-better slots).

Look into the Supermicro MBD-X8DAH+-O (dual LGA 1366, Intel 5520), though keep in mind that this board is larger than E-ATX and requires a case from Supermicro (unless you want to mod one, which it doesn't sound like you do).

Quick specs: 18 DDR3 DIMM slots (up to 144GB ECC registered RAM), two PCIe 2.0 x16 slots plus a PCIe 2.0 x8 in an x16 slot, dual LGA 1366; for the rest, check either the link or Supermicro's site.
Their cases seem fine. I'd recommend a 3U or 4U chassis just for space reasons, and I don't think you'd need many of these servers to do what you need (though I've been wrong before); that's a lot of processing power in one machine.


Jan 15, 2010
I would not build it myself if price is no object; your time is better spent on programming!

I would agree that dual-socket Nehalem is the way to go. My app just went live at a large investment bank on HP DL380 G6 boxes with dual X5570s and 72GB RAM, and that machine is absolutely fantastic (other vendors make similar boxes). I run two disks in RAID1 for the OS and three disks in RAID5 for local storage, even though the real data is on a SAN. It comes standard with two gigabit Ethernet ports, but you can add additional controllers. The coming 6-core 32nm CPUs would probably be better, but they're not out yet.

I am assuming (hoping) that you are using C++ given the performance requirements. In that case, make absolutely sure you get the Intel compiler; the optimizations it can perform on Nehalem make a real difference compared to GCC (my app is a Monte Carlo-based risk app).

I was very pleasantly surprised at how well that RAID array performs, and we cut corners by using 10k SAS drives. I would think you'd be OK without SSD. What would the application use the disks for?