Intel’s 24-Core, 14-Drive Modular Server Reviewed

Guest
[citation][nom]MonsterCookie[/nom]Even if the communication is somehow trough the backplane, the latency and the bandwidth will be affected by the number of active nodes installed in the case.[/citation]
Hi MonsterCookie. I doubt the number of installed nodes would affect overall performance on all the Compute Modules. That would be a definite design flaw. I emailed the Intel rep and will post his answer as soon as I get it.
 
Guest
[citation][nom]Casper42[/nom]HP Released the 300GB SAS 10K 2.5" drives in Mid December.Surprised Intel hasn't recognized them yet and included them at least in their literature if not with a demo unit.[/citation]
Hi Casper42. Intel has tested eight (four Fujitsu and four Seagate) 2.5" SAS drives so far. It appears that the Fujitsu drives require a firmware update, while the Seagate drives work with the older firmware.

Check out Table 5.2 in the Tested Hardware and Operating System List:

http://download.intel.com/support/motherboards/server/mfs5000si/sb/mfsys25_mfsys35_mfs5000si_thol_rev1_8.pdf

As of the doc's printing date (January 2009), the largest drives listed/tested are models that max out at 146GB. So far they've certified Seagate's Savvio 15K and 10K.2 drives, but the 10K.3 and 15K.2 haven't been tested yet. The Seagate 10K.3 family includes the 300GB drive that I'd like to see on the MFSYS25.

Same goes for the Fujitsu drives. The 300GB MBD2300RC is one of the two Fujitsu 2.5" drive models not supported yet.

Not sure why it takes a while for companies to adopt newer hardware. Maybe there's just a ton of red tape we don't see as customers...



 
Guest
I honestly don't think I would ever recommend Intel server products.

About two years ago my employer purchased an Intel Enterprise Blade system and three Intel iSCSI SANs as part of a solution, for a reasonable but certainly not inexpensive amount.

Fast forward one year:
Intel Enterprise Blades -- Discontinued!
Intel SAN -- Discontinued!

http://www.intel.com/support/motherboards/server/blade.htm
http://support.intel.com/support/motherboards/server/ssr212ma/sb/CS-028262.htm

This means that if something breaks, we can't even get replacement parts. If that's Intel "Enterprise" level, I can't imagine what crap they're going to give to "mid-level" companies.

We're sitting here thinking, boy, we're actually using that SAN space at a good clip, but we can't just add more Intel SAN modules like the Intel reseller told us we could. It's not because we can't afford them, either; it's because we can't even buy them now!

Get the picture? Intel processors = good. Entire Intel servers = bad.
 
Guest
Hi IntelCrap!!!

While I won't disagree with you about the Intel Enterprise Blade Server product line you are referring to, which was discontinued, I will have to disagree with you on this Intel Modular Server and its longevity.

The original Intel Blade Server was actually the IBM BladeCenter product, OEM'd for the most part. At the beginning of the collaboration Intel had a bigger role in the design, but as time went on IBM had more control, simply because the product was a better fit for the IBM customer (the enterprise-class datacenter). So Intel eventually had to pull out, because although it has been hugely successful for IBM, it wasn't for Intel's channel.

I have been a long-time user of Intel's server products and have had nothing but great experiences overall. This product is a 100% Intel design, not OEM'd, and for that reason I am well informed that it will be around for a long time. In fact, my Intel rep has given me the roadmap, which shows commitment to this chassis until the end of 2012. I am happy with the 5-year life span and the support for next-generation processor architectures when they are released.

Bottom line: this product line, being an Intel design, will be around for a good long time.
 
Guest
FYI... We updated the Network tests. Go to "16-Network Test-PassMark Advanced Network Test". -julio
 
Guest
[citation][nom]JAU[/nom]Hi MonsterCookie. I doubt the number of installed nodes would affect overall performance on all the Compute Modules. That would be a definite design flaw. I emailed the Intel rep and will post his answer as soon as I get it.[/citation]

Hi MonsterCookie. Per the Intel rep, the MFSYS25 midplane is designed to handle at least 10GB/s per lane. If there were any limitation due to heavy traffic, it would be at the Storage Module; however, having a second Storage Module in place would take care of that.

Here's a PDF of a third party's findings from running all six compute modules at the same time...

http://www.principledtechnologies.com/Clients/Reports/Intel/Multiflex0208.pdf

Hope this helps. -julio
 
Guest
Hello,
is there any chance you tested, just out of curiosity, whether this SAN takes drives other than the certified Savvio SAS drives?
Would it be possible to test a 2.5" SATA drive (Intel X25-E) to see if it gets recognized by the internal SAN?

I am planning a small VMware deployment, and 3 Intel SATA SSDs (Intel X25-E) in a RAID 5 would be a killer solution for SQL virtualization (basically the equivalent of at least an HP EVA 4400 with 16-20 SAS 15K RPM drives for SQL :)

If there were even a remote possibility to test any 2.5" SATA drive in this beast, I would be very, very interested in the results.

Sure, it is not Intel-supported, but if it works, it's time to say goodbye to HP or Dell hardware for this project :)
 

MonsterCookie
[citation][nom]JAU[/nom]Hi MonsterCookie. Per the Intel rep, the MFSYS25 midplane is designed to handle at least 10GB/s per lane. If there would be any limitations due to high traffic, it would be at the Storage Module, however, having a second Storage module in place would take care of this limitation. Here's a pdf of another party's findings on running all six compute modules at the same time...http://www.principledtechnologies. [...] ex0208.pdfHope this helps. -julio[/citation]

Hi

First of all, thanks for your helpful responses. I think in most topics on TH there is barely any communication between the author/reviewer and the readers.

Second thing: even their PDF datasheet omits the MPI latency (at least I could not find it), which is what I am really interested in. In some applications (especially ours) there are lots of tiny packets sent through MPI, so above 2-4 Gb/s the bandwidth is no longer the issue; the latency is. If the MPI latency is not low enough, the CPUs in this nice case will just sit idle, waiting for data to process. A small ping-pong test like the sketch below would be enough to measure it.
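Something along these lines would do; this is just a minimal sketch assuming a standard MPI implementation (Open MPI or MPICH) and two ranks placed on two different compute modules, with the 8-byte message size and iteration count picked purely for illustration:

[code]
/* Minimal MPI ping-pong sketch for measuring small-message latency.
 * Run with two ranks, ideally one per compute module. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;
    const int iters = 10000;
    char buf[8];                      /* tiny 8-byte message */

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    if (size < 2) {
        if (rank == 0)
            fprintf(stderr, "run with at least 2 ranks\n");
        MPI_Finalize();
        return 1;
    }

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();
    for (int i = 0; i < iters; i++) {
        if (rank == 0) {
            MPI_Send(buf, sizeof buf, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, sizeof buf, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(buf, sizeof buf, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(buf, sizeof buf, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    double t1 = MPI_Wtime();

    if (rank == 0)                    /* half the average round trip */
        printf("one-way latency: %.2f us\n",
               (t1 - t0) / (2.0 * iters) * 1e6);

    MPI_Finalize();
    return 0;
}
[/code]

Compile with mpicc and launch with mpirun -np 2 spread across two modules; the printed number is the one-way small-message latency (half the average round trip).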

To repeat myself (and I feel sorry for Intel): this product, besides being overpriced, is not efficient (at least for our specific applications it would perform really poorly). For the same price, anyone with a bit of HW knowledge can build a cluster that outperforms this by a notable margin, or build the same thing and save 20-30% of the price.
 

G_B_S
@MonsterCookie:
You are completely missing the point! This thing is NOT intended as an HPC/computational cluster. It is positioned as an SMB production server/SAN environment - basically half a rack in one box. So MPI performance/latency and your BYO price arguments are completely irrelevant.

Also, disk I/O does not go through the Ethernet switches at all, but through SAS connections...

Pricing is actually quite attractive (in the intended SMB usage model): as soon as you need >= 3 servers, this box has a price advantage over separate 1U/2U + SAN solutions...
 

VTOLfreak
Ever seen an entire blade enclosure crash when the backplane/midplane craps out? It's not a pretty sight. I'll stick with pizza boxes (1U servers) and individual SAS enclosures and switches. It may take up more rack space and you need to plan out the cable mess, but it will be more reliable in the end.

Most datacenters don't have the cooling needed to support a rack full of blades anyway, so what's the point of using an enclosure like this if you have to keep the other half of the rack empty to stay within cooling limits? I'm not saying it's useless, but in most cases you are better off with other solutions.
 

MonsterCookie
[citation][nom]VTOLfreak[/nom]Ever seen an entire blade enclosure crash when the backplane/midplane craps out? It's not a pretty sight. I'll stick with pizzaboxes (1U servers) and individual SAS enclosures and switches. It may take up more rackspace and you need to plan out the cable mess but it will be more reliable in the end.Most datacenters don't have the needed cooling to support a rack full of blades anyway, so whats the point using an enclosure like this if you have to keep the other half of the rack empty to stay within cooling limits? I'm not saying its useless but in most cases you are better of with other solutions.[/citation]


I said roughly the same, but I got turned down by G_B_S.
Apparently some people DO have the money to buy these, instead of the pain of BYO, so we cannot blame them.
 

mbreitba
Just wondered whether the IOMeter benchmarks were done with one drive, several drives, or the entire array? The results seem quite low for a 14-drive SAS solution, but if it was a 3- or 4-drive setup they seem pretty reasonable.
 