Intel’s 24-Core, 14-Drive Modular Server Reviewed


kevikom

This is not a new concept. HP and IBM already have blade servers. HP has one that is 6U and modular; you can put up to 64 cores in it. Maybe Tom's could compare all of the blade chassis.
 

sepuko

Do the blades in IBM's and HP's solutions have to carry hard drives to operate? Or are you talking about a certain model? What are you talking about, anyway? I'm lost in your general comparison of "they are not new because those guys have had something similar / the concept is old."
 
Guest
Why isn't the poor network performance addressed as a con? No GigE interface should be producing results at FastE levels, ever.
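For anyone who wants to check that claim on their own hardware, here is a minimal, generic TCP throughput sanity check. It is not part of the review's methodology; the port number and 10-second duration are arbitrary placeholders. Run it in server mode on one node and client mode on another.

```python
# Minimal TCP throughput check (sketch). Run "server" on one node and
# "client <server_ip>" on another; port and duration are placeholders.
import socket
import sys
import time

PORT = 5201            # arbitrary test port (assumption, not from the review)
CHUNK = 64 * 1024      # 64 KiB send buffer
DURATION = 10          # seconds to transmit

def server():
    with socket.create_server(("", PORT)) as srv:
        conn, _addr = srv.accept()
        total, start = 0, time.time()
        # Count bytes until the client closes the connection.
        while (data := conn.recv(CHUNK)):
            total += len(data)
        elapsed = time.time() - start
        print(f"Received {total / 1e6:.1f} MB in {elapsed:.1f} s "
              f"-> {total * 8 / elapsed / 1e6:.0f} Mbit/s")

def client(host):
    payload = b"\x00" * CHUNK
    with socket.create_connection((host, PORT)) as conn:
        end = time.time() + DURATION
        while time.time() < end:
            conn.sendall(payload)
    print("Done sending; check the server's report.")

if __name__ == "__main__":
    server() if sys.argv[1] == "server" else client(sys.argv[2])
```

A healthy GigE link should report somewhere north of 900 Mbit/s; numbers down around 94 Mbit/s would suggest a Fast Ethernet negotiation, driver, or switch problem.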
 
So, when are you gonna start folding on it? :p

Did you contact Intel about that network thing? Their network cards are normally top end. That has to be a bug.

You should have tried to render 3D images on it. It should be able to flex some muscles there.
 

MonsterCookie

Now frankly, this is NOT a computational server, and I would bet 30% of the price of this thing that the product will be way overpriced and that one could build the same thing from normal 1U servers, like the Supermicro 1U Twin.
The nodes themselves are fine, because the CPUs are fast. The problem is the built-in Gigabit LAN, which is just too slow (neither the throughput nor the latency of GigE was meant for these purposes).
In a real computational server the CPUs should be directly interconnected with something like HyperTransport, or the separate nodes should communicate through built-in InfiniBand cards. The MINIMUM nowadays for a computational cluster would be built-in 10G LAN, plus some software tool that can reduce the TCP/IP overhead and decrease the latency.
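To make the latency point concrete, here is a rough, generic TCP ping-pong sketch (not tied to this product) that reports the average round-trip time between two nodes; the port and iteration count are arbitrary.

```python
# Rough TCP round-trip latency probe (sketch): bounce a 1-byte message back
# and forth ROUNDS times and report the average round trip. Port and round
# count are placeholders, not values from the article.
import socket
import sys
import time

PORT, ROUNDS = 5202, 10000

def server():
    with socket.create_server(("", PORT)) as srv:
        conn, _ = srv.accept()
        conn.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
        for _ in range(ROUNDS):
            conn.sendall(conn.recv(1))   # echo each byte straight back

def client(host):
    with socket.create_connection((host, PORT)) as conn:
        conn.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
        start = time.perf_counter()
        for _ in range(ROUNDS):
            conn.sendall(b"x")
            conn.recv(1)
        rtt_us = (time.perf_counter() - start) / ROUNDS * 1e6
        print(f"average round trip: {rtt_us:.1f} microseconds")

if __name__ == "__main__":
    server() if sys.argv[1] == "server" else client(sys.argv[2])
```

Gigabit Ethernet round trips typically land in the tens of microseconds or worse, while InfiniBand fabrics are usually down in the low single digits; that gap is what hurts tightly coupled MPI workloads.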
 
Guest
Unless it's a typo, they benchmarked older AMD Opterons. The AMD Opteron 200s are based on Socket 939 (I think), which is DDR1 ECC, so there's no way it would stack up to the Intel.
 
Guest
The server could be used as an Oracle RAC cluster, but as noted, you really want better interconnects than 1Gb Ethernet. And I suspect from the setup it makes a fair VM engine.
 
Guest
Actually, it is under $20k for a fully configured system with 6 blades - I priced it up online. You can push it a bit higher than this if you go for very high-end memory (16GB+) and top-bin processors, but for most, the fully loaded config would come in around $20k. It's very well priced.
 

kittle

The chassis for your client was under $20k... no problem.

But to get one IDENTICAL to what was tested in this article - what does that cost? I would think "price as tested" would be a standard data point.

Also - the disk I/O graphs are way too small to read without a lot of extra mouse clicks, and even then I get "error on page" when trying to see the full-res version. That makes all the work you spent gathering the disk benchmarks rather useless if people can't read them.
 
Guest
The price as tested in the article is way less than $20k. They only had 3 compute modules and a non-redundant SAN and switch. Their configuration would cost around $15k. Seriously, just go and price it up online; search for MFSYS25 and MFS5000SI.
 

asburye

I have one sitting here on my desk with 6 compute modules, 2 Ethernet switches, 2 controller modules, 4 power supplies, and 14 143GB/10K SAS drives. The 6 compute modules all have 16GB RAM and 2 Xeon 5420s each, and 4 of them have the extra HBA card as well; our price was under $25,000 with everything, including shipping and tax. The Shared LUN Key is about $400. We bought ours about 2 months ago.
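For perspective, here is the quick arithmetic on that configuration; the only assumption added here is that the Xeon 5420 is a quad-core part (all 5400-series Xeons are), and the price is simply taken as the $25,000 ceiling from the post.

```python
# Back-of-the-envelope totals for the configuration described above.
# Core count per CPU is the one assumption; everything else is from the post.
modules          = 6
cpus_per_node    = 2
cores_per_cpu    = 4            # Xeon 5400-series parts are quad-core
ram_per_node_gb  = 16
drives, drive_gb = 14, 143
price_usd        = 25_000       # "< $25,000 with everything"

cores  = modules * cpus_per_node * cores_per_cpu   # 48 cores
ram_gb = modules * ram_per_node_gb                 # 96 GB
raw_tb = drives * drive_gb / 1000                  # ~2.0 TB raw

print(f"{cores} cores, {ram_gb} GB RAM, {raw_tb:.1f} TB raw storage")
print(f"roughly ${price_usd / cores:,.0f} per core (before any discounts)")
```

That works out to 48 cores, 96 GB of RAM, and about 2 TB of raw storage for roughly $520 per core, which lines up with the "very well priced" comments above.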
 
nukemaster said: "So, when are you gonna start folding on it? Did you contact Intel about that network thing? Their network cards are normally top end. That has to be a bug. You should have tried to render 3D images on it. It should be able to flex some muscles there."
Nahhh... you don't run F@H on CPUs any more ;)
You run it on GPUs! CUDA FTW! :p
 
Guest
Thanks for the comments, suggestions, and questions, everyone. Your input is appreciated and will be applied to future reviews.

We're addressing the issue with the network test results. - julio
 

Area51

This is the only solution that I can think of that has the integrated SAN solution. None of the OEMs (Dell, HP, IBM) can do that in their solutions as of yet. Also, if you configure the CPUs with L5430s, this becomes the perfect VMware box.
As far as the power switch... remember that in a datacenter environment you do not turn off the chassis. There is less chance of an accidental shutdown if there is no power switch on the main circuit. This is 6 servers, a network switch, and a SAN solution in one; you do not want a kill switch. That is why no OEM ever puts a power switch on their blade chassis.
 
Guest
Hi folks. We're re-running the network test this weekend. Stay tuned for the update. - julio
 

MonsterCookie

I went over the review/test, and as far as I understood, there is only a single Gigabit LAN switch in this system.
Besides, the individual nodes do not have their own HDDs, but use a NAS instead.

This is particularly bad, because the disk I/O is also handled by this poor LAN switch.
One should use at least two switches: one for internode communication, and the other for NFS and so on.


Second point: if the prices I have seen in the forum are right, then $20k is equivalent to about 15,600 euros.
For that money I can buy, from the company we get our equipment from, the same system built from Supermicro 1U Twins. For that price I even get dual Gigabit LAN per node and one InfiniBand port per node.
Such a system could indeed be called a computational server, while the Intel system is just something like a custom-made, weakly coupled network of computers that comes with a hefty price tag.
Of course, one could argue that buying 3 Supermicro Twins plus an InfiniBand switch is not as neat-looking as this Intel box, but once it is in a rack, who cares?
I would not really want to listen to this Intel machine on my desk anyway, so it should be put in a nice rack as well.
 
Guest
Area51 said: "This is the only solution that I can think of that has the integrated SAN solution. None of the OEMs (Dell, HP, IBM) can do that in their solutions as of yet. Also, if you configure the CPUs with L5430s, this becomes the perfect VMware box. As far as the power switch... remember that in a datacenter environment you do not turn off the chassis. There is less chance of an accidental shutdown if there is no power switch on the main circuit. This is 6 servers, a network switch, and a SAN solution in one; you do not want a kill switch. That is why no OEM ever puts a power switch on their blade chassis."

Thanks Area51.

From the documentation, the Intel rep, and my own general observations, I see this system sitting in a small or mid-sized data center. It really was easy to work with, and I think its lack of complexity is a bonus for smaller businesses that may not have the resources to build a mini-datacenter out of one of their back offices.

At the same time, I feel that the MFSYS25 has great potential as a remote server due to the built-in KVM, the chassis' remote management, and its modular design. The chassis just lacks a remote shutdown to make it even more manageable from afar.

I've seen some power supplies on blade chassis with built-in power switches. I just feel better unplugging a power cable after the system is completely shut off, as opposed to yanking the cord while the chassis is still running. I'm just not a fan of sparks flying from a sudden plug disconnect.

...or maybe just shut the chassis down from the UPS kill switch...

 
Guest
MonsterCookie. The Compute Modules connect to a common midplane inside the chassis. The midplane is the backbone of the system as everything inside the chassis plugs into it, one way or another.

Storage work is handled by the Storage Module, not the Ethernet Switch Module. Both modules can be redundant with a second set of Storage and Ethernet Switch Modules.
 

MonsterCookie

JAU said: "MonsterCookie. The Compute Modules connect to a common midplane inside the chassis. The midplane is the backbone of the system as everything inside the chassis plugs into it, one way or another. Storage work is handled by the Storage Module, not the Ethernet Switch Module. Both modules can be redundant with a second set of Storage and Ethernet Switch Modules."

OK, but does that mean that the communication between nodes goes through this midplane with low latency and high bandwidth, like on an IBM/Sun machine?
I think Intel should have made these points crystal clear for their potential customers and given the timings and the maximum bandwidth in the datasheet. Even if the communication does go through the midplane, the latency and the bandwidth will be affected by the number of active nodes installed in the chassis.
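The datasheet is indeed where the real numbers belong, but as a rough illustration of that last point, the toy model below shows how per-node bandwidth shrinks when several active nodes contend for a single shared gigabit path. The 1 Gbit/s figure and the even-split, zero-overhead assumption are illustrative choices here, not Intel specifications.

```python
# Toy model: per-node share of bandwidth when N active nodes all transmit at
# once through one shared gigabit path. Ignores protocol overhead entirely;
# illustrative only, not Intel specifications.

LINK_MBITS = 1000  # assumed GigE-class path shared by the active nodes

def per_node_share(active_nodes: int, link_mbits: float = LINK_MBITS) -> float:
    """Bandwidth each node gets if all active nodes split one link evenly."""
    return link_mbits / active_nodes

for n in (1, 2, 4, 6):
    print(f"{n} active nodes -> ~{per_node_share(n):.0f} Mbit/s each "
          "(when traffic funnels through a single shared link)")
```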
 

Casper42

HP released the 300GB 10K SAS 2.5" drives in mid-December.
Surprised Intel hasn't recognized them yet and included them at least in its literature, if not with a demo unit.
 