IBM Making Nvidia Fermi-filled BladeCenter Server

Status
Not open for further replies.

rhodesar

Distinguished
Aug 31, 2010
5
0
18,510
This is interesting, however I would like to know more about the performance expectations in relation to non-GPU based predecessors.
 

nevertell

Distinguished
Oct 18, 2009
335
0
18,780
Nobody gave a damn when it was hot in a consumer box; now it will sit in an air-conditioned room with the fans running at 100%, and IT IS NOT GOING TO BOTHER ANYBODY.

Except for the dudes with the bills to pay. It's going to bother them.
 

ceteras

Distinguished
Aug 26, 2008
156
0
18,680
I'm impressed. I'd call these avant-garde computing solutions: the latest technology used in a flexible way.
I wouldn't be concerned about heat; the blades have a proper heatsink, and in the future this could be upgraded to cooler versions of these GPUs.
 
Guest
If they can make it cheap enough, this could be a killer. Nvidia's Tesla systems have proven the computational power of these things when deployed correctly, and it's perfect for the form factor and function of a BladeCenter. If they can price it right, Intel and AMD could be in for a fight (they still can't eliminate them, though; these are computational add-ons, and you still need a few proper CPUs to run the show).
 

tommysch

Distinguished
Sep 6, 2008
1,165
0
19,280
I don't see why you are all whining about it being hot... Heat = power. I'm all for efficiency, but that's not a reason not to push the TDP as far as you can. I wouldn't mind having a 500+ watt card or CPU if it's still efficient.

My liquid-cooled Q6600 produces more heat than a Fermi card; it's running at 3.6GHz @ 1.45V @ 50°C under load with 3x 120mm radiators flowing at around 400 L/hr.
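For scale, a quick sanity check on a loop like that, assuming plain water and the 400 L/hr figure above: heat carried by the coolant is Q = ṁ·c_p·ΔT, so each degree the water warms moves roughly 465 W.

```python
# Back-of-the-envelope check on a water-cooling loop: heat carried per
# degree of coolant temperature rise is Q = m_dot * c_p * dT.
FLOW_L_PER_HR = 400.0   # flow rate quoted in the post above
C_P = 4186.0            # J/(kg*K), specific heat of water
RHO = 1.0               # kg/L, density of water (plain water assumed)

m_dot = FLOW_L_PER_HR * RHO / 3600.0    # mass flow, ~0.11 kg/s
watts_per_kelvin = m_dot * C_P          # heat carried per 1 degree C of rise

print(f"~{watts_per_kelvin:.0f} W carried per 1 degree C of coolant rise")
```

In other words, at that flow rate the coolant barely warms up even under a few hundred watts of load; the radiators, not the flow, are the limiting factor.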
 

mavroxur

Distinguished
[citation][nom]rohitbaran[/nom]Yeah, the air conditioning bills will just shoot up![/citation]

And with a denser, more powerful computing solution in your data center, your server count could be decreased, actually saving cost if you look at the bigger picture.
 

IncinX

Distinguished
Nov 10, 2006
80
0
18,630
[citation][nom]nevertell[/nom]Nobody gave a damn when it was hot in a consumer box; now it will sit in an air-conditioned room with the fans running at 100%, and IT IS NOT GOING TO BOTHER ANYBODY. Except for the dudes with the bills to pay. It's going to bother them.[/citation]

But they can save money on heating the room. Oh wait...
 

pocketdrummer

Distinguished
Dec 1, 2007
1,084
30
19,310
[citation][nom]TommySch[/nom]I don't see why you are all whining about it being hot... Heat = power. I'm all for efficiency, but that's not a reason not to push the TDP as far as you can. I wouldn't mind having a 500+ watt card or CPU if it's still efficient. My liquid-cooled Q6600 produces more heat than a Fermi card; it's running at 3.6GHz @ 1.45V @ 50°C under load with 3x 120mm radiators flowing at around 400 L/hr.[/citation]

Some of us are energy-conscious. I'd rather have a low-wattage, cooler, quieter card than one that screams over what I'm playing. You shouldn't NEED to run a complex water-cooling system unless you're overclocking. Even then, I have this i7-920 overclocked to 3.6GHz at 1.175V on air (gotta love the D0 stepping).

Besides, what about the other 80% of the time you're using your computer? Why does it need to be so loud just running the Windows GUI?
 

kelemvor4

Distinguished
Oct 3, 2006
469
0
18,780
Good that a big-name company is doing this, but companies like T-Platforms already manufacture blade servers with two of these GPUs and two Xeon CPUs per blade, providing a far denser and more flexible HPC solution.

There's a lot to be said for support from a company like IBM; on the other hand, the other solutions are head and shoulders more powerful and dense. IMO, IBM needs to step up its HPC offering here.
 

tommysch

Distinguished
Sep 6, 2008
1,165
0
19,280
[citation][nom]pocketdrummer[/nom]Some of us are energy-conscious. I'd rather have a low-wattage, cooler, quieter card than one that screams over what I'm playing. You shouldn't NEED to run a complex water-cooling system unless you're overclocking. Even then, I have this i7-920 overclocked to 3.6GHz at 1.175V on air (gotta love the D0 stepping). Besides, what about the other 80% of the time you're using your computer? Why does it need to be so loud just running the Windows GUI?[/citation]

You know the Q6600 is basically two 65nm E6600s on a single die with a 2.4GHz base clock? I am aware that it is getting extremely old. I'm planning on building a new system around an i930, but I would be looking to get at least 4.0GHz out of it, 4.2GHz if possible. Where I live, electricity is almost free, and it's not wasted, since the same amount of electricity would otherwise be used by a dumb heating device anyway. Except for about 3 months a year, there is no waste.
 

Chris_TC

Distinguished
Jan 29, 2010
101
0
18,680
Hey, I have a really good idea.
Why doesn't somebody come up with a heat-related pseudo-joke? That would be funny as hell. Anybody think of something?
 

Wheat_Thins

Distinguished
Jun 6, 2008
63
0
18,630
How many industrial software suites utilize CUDA anyway? There are hardly any mainstream programs, so unless it's developed in-house, I can't see many companies dying to get their hands on these.
 

BulkZerker

Distinguished
Apr 19, 2010
846
8
18,995
[citation][nom]TommySch[/nom]I don't see why you are all whining about it being hot... Heat = power. I'm all for efficiency, but that's not a reason not to push the TDP as far as you can. I wouldn't mind having a 500+ watt card or CPU if it's still efficient. My liquid-cooled Q6600 produces more heat than a Fermi card; it's running at 3.6GHz @ 1.45V @ 50°C under load with 3x 120mm radiators flowing at around 400 L/hr.[/citation]

Because then you also have to pay to cool the room. That's why the SeaMicro server based on hundreds of Atom processors is honestly a pretty damn good idea.

This server idea may be a new attempt at a brute-force OMGNUMBERCRUNCHING! supercomputer, but TBH I would shoot down the funding for it if the power consumption of that box suddenly doubled our power bill. And since this IS made with the industrial market in mind, power consumption is one of the top three priorities in the initial purchase decision.

Price, performance, operating cost. And this server idea seems to be a little heavy on operating cost unless they are using cherry-picked GPUs out of the runs.
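To put "heavy on operating cost" in rough numbers, here is a hypothetical sketch; the TDP, PUE, and electricity rate below are illustrative assumptions, not figures from IBM or Nvidia.

```python
# Hypothetical yearly power cost of one high-TDP accelerator card.
# All three inputs are assumed for illustration only.
TDP_WATTS = 500.0        # assumed sustained board power
PUE = 2.0                # assumed Power Usage Effectiveness: every watt
                         # of IT load costs roughly another watt of cooling
RATE_USD_PER_KWH = 0.10  # assumed electricity rate
HOURS_PER_YEAR = 24 * 365

kwh_per_year = TDP_WATTS * PUE * HOURS_PER_YEAR / 1000.0
cost_per_year = kwh_per_year * RATE_USD_PER_KWH
print(f"{kwh_per_year:.0f} kWh/year -> ${cost_per_year:.0f}/year per card")
```

Multiply by the number of blades in a chassis and the cooling term starts to dominate quickly, which is exactly the operating-cost concern raised above.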
 