HP Puts 1000 Cores in a Single Rack

Status
Not open for further replies.
Guest
"Specifically, HP claims a 60% performance-per-watt advantage over a cluster of Dell PowerEdge 1955 servers"

Right, compare a blade to a classic 1U or 2U server. Don't compare it to Dell's blade servers, because they'd smoke the HPs eight ways to Sunday.
 
Guest
Due to the efficiency of the power supplies and the solid-state disk drives, I would still think it would beat Dell's blade offerings in terms of performance per watt. But this is really a density play, not necessarily an efficiency play, though it beats just about any rack mount on that front too. With Dell's engineering not pushing the envelope in blade hardware, it may be a while before they can answer this. Just $6,400 for two quad-core Xeon servers? That is also a price point other vendors (IBM, Sun) are going to be loath to compete with. If I were Google or Yahoo, I'd buy rooms full of these and retire all of that desktop hardware they are using as servers.
 

koreberg

Distinguished
Jun 13, 2008
@Thranx

The 1955 is not a rack-mount server; that would be the 1950. Ten 1955s fit in a 7U chassis. It is not Dell's latest product, but it is in fact a blade.

It would be a more equal comparison if they had chosen the Dell M600 or M605, which are the new blade systems. However, there are numerous other reasons to go with HP.
 

recones90

Distinguished
Jun 13, 2008
"Specifically, HP claims a 60% performance-per-watt advantage over a cluster of Dell PowerEdge 1955 servers"

Yeah, compare it against Dell's previous generation of blades to get big numbers. I bet you that HP doesn't do so well against Dell's current generation of blades (the M600).
 

markhahn

Distinguished
Jun 14, 2008
Why do vendors get off on this kind of engineering masturbation? People who are in the market for significant compute farms are simply not interested in paying more for this kind of absurd density. Density, after all, does not improve price/performance, power efficiency, manageability, or peak performance. It's just a number to brag about, and it's not all that impressive anyway (commodity parts can easily put 4 sockets in 1U, and thus 672 cores per rack; such systems are cheap, commoditized without vendor lock-in, and, yes, have more DIMMs per socket and 90%-efficient PSUs).
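Where that 672 figure comes from, as a quick sketch, assuming quad-core sockets and a full 42U rack:

[code]
# Cores per rack (assumptions: full 42U rack, 4 sockets per 1U
# commodity server, 4 cores per socket).
SOCKETS_PER_SERVER = 4
CORES_PER_SOCKET = 4
RACK_UNITS = 42

cores_per_rack = SOCKETS_PER_SERVER * CORES_PER_SOCKET * RACK_UNITS
print(cores_per_rack)  # 672
[/code]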

When I see actual blade installs, I always have to laugh, because it's usually some easily impressed PHB buying a penis substitute, which winds up with one chassis alone in a rack because the machine room can't handle the power density.

Blades: just say no to boutique packaging of commodity parts.
 
G

Guest

Guest
It's all marketing hype,

just like when they claimed you could fit 42 x 1U servers in a 42U rack.

If you've ever tried to cable up one of those babies, you will soon realise that:

A) the cabling doesn't fit;
B) the BTU output is way too high and would cause all of the servers to overheat;
C) if you're using a UPS, there's no way you can deliver enough power to that many servers in a single rack (see the rough numbers after this list);
D) a rack loaded that heavily is nigh-on impossible to move and will put holes in most computer-room floors.
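Rough numbers on point C; the per-server wattage here is my assumption, not a measured figure:

[code]
# Rough rack power and heat math (assumption: ~400 W per loaded 1U server).
SERVERS = 42
WATTS_PER_SERVER = 400                    # assumed, not a measured figure

rack_watts = SERVERS * WATTS_PER_SERVER   # 16,800 W
rack_btu_per_hr = rack_watts * 3.412      # ~57,300 BTU/hr
print(rack_watts, round(rack_btu_per_hr))
[/code]

That is more than most single UPS feeds will deliver, with a serious cooling load on top.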

Sure, it looks nice, but ask them to show you a free-standing, fully loaded rack that is turned ON.

hahahaha


 

razor512

Distinguished
Jun 16, 2007
If needed, the floors can be reinforced, and you can use 32-gauge wire if there is not enough space to fit the standard cables.

While there will be a few more fires from using wire like that, you will be able to show off your new server to your friends.

PS: smaller servers = bad, because someone can easily put on long, loose clothes, steal a server, and walk out with it, then use that server to host thousands of lolcat pictures, which will then be sent to your company.
 
G

Guest

Guest
It's sad that HP can only compare against 1U rack servers, because Dell isn't willing to run the standard power-measurement benchmark on its blade servers. So HP played Dell's own game, measuring performance per watt in its own way rather than with a standard power benchmark, and came out with this: ftp://ftp.compaq.com/pub/products/servers/benchmarks/hp_proliant_bl260_specjbb2005_032808a.pdf
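For anyone wondering how a claim like that is computed, a minimal sketch; the scores and wattages below are placeholders, not figures from HP's PDF:

[code]
# Performance per watt = benchmark throughput / measured wall power.
# The numbers below are illustrative placeholders, NOT HP's or Dell's data.
def perf_per_watt(score: float, avg_watts: float) -> float:
    return score / avg_watts

hp_ppw   = perf_per_watt(score=100_000, avg_watts=250)   # hypothetical
dell_ppw = perf_per_watt(score=100_000, avg_watts=400)   # hypothetical
advantage = (hp_ppw / dell_ppw - 1) * 100
print(f"{advantage:.0f}% performance-per-watt advantage")  # 60%
[/code]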

One correction needs to be made to Tom's article: the server has four cores per socket, or up to eight cores per cut-through server. That's intense.

Another shot at Dell while I'm at it: Dell has two blade server models; HP has nine. That alone is a killer, but then again, HP has been at it for two years longer than Dell.
 
G

Guest

Guest
[citation][nom]alphi[/nom]Its all marketing hype..just like when they claimed you could fit 42 x 1U servers in a 42U rack..if you've ever tried to cable up one of those babies you will soon realise that A) the cabling doesn't fit.B) the BTU output is way too high and would cause all of the servers to overheat.C) if your using UPS theres no way you can deliver enough power to that many servers in a single rack.D) the weight of a rack loaded that much is near on impossible to move and will put holes in most computer room floors.sure it looks nice but ask them to show you a free standing fully loaded rack that is turned ON.hahahaha[/citation]

In a raised-floor datacenter of yesteryear, that's true. If you have side CRACs or a water exchanger, the heat's not a problem. Also, if you actually use Velcro cable ties, or use something smart like FC, 10 GbE uplinks, or InfiniBand, the cabling isn't a problem either. Then, if you use a scalable UPS that can push 36-60 kW into a rack, you can fill a short aisle with these blades. You just have to realize that the customers for this solution have those capabilities. Those who are not willing to update their facilities, but think they can use increased computing density, are not being realistic with themselves, because even modern 1U rack servers will likely pull more power and produce more BTUs than they can handle.
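To put those rack power figures in cooling terms (standard conversions: 1 W = 3.412 BTU/hr, 1 ton of cooling = 12,000 BTU/hr):

[code]
# Cooling load implied by a 36-60 kW rack.
for rack_kw in (36, 60):
    btu_per_hr = rack_kw * 1000 * 3.412
    tons = btu_per_hr / 12_000
    print(f"{rack_kw} kW -> {btu_per_hr:,.0f} BTU/hr -> {tons:.1f} tons of cooling")
[/code]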
 
G

Guest

Guest
I'm thinking that virtualization is the way to go: have fewer, more powerful servers and use VMware to host your multitude of low-demand servers. This eliminates the cabling/power problem and is oodles cheaper.

On another topic, since they are talking about power efficiency: when are they going to combine power supplies and battery backup units? A UPS has to convert power to DC to store in the battery, then back to AC to feed to your computer, which then converts it back down to DC. HP/Dell should have hardware that not only provides battery backup but supplies power directly to the servers, so you don't need all that converting.
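A rough sketch of why that double conversion matters; the efficiency figures are illustrative assumptions, not vendor specs:

[code]
# Each AC<->DC stage loses a few percent (illustrative assumptions).
ups_rectifier = 0.95   # wall AC -> DC onto the battery bus
ups_inverter  = 0.95   # DC -> AC back out of the UPS
server_psu    = 0.90   # AC -> DC inside the server

end_to_end = ups_rectifier * ups_inverter * server_psu
print(f"{end_to_end:.1%} of wall power reaches the server's DC rails")  # ~81.2%
[/code]

Feeding the servers DC straight off the battery bus would skip the inverter and the server PSU's AC input stage.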
 

jjt3hii

Distinguished
May 24, 2007
Every major hardware vendor has made DC-powered servers, storage, and switches for many years; DC blade servers as well.
 
Guest

[citation][nom]alphi[/nom]Its all marketing hype.. just like when they claimed you could fit 42 x 1U servers in a 42U rack.. [...] sure it looks nice but ask them to show you a free standing fully loaded rack that is turned ON. hahahaha[/citation]
It is wonderful stuff; we have run it with no problems.
 
Guest
Pfff. Sun Microsystems does more, and without even trying hard. Their T5240 has 128 threads in 2U, with crypto acceleration built into the chip, 128 GB of RAM, 16 hard disks, and two 10 Gb Ethernet interfaces.

So in one rack:
2,688 threads
336 hard disks
2,688 GB of RAM

http://www.sun.com/servers/coolthreads/t5240/specs.xml
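The rack totals check out if you assume 21 of the 2U boxes fill a 42U rack:

[code]
# Per-box specs from the T5240 page linked above; 21 x 2U fills 42U.
BOXES = 42 // 2        # 21 systems
print(BOXES * 128)     # 2,688 threads
print(BOXES * 16)      # 336 hard disks
print(BOXES * 128)     # 2,688 GB of RAM
[/code]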

threadz, Sun haz dem.
 

tipoo

Distinguished
May 4, 2006
[citation][nom]pogsnet[/nom]Compare that to Roadrunner, how about that?[/citation]
[citation][nom]pogsnet[/nom]Compare it to Roadrunner[/citation]

It's a SERVER; why would you compare it to a supercomputer???
 