IBM z Processors Climb to 5.5 GHz


fudoka711

[citation][nom]nevertell[/nom]Not X86, so don't really jump to conclusions about the performance. Nor can you natively run Crysis...[/citation]

But we all wish it could run Crysis! =D
 

mlopinto2k1

Wouldn't CUDA destroy the performance of these outdated racks? That is, if you replaced every tray with four high-end streaming cards. Just sayin'. Even the PS3 supercomputer the military built would probably crush this thing.
 

freggo

[citation][nom]back_by_demand[/nom]Up to 20% more speed for only 50% more wattage, OMG!!!... Contribute to global warming much? I thought performance per watt was supposed to go up, not down[/citation]

It's like cars, or pretty much any other machine: the higher you push performance, the more disproportionately the required power grows.
Getting a car from 0 to 40? Any small engine can easily do that. Making it go from 160 to 200 mph takes some major hardware upgrades :)
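
There's real physics behind that intuition: at speed, aerodynamic drag dominates, and the power needed to hold a velocity grows roughly with its cube. A back-of-envelope sketch (idealized, ignoring rolling resistance and drivetrain losses):

[code]
# Drag force scales ~ v^2 and power = force * velocity, so the engine
# power needed to hold a steady speed scales roughly as v^3.
def drag_power_ratio(v_old, v_new):
    """How many times more power v_new requires than v_old."""
    return (v_new / v_old) ** 3

print(drag_power_ratio(160, 200))  # ~1.95: roughly 2x the power for 25% more speed
print(drag_power_ratio(40, 80))    # 8.0: even doubling a modest speed costs 8x
[/code]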


 

mlopinto2k1

[citation][nom]freggo[/nom]It's like cars, or pretty much any other machine: the higher you push performance, the more disproportionately the required power grows. Getting a car from 0 to 40? Any small engine can easily do that. Making it go from 160 to 200 mph takes some major hardware upgrades :)[/citation]I understand the logic, but why even bother when there are alternatives? These companies need to combine their tech and make something truly inspiring and amazing. Kind of like when full-color television hit the scene. People were blown away.
 
velocityg4

[citation][nom]mlopinto2k1[/nom]Wouldn't CUDA destroy the performance of these outdated racks? That is, if you replaced every tray with four high-end streaming cards. Just sayin'. Even the PS3 supercomputer the military built would probably crush this thing.[/citation]

That is only for a few specialized tasks. For most computing you need a CPU. Otherwise, why would anyone give a rip about Intel and AMD CPUs? We'd just get an Atom and only worry about which Nvidia or ATI GPU we have.
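
A minimal Python sketch of that distinction (illustrative only, not a benchmark; the shape of the work is the point):

[code]
import numpy as np

# GPU-friendly: the same arithmetic applied to millions of independent
# elements -- work that spreads trivially across thousands of cores.
a = np.random.rand(1_000_000)
b = a * 2.0 + 1.0

# GPU-hostile: each iteration depends on the previous result (a loop-carried
# dependency), so extra cores can't help; a fast general-purpose core wins.
x = 0.0
for v in a[:1000]:
    x = 0.5 * x + v
[/code]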
 

obsama1

[citation][nom]mlopinto2k1[/nom]Wouldn't CUDA destroy the performance of these outdated racks? That is, if you replaced every tray with four high-end streaming cards. Just sayin'. Even the PS3 supercomputer the military built would probably crush this thing.[/citation]

Depends entirely on the task. Also, Kepler doesn't have great CUDA performance.
 

jsrudd

[citation][nom]mlopinto2k1[/nom]Wouldn't CUDA destroy the performance of these outdated racks? That is, if you replaced every tray with four high-end streaming cards. Just sayin'. Even the PS3 supercomputer the military built would probably crush this thing.[/citation]

Even if that were the case, the companies that still use these do it because porting the programs that run on these machines and transitioning the infrastructure would be really expensive.
 

mlopinto2k1

[citation][nom]velocityg4[/nom]That is only for a few specialized tasks. For most computing you need a CPU. Otherwise, why would anyone give a rip about Intel and AMD CPUs? We'd just get an Atom and only worry about which Nvidia or ATI GPU we have.[/citation]I understand, totally... but people do NOT want to code in CUDA. It's a huge barrier. Anything can be thrown at these cards depending on how it is coded. It just takes time and money... IBM is lazy.
 

mlopinto2k1

[citation][nom]jsrudd[/nom]Even if that were the case, the companies that still use these do it because porting the programs that run on these machines and transitioning the infrastructure would be really expensive.[/citation]Agreed. This needs to change. It's ridiculous.
 
Guest
It's 5.5 GHz continuous clock speed, with mainframe service levels and obscene amounts of cache. That's not with cores shut off, burst mode, or nitrogen cooling with a five-minute lifespan; it's pedal-to-the-floor sheer performance in the world's most reliable server. And every processor is crafted in the U.S. of A.

Bravo, IBM.
 

d_kuhn

[citation][nom]mlopinto2k1[/nom]I understand, totally... but people do NOT want to code in CUDA. It's a huge barrier. Anything can be thrown at these cards depending on how it is coded. It just takes time and money... IBM is lazy.[/citation]

You could throw anything at those cards, but much of it would run slower than it does on your $1,000 commodity PC. GPU acceleration is great for certain specific tasks, but at general-purpose CPU-type tasks they would be VERY inefficient. Big Iron is still a player for a lot of reasons; the code base is one, but not the only one... the architectures of those platforms make them good at their market focus areas. Sure, you can cluster smaller servers for things like big DB apps... but by the time you get done putting a comparable cluster together, you may find yourself at a similar price point. $75k isn't a lot of money for an enterprise platform (though I'd be REALLY surprised if you could actually get a z system for the 'base price' and be able to do much with it).
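
To see how the "similar price point" math can play out, here is a toy three-year cost sketch; every number below is an invented placeholder, not a real quote:

[code]
# Hypothetical DIY-cluster cost vs. a mainframe base price.
# All figures are made up for illustration.
nodes = 40
node_cost = 8_000          # assumed price per commodity server
interconnect = 60_000      # assumed low-latency fabric and switches
admin_per_year = 120_000   # assumed ops/admin overhead
cluster_3yr = nodes * node_cost + interconnect + 3 * admin_per_year
print(cluster_3yr)         # 740000 -- the gap to a $75k 'base price' narrows fast
[/code]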
 
[citation][nom]mlopinto2k1[/nom]Wouldn't CUDA destroy the performance of these outdated racks? That is, if you replaced every tray with four high-end streaming cards. Just sayin'. Even the PS3 supercomputer the military built would probably crush this thing.[/citation]

The purpose of a mainframe is to be extremely reliable while performing high-I/O transaction-processing tasks. They are not designed or meant to run simulations.

Mainframes and supercomputers are two different things.
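
For a concrete picture of what "transaction processing" means, here is a minimal sketch using Python and SQLite as a stand-in (mainframes run the likes of CICS and DB2, not this): the defining property is many small updates that must land atomically or not at all.

[code]
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance REAL)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)", [(1, 100.0), (2, 50.0)])

def transfer(conn, src, dst, amount):
    # 'with conn' wraps both updates in one transaction: commit on success,
    # rollback on error, so the debit and credit succeed or fail together.
    with conn:
        conn.execute("UPDATE accounts SET balance = balance - ? WHERE id = ?",
                     (amount, src))
        conn.execute("UPDATE accounts SET balance = balance + ? WHERE id = ?",
                     (amount, dst))

transfer(conn, 1, 2, 25.0)
print(conn.execute("SELECT id, balance FROM accounts").fetchall())
# [(1, 75.0), (2, 75.0)]
[/code]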
 
IJustWantToPost45 (Guest)
mlopinto2k1:

"It's blatantly obvious."

It's blatantly obvious that x86 is vastly inferior to PowerPC and POWER; yes, I agree with that ;-) However, what is not obvious is why Apple didn't strike a deal with IBM to use their POWER CPUs instead of off-the-shelf x86 crap. Whatever happened to "Think Different," Apple???

Go IBM for being one of the last companies to actually "Think Different".
 

mlopinto2k1

[citation][nom]IJustWantToPost45[/nom]mlopinto2k1: "It's blatantly obvious." It's blatantly obvious that x86 is vastly inferior to PowerPC and POWER; yes, I agree with that ;-) However, what is not obvious is why Apple didn't strike a deal with IBM to use their POWER CPUs instead of off-the-shelf x86 crap. Whatever happened to "Think Different," Apple??? Go IBM for being one of the last companies to actually "Think Different".[/citation]Yeah, go IBM for outsourcing jobs to non-Americans and throwing people out on their asses after devoting half their lives to the company. Give me a break. Off-the-shelf x86 crap? What are you using to reply to these comments? Apple's A6? Guess what they use to design the operating systems? Apple is a leader in the technology sector and has contributed much.
 
[citation][nom]IJustWantToPost45[/nom]mlopinto2k1: "It's blatantly obvious." It's blatantly obvious that x86 is vastly inferior to PowerPC and POWER; yes, I agree with that ;-) However, what is not obvious is why Apple didn't strike a deal with IBM to use their POWER CPUs instead of off-the-shelf x86 crap. Whatever happened to "Think Different," Apple??? Go IBM for being one of the last companies to actually "Think Different".[/citation]

You people don't seem to realize that there is no indication of performance in this article, only clock frequency. For all we know, it might have the performance of an i3 or of a six-core i7.

Also, Bulldozer can clock a lot higher at the same power consumption. Whether or not it would be similar performance, we don't know. If I had to guess, no, but it doesn't matter what we guess if we don't know for sure.

Also, guys, IBM probably uses POWER and such because they don't have an x86 license, and even if they did, they probably wouldn't want to change architectures like that. I'm also quite sure that back when Apple switched, their Motorola PowerPC CPUs were weaker than the x86 CPUs of the time, not stronger.
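
That clock-versus-performance point boils down to a rule of thumb: single-thread throughput is roughly IPC times frequency. A toy comparison with invented IPC numbers:

[code]
# Rough model: instructions per second ~ IPC * clock. The IPC values here
# are invented purely to show why GHz alone tells you little.
def relative_perf(ipc, ghz):
    return ipc * ghz

print(relative_perf(1.2, 5.5))  # hypothetical 5.5 GHz core -> 6.6
print(relative_perf(2.0, 3.5))  # hypothetical 3.5 GHz core -> 7.0 (faster)
[/code]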
 

Dangi

[citation][nom]mlopinto2k1[/nom]Wouldn't CUDA destroy the performance of these outdated racks? That is, if you replaced every tray with four high-end streaming cards. Just sayin'. Even the PS3 supercomputer the military built would probably crush this thing.[/citation]

Though more complicated to implement, an FPGA would destroy CUDA performance.

CUDA is very, very good at a few things; an FPGA is incredibly good at one thing and terrible at everything else; for the rest, you have a CPU.
 