Nvidia Owning Top Supercomputers List With GPUs


MrKKBB

Distinguished
Jul 7, 2008
44
0
18,530
[citation][nom]dragonsqrrl[/nom]I believe Fermi based Tesla and Quadro GPU's do enable ECC memory.[/citation]
Quite right, time to get myself a Tesla C2050 to play with.
 

iamtheking123

Distinguished
Sep 2, 2010
410
0
18,780
Grid computing (decentralization) is where it's at (think XXX@Home). You don't need to be a big shot researcher to have access to massive amounts of computing power when you can send your request out to a thousand normal servers and let them crunch the data and send it back to you.
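
To make that concrete, here's a minimal scatter/gather sketch in C, using local threads as stand-ins for the remote volunteer machines. Everything here is illustrative, not code from any real @Home project:

Code:
/* Scatter a workload across "workers", then gather the partial results.
 * Threads stand in for remote machines in a real grid. */
#include <stdio.h>
#include <pthread.h>

#define WORKERS 8
#define N       (1 << 20)

static double data[N];
static double partial[WORKERS];

static void *crunch(void *arg) {
    long id = (long)arg;
    long chunk = N / WORKERS;
    double sum = 0.0;
    /* each worker processes only its assigned slice */
    for (long i = id * chunk; i < (id + 1) * chunk; ++i)
        sum += data[i] * data[i];
    partial[id] = sum;
    return NULL;
}

int main(void) {
    pthread_t tid[WORKERS];
    for (long i = 0; i < N; ++i) data[i] = 1.0;  /* dummy workload */
    for (long w = 0; w < WORKERS; ++w)           /* scatter */
        pthread_create(&tid[w], NULL, crunch, (void *)w);
    double total = 0.0;
    for (long w = 0; w < WORKERS; ++w) {         /* gather */
        pthread_join(tid[w], NULL);
        total += partial[w];
    }
    printf("total = %f\n", total);
    return 0;
}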
 

dragonsqrrl

Distinguished
Nov 19, 2009
1,280
0
19,290
[citation][nom]Fokissed[/nom]Links:MPCTHGBT[/citation]
lol... I'm not certain what you're trying to say. Maybe you should reread my original post. It doesn't really make sense to cite sources that basically support my original argument.
 

gm0n3y

Distinguished
Mar 13, 2006
3,441
0
20,780
[citation][nom]iamtheking123[/nom]Grid computing (decentralization) is where it's at (think XXX@Home). You don't need to be a big shot researcher to have access to massive amounts of computing power when you can send your request out to a thousand normal servers and let them crunch the data and send it back to you.[/citation]
If there were a grid computing system that paid people for the packets they process, I think that could be the future of supercomputing. They would just have to pay enough to cover power costs plus a small amount on top of that. Companies and universities could then install this software, set to run outside of 9-5 hours, to help bring in a bit of extra cash, assuming the extra wear on the machines was worth it.
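
As a rough sketch of the break-even math, with made-up numbers (say 300 W under load and $0.12/kWh; both figures are just assumptions for illustration):

Code:
/* Break-even payout for donating compute time, with hypothetical numbers. */
#include <stdio.h>

int main(void) {
    double watts = 300.0;       /* assumed draw under load */
    double price_kwh = 0.12;    /* assumed electricity price, $/kWh */
    double margin = 1.25;       /* pay 25% on top of power cost */
    double cost_per_hour = watts / 1000.0 * price_kwh;
    printf("power cost:  $%.4f/hour\n", cost_per_hour);
    printf("fair payout: $%.4f/hour\n", cost_per_hour * margin);
    return 0;
}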
 

nevertell

Distinguished
Oct 18, 2009
335
0
18,780
[citation][nom]sunflier[/nom]"having 7,168 GPUs"Are those in one giant daisy-chained SLI mode??[/citation]

They use CUDA, and CUDA doesn't need an SLI bridge. CUDA is a lot different from SLI: with SLI, two 1 GB cards give you a total of just 1 GB of usable memory because both hold the same data, while with CUDA each GPU can hold and work on its own chunk of the data. Most of the CPUs in these machines are fully loaded just distributing data to the different GPUs.
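
A minimal CUDA sketch of that point: the host hands each GPU its own slice of the data over PCIe, no SLI bridge anywhere. This is illustrative only (not the actual code these clusters run), with error checking omitted for brevity:

Code:
#include <cstdio>
#include <cuda_runtime.h>

__global__ void scale(float *x, int n, float k) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] *= k;
}

int main() {
    const int n = 1 << 20;
    float *host = new float[n];
    for (int i = 0; i < n; ++i) host[i] = 1.0f;

    int devices = 0;
    cudaGetDeviceCount(&devices);
    if (devices < 1) return 1;

    int chunk = n / devices;
    for (int d = 0; d < devices; ++d) {
        cudaSetDevice(d);              /* CPU hands each GPU its own slice */
        float *dev;
        cudaMalloc(&dev, chunk * sizeof(float));
        cudaMemcpy(dev, host + d * chunk, chunk * sizeof(float),
                   cudaMemcpyHostToDevice);
        scale<<<(chunk + 255) / 256, 256>>>(dev, chunk, 2.0f);
        cudaMemcpy(host + d * chunk, dev, chunk * sizeof(float),
                   cudaMemcpyDeviceToHost);
        cudaFree(dev);
    }
    printf("host[0] = %f\n", host[0]);
    delete[] host;
    return 0;
}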
 

dragonsqrrl

Distinguished
Nov 19, 2009
1,280
0
19,290
[citation][nom]alexapi32[/nom]What? Only 1.34 MW? /sarcasm.How much power does the "Tianhe-1A" need than?[/citation]
You have to think about the power consumption within the context of supercomputing. In that respect, 1.34 MW is very low considering its reported computing performance. Even the 4.04 MW Tianhe-1A is considered low when you take into account that it's a 2.507 petaflop system. Apparently a similar 2.5 petaflop system built entirely with CPUs would consume in the area of 12 MW.
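
Running the reported numbers (treat them as approximate) shows why: petaflops per megawatt works out to gigaflops per watt, and the GPU-accelerated machine comes out roughly three times more efficient:

Code:
/* Flops-per-watt arithmetic for the figures quoted in this thread. */
#include <stdio.h>

int main(void) {
    double tianhe_pflops = 2.507, tianhe_mw = 4.04;  /* GPU-accelerated */
    double cpu_pflops    = 2.5,   cpu_mw    = 12.0;  /* CPU-only estimate */
    /* 1 PFLOPS / 1 MW = 1 GFLOPS per watt */
    printf("Tianhe-1A: %.2f GFLOPS/W\n", tianhe_pflops / tianhe_mw);
    printf("CPU-only:  %.2f GFLOPS/W\n", cpu_pflops / cpu_mw);
    return 0;
}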
 

thechief73

Distinguished
Feb 8, 2010
1,126
0
19,460
Didn't Tom's report this just a few weeks ago? Or did I accidentally read it somewhere else? Well, either way, it's really cool how they construct these supercomputers using GPUs and make them so efficient.
 

dragonsqrrl

Distinguished
Nov 19, 2009
1,280
0
19,290
[citation][nom]Fokissed[/nom]Overall, the Radeons show the strongest in Crysis.[/citation]
"I can't recall the HD5870 outperforming the GTX480 since the initial release drivers." Again, I don't understand what you're saying. You're contradicting yourself. It's like you're trying to disagree with me, by presenting evidence that supports my argument.

Every link you've provided, in this case repeatedly, cites reviews from the debut of the GTX480, running on its "initial release drivers", and I've already stated that this is the only time I can recall the HD5870 outperforming the GTX480 in this particular benchmark. And even then, the HD5870's performance advantage isn't as large as I assumed; the two cards are basically even.

So basically you're showing me old performance figures, and my original comment was based on current figures. So again, this is what I've been trying to ask: can you show me recent benchmarks that can back up what scrumworks said? "Mmm, actually HD5870 beats GTX480 in Crysis which is kind of hilarious."
 

gm0n3y

Distinguished
Mar 13, 2006
3,441
0
20,780
[citation][nom]alexapi32[/nom]What? Only 1.34 MW? /sarcasm.How much power does the "Tianhe-1A" need than?[/citation]
1.21 gigawatts? Great Scott!
 

Fokissed

Distinguished
Feb 10, 2010
392
0
18,810
[citation][nom]dragonsqrrl[/nom]"I can't recall the HD5870 outperforming the GTX480 since the initial release drivers." Again, I don't understand what you're saying. You're contradicting yourself. It's like you're trying to disagree with me, by presenting evidence that supports my argument. Every link you've provided, in this case repeatedly, cites reviews from the debut of the GTX480, running on its "initial release drivers", and I've already stated that this is the only time I can recall the HD5870 outperforming the GTX480 in this particular benchmark. And even then, the HD5870 performance advantage isn't as large as I assumed, the two cards are basically even. So basically you're showing me old performance figures, and my original comment was based on current figures. So again, this is what I've been trying to ask: can you show me recent benchmarks that can back up what scrumworks said? "Mmm, actually HD5870 beats GTX480 in Crysis which is kind of hilarious."[/citation]
At release the 480 was 12.56% faster than the 5870 at 1920x1200 in Warhead. Now the 480 is 10.47% faster in the same benchmark and settings. The 480 gained 13.17% in speed while the 5870 gained 15.33% over the same period. Assuming the same holds for the original Crysis, the 5870 would still be faster than the 480. And the 5870's gains weren't coming from immature release drivers, since it had been out six months before the 480, while the 480 was improving on top of its release drivers.

"Can you show me recent benchmarks that can back up what scrumworks said?"
Can you prove him wrong? The link you provided only proves that the 5870 improved more than the 480, and since the 5870 was faster in Crysis to begin with, I don't see why you assume that things are different in Crysis than in Warhead.
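
As a quick check that the percentages above hang together (figures as quoted; rounding aside):

Code:
/* If the 480 led by 12.56% at launch and the cards then gained 13.17%
 * and 15.33% respectively, the implied current lead is
 * 1.1256 * 1.1317 / 1.1533 - 1. */
#include <stdio.h>

int main(void) {
    double lead_at_release = 1.1256;  /* 480 over 5870 at launch */
    double gain_480        = 1.1317;  /* 480's driver gains since */
    double gain_5870       = 1.1533;  /* 5870's gains over the same span */
    double lead_now = lead_at_release * gain_480 / gain_5870;
    /* prints ~10.45%, matching the quoted 10.47% within rounding */
    printf("implied current lead: %.2f%%\n", (lead_now - 1.0) * 100.0);
    return 0;
}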
 

nielnield

Distinguished
Dec 4, 2009
111
0
18,680
[citation][nom]saint19[/nom]Guys!!, Do you really thing that those supercomputer works with GTX series? that's very funny.[/citation]

Well, if I'm correct, China's Tianhe-1 used to have ATI 4870s, so GTX could be a possibility lol.
 

Fokissed

Distinguished
Feb 10, 2010
392
0
18,810
[citation][nom]gm0n3y[/nom]if nvidia can pull that off with a relatively low power consumption, I'd like to see what ATI could do (assuming they ever provide proper support for super-computing).[/citation]
Assuming the 5970 gets 4640 GFLOPS, it would take 567 cards and consume 0.167 MW, but that's not including any overhead.

Edit: There seems to be a quirk with the Radeon vs. GeForce cards. The 5870 has almost twice the compute power of the 480 (and they both support FMA).
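
Checking that arithmetic, assuming the 5970's official ~294 W board power (the wattage is my assumption; the GFLOPS figure is from above):

Code:
/* Peak theoretical throughput and power for a stack of HD 5970s. */
#include <stdio.h>

int main(void) {
    int    cards  = 567;
    double gflops = 4640.0;   /* per 5970, as assumed above */
    double watts  = 294.0;    /* 5970 TDP, an assumption here */
    printf("total: %.2f PFLOPS\n", cards * gflops / 1e6);  /* ~2.63 */
    printf("power: %.3f MW\n",     cards * watts  / 1e6);  /* ~0.167 */
    return 0;
}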
 

climber

Distinguished
Feb 26, 2009
325
0
18,780
[citation][nom]MrKKBB[/nom]I think AMD really dropped the ball with this one. They have great GPU products, but they let NVIDIA's CUDA take foot hold while largely depending on OpenCL. It will take some time before OpenCL catches up to CUDA; there are a number of compilers that have CUDA optimizations (e.g. Portland). With that said, the one thing that GPUs are still lacking is error checking in their memmory. There are many HPC applications that require this. I look forward to the day that ECC comes to GPUs. My 2.5 cents.[/citation]

Nvidia's Quadro 6000 and, I believe, their Tesla 2050/60 have ECC memory, but as we all know these cards are expensive. Still, they're worth it in scientific computing or financial analysis, where single-bit errors can add up to some seriously wrong conclusions.
 

knowom

Distinguished
Jan 28, 2006
782
0
18,990
A coup de grâce for Nvidia, and people still spew absolute nonsense they've been brainwashed into believing by SemiAccurate.

Anyway, a website with a name like that isn't very credible, particularly what Charlie posts; he's flat-out biased.

Nvidia's supercomputer has more compute at lower power consumption by a large margin. Hate them or not, that isn't changing, and things will continue further in that direction.
 
Guest

Guest
So, do they perform better than traditional supercomputers in non-synthetic benchmarks? Nobody builds a supercomputer to run the Linpack benchmark (except maybe China).
 

nebun

Distinguished
Oct 20, 2008
2,840
0
20,810
[citation][nom]yao[/nom]any response from intel?[/citation]
Intel is not able to compete with the computing power of GPGPU... they have tried to make video cards and they are not very good at it... CPUs are what they are good at.
 