AMD-powered Jaguar is 'Fastest Supercomputer'

[citation][nom]MustWarnOthers[/nom]To be honest, I doubt even this computer can run Crysis effectively. Judging by the fact that a 3 year old game still can't run at max settings on even some of the highest end consumer hardware, it's pretty obvious it's just a poorly coded piece of shit.[/citation]

Inefficient coding is one aspect. The main reason PCs still can't run it is that the game is still the best-looking and most graphically intensive game ever created. Crysis 2 with CryEngine 3 will actually be a downgrade so consoles can play it. And technically, the new DX11 ATI video cards CAN max out Crysis. A single ATI 5870 can play Crysis at 1920x1080 with everything set to Very High.
 
Guest
Can it play Crysis though?

No, that joke never gets old. Hahahahahaha
 

dtm4trix

Distinguished
[citation][nom]socrates047[/nom]this just sounds awsome, hopefully it is used for the good of humanity. [/citation]

I think they use these things to run nuclear explosion simulations to keep the US stockpile of weapons up to date.
 

Ehsan w

Distinguished
[citation][nom]BlueScreenDeath[/nom]Not efficient coding is one aspect. The main reason why PCs still can't run it is the game is still the best looking and the most graphically intensive game ever created. Crysis 2 with the Cryengine 3 will actually be a downgrade so consoles can play it. And technically, the new dx11 ATI video cards CAN max out Crysis. A single ATI 5870 can play Crysis at 1920x1080 with everything set to Very High.[/citation]

That's really nice, but my laptop with an Nvidia 9600M GT can play it at that res.
The question is: how many fps does it get?
 

mr_tuel

Distinguished
"The whole system has 300TB of memory and 10PB of hard disc space"

2 GB per core × 224,162 cores = 448.324 TB of memory. Not sure what the actual amount is, but that is what the given data actually adds up to.

Good for AMD and Cray. I am interested in the kind of models and simulations that this beast runs...

Makes my i7 look like a 486 lol!!!
 
Guest
[citation][nom]zingam[/nom]4004???[/citation]

Basic gradeschool math fail:

2 GB × 224,162 = 448,324 GB

1 TB = 1,024 GB, thus 448,324 GB = 437.816406 TB

(Yes, he was wrong too, but didn't fail like you did)
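
For anyone who wants to check both figures in this sub-thread, here is a minimal sketch (Python, purely illustrative) of the same arithmetic, assuming the article's 2 GB per core and 224,162 cores. The only difference between the two results above is whether a terabyte is taken as 1,000 GB or 1,024 GB.

```python
# Back-of-envelope check of the memory figures discussed above.
# Assumes the article's numbers: 2 GB per core, 224,162 cores.
cores = 224_162
gb_per_core = 2

total_gb = cores * gb_per_core      # 448,324 GB
tb_decimal = total_gb / 1_000       # 1 TB taken as 1,000 GB
tb_binary = total_gb / 1_024        # 1 TB taken as 1,024 GB

print(f"Total:   {total_gb:,} GB")
print(f"Decimal: {tb_decimal:.3f} TB")
print(f"Binary:  {tb_binary:.3f} TB")
```

Either way, the total lands well above the 300 TB quoted in the article, so either the per-core figure or the article's total is off.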
 

crabsncancer

Distinguished
May 19, 2008
31
0
18,530
Realistically speaking, a computer like this would be a great backend SQL host for a large e-commerce site such as Amazon or Google. Instead of having a host of thousands of load-balanced microcomputers, we could start going back to one large macro/supercomputer for serving.

Also, something like this could be used for virtual computers: one supercomputer hosting virtual spaces. This would be infinitely more practical than how IT runs today... our company has over 300 servers. We have more than 10 servers dedicated to Exchange alone!
 

amdgamer666

Distinguished
[citation][nom]andboomer[/nom]Basic gradeschool math fail:2GB*224,162 = 448,324GB1 TB = 1,024 GB thus 448,324GB = 437.816406 TB(Yes, he was wrong too, but didn't fail like you did)[/citation]

And you fail the most. He was clearly referencing the Intel 486 and was saying that it made his i7 look like an Intel 4004.
 

XD_dued

Distinguished
[citation][nom]mathiasschnell[/nom]"petaflops per second"Redundant. FLOPS = FLoating point Operations Per Second.[/citation]

No no silly, it's accelerating XD
 

Ehsan w

Distinguished
[citation][nom]ajcroteau[/nom]They probably cheaped out on the video card with an Intel GMA 9000 POS chipset... or something similar[/citation]

They don't use a monitor; it's more like an 8-bit screen that gives numbers, like on cheap calculators.
 

paradiddle

Distinguished
Well, the only reason this was done with AMD chips is that they were free. Intel actually sells its chips and wasn't willing to donate them for this project.
 

Raidur

Distinguished
@ Zingam

Both! Crysis still has the best all-around graphics out there if you ask me; the detail in that game is mind-blowing. BUT it could be optimized better. We will be seeing games with better graphics that run much better very soon.

Don't forget Crysis came out in 2007!

@article

Good god AMD!

This just in: Intel's 1.5 billion is spent! =P
 

ohim

Distinguished
[citation][nom]mr_tuel[/nom]"The whole system has 300TB of memory and 10PB of hard disc space"2GB per core*224,162 cores=448.324 TB of memory. Not sure what the actual amount is, but that is what the given data actually adds up to.Good for AMD and Cray. I am interested in the kind of models and simulations that this beast runs...Makes my i7 look like a 486 lol!!![/citation]

Not so long ago there was an article about the Chinese supercomputer that is a hybrid of Intel CPUs and AMD GPUs, and compared to a household PC the figures were something like this: what that supercomputer does in 1 second would take a normal PC 160 years to compute. So, all things said, the fastest supercomputer in the world would probably do more calculations in 1 second than your i7 could have done since the time of Caesar till now ;)
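
As a rough sanity check on comparisons like this, here is a hedged back-of-envelope sketch. It assumes Jaguar's reported ~1.759 petaflops Linpack figure and guesses roughly 45 gigaflops of peak throughput for a 2009-era quad-core i7; both numbers are assumptions for illustration only, and real workloads behave very differently.

```python
# Rough comparison: how long would a desktop i7 need to match
# one second of Jaguar's work? Both ratings are illustrative assumptions.
jaguar_flops = 1.759e15       # ~1.759 petaflops (reported Linpack figure)
desktop_i7_flops = 45e9       # ~45 gigaflops, a guess for a 2009 quad-core i7

ratio = jaguar_flops / desktop_i7_flops
hours_on_i7 = ratio / 3600    # desktop hours to match 1 Jaguar-second

print(f"Speed ratio: ~{ratio:,.0f}x")
print(f"One Jaguar-second is roughly {hours_on_i7:.0f} hours of i7 time")
```

On these assumed figures the gap works out to hours rather than centuries per second of supercomputer time, though the exact ratio depends entirely on which machines and which benchmark you pick.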
 

mapesdhs

Distinguished
Some relevant info...

Many scientific apps do not scale well beyond 32 CPUs/cores or so, I was told during my training course for Cray systems. It depends on how well a task can be parallelised. Sometimes a single shared-memory system like Altix is far more effective, when data access patterns are very mixed and complicated between nodes and RAM; other times a cluster makes more sense, when a task can be easily split into smaller units. There are lots of other issues as well - cache/RAM behaviour, data I/O requirements, etc. (e.g. ANSYS runs often need massive amounts of RAM but not many CPUs).
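
That 32-cores-or-so observation is essentially Amdahl's law: once the serial (non-parallelisable) fraction of a code is non-trivial, extra cores stop helping. A minimal illustrative sketch in Python, with made-up serial fractions rather than measured ones:

```python
# Amdahl's law: speedup = 1 / (serial + (1 - serial) / n_cores)
# The serial fractions below are made-up examples, not measurements.
def amdahl_speedup(serial_fraction: float, n_cores: int) -> float:
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_cores)

for serial in (0.01, 0.05, 0.10):            # 1%, 5%, 10% serial work
    for cores in (8, 32, 1024, 224_162):     # up to Jaguar-scale core counts
        print(f"serial={serial:.0%}  cores={cores:>7,}  "
              f"speedup={amdahl_speedup(serial, cores):7.1f}x")
```

Even with only 5% serial work, 224,162 cores buy roughly the same speedup as a few hundred, which is one reason a machine like this tends to be carved up into many independent jobs rather than handed to a single code.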

Some tasks map better onto different architectures, such as vector machines. It varies.

It's much more likely that the Jaguar system is used to run multiple different jobs at the same time, partitioned into numerous separate OS instances (some variant of Linux, I expect).

Typical applications include cosmology, weather modelling, QCD, CFD, GIS and, yes, modelling nuke explosions, though other common tasks for that programme include things such as modelling corrosion, wear and tear, etc. - what happens to the casing after 30 years? Is it still reliable? Problems such as how to design the electrical systems so they can withstand the radiation, circuit redundancy, simulation, that sort of thing. Modelling a bang is a small part of the whole approach.

In reality, 80% efficiency on systems such as this is a solid achievement, if it's possible at all. Usually, though, the logical approach is not to even try to run codes which would be better suited to a different architecture, or just to use the machine to run multiple different tasks.

I've always had a soft spot for shared-memory systems (because they can handle just about anything), but it's hard to scale them beyond a few thousand CPUs, and cost-wise they're not the best choice for tasks which can be easily split up, such as rendering, unless custom software is written to exploit the architecture properly (which is what ILM used to do with its 32-CPU Origin2000 systems). Still, it's no less fun watching a 32-CPU system doing a Blender render. :D

I doubt Jaguar's top spot will last long though.

Also, look out for SGI's new UV system being announced soon (AFAIK it's an i7 Xeon NUMA shared-memory design - I hope so anyway), and meanwhile we should start seeing large Nehalem clusters appearing, which ought to boost performance by a huge margin given the big improvements over the older Xeon series.

Ian.

 