Phenom II On Ice: AM3 Overclocked With LN2

At a speed of 5.6 GHz, we were able to run SuperPi 1M in 13.000 seconds. That is more than 89% faster than the stock run at 24.609 seconds.
Wow, it takes 5.6 GHz to hit 13 s in SuperPi 1M? It only takes 3.7 GHz on an air-cooled Q6600 to get 13 s. I guess AMD needs an LN2 setup just to compete with air-cooled Intel.
 
accessgranted, actually we're both right; as cangelini implied,
you're referring to performance, as the article does (and thus
the figures were indeed wrong) whereas I always think in terms of
time reduction (where I'm right). C'est la vie. 😀


Ian.

 
Back to the article though...

Something about AMD's new releases seems rather odd to me. In all
the forums I've read, I've seen a heck of a lot of people saying
they were waiting for AM3 in order to do a new build, whereas
AMD's comments seem to suggest they're only thinking in terms of
the upgrade market. I'm certainly very surprised to see there is
no 2.8 or 3GHz AM3 CPU; seems like a glaring omission IMO. Given
AMD's better pricing, I would have thought such a part would be
competitive right now.

Are there people here who were planning to do a new build based
on AM3? What CPU were you thinking of? Top-of-the-line?

I can't really see the benefits of AM3/DDR3 shining through if
the clock speeds are only up to 2.6, while the tri-core doesn't
help for my own target task (video encoding).

Pity, I was looking forward to seeing how a 3GHz AM3 would stack
up against an i7 920, especially re overclocking. Ah well.

Ian.

 
[citation][nom]megabuster[/nom]Wow it takes 5.6Ghz to hit 13s SuperPi 1M? It only takes 3.7Ghz on Q6600 air cooled to get 13s. I guess AMD needs LN2 set up just to compete with aircooled Intel.[/citation]
I don't understand why you all like to compare CPUs in synthetic benchmarks. I don't really care if a Q6600 is 3 times better than any given AMD CPU in 3DMark or SuperPi; I look at real applications, say Crysis if I'm a gamer, and there the difference is quite minimal. For example, people tell me one CPU is far better than another, I look at a chart and see CPU X in 3rd place and CPU Y in 7th place, and then I look at the results... an impressive 5-10 fps difference... wow, such a big difference.
 
[citation][nom]cadder[/nom]Interesting to see how the LN2 is actually applied.(But my E8500 on air runs superpi 1M in 12.4 sec.)[/citation]
I guess you only run SuperPi on your PC and feel good about it :)
 
But wait, I forgot about this... seems that I read somewhere that SuperPi gives very different results on AMD processors vs. Intel, i.e. it runs faster on an Intel processor. That might explain the disparity between a 3.8GHz Intel and a 5.8GHz AMD. I would like to know how to compare SuperPi results between AMD and Intel, because I have a couple of old AMD computers here that I would like to do meaningful comparisons with.

(Actually I run AutoCAD and Revit on the computer and it does real well with them.)
 
Looks good for AMD again; they have a REAL good triple-core and quad-core CPU for a good low price! Now they just need to get it together and make a 32nm CPU to compete with the Core i7... :) Good work though!
 
After reading the article, which is pretty cool, it occurs to me that rather than messing around with all of the insulation to protect the motherboard and components from condensation, it would be easier to just put the whole board inside a polyethylene bag (with just the top of the pot exposed through a taped joint in the bag), then run a low-pressure nitrogen or helium purge through the bag to remove all of the air from inside. Then, when cooling the CPU, there would be no air surrounding the board and thus no condensation. You could then use the bag and purge system to test any board without the need for all of the prep work. You could probably even buy one off the shelf, as they are used for welding exotic materials in an inert atmosphere (although this could be over the top!)
 
I've been waiting to upgrade from Socket 939 to AM3. They supposedly launched Monday, though I have yet to see any parts listed. Is the issue with DDR3 fixable with a BIOS update, or is it physical, meaning no AM3 parts have actually shipped except to review sites? Is there an actual ETA?
 
[citation][nom]JustPlainJef[/nom]No, AccessGranted, you are wrong. If the first one benched at 28 minutes, and it ran OC'ed in 14 minutes, that's a 50% increase. Since it ran in 16:36, that's less than a 50% performance increase. Same with the second one. 18.8 / 2 is 9.4. Benchmark finished in 10.6, which is less than a 50% increase. You are doing the math backwards as you always compare to the original number. 10.6/18.8 = 56% of the original or a 44% decrease in time.[/citation]

Wow, some people need to go back to elementary school math class. Let's say it takes you ten minutes to move a pile of bricks from point A to point B to start. Through some miracle (maybe you've been working out?) you manage to find a way to move twice as many bricks from point A to point B in the same amount of time. You've increased your productivity by 2x, or 100%. That is THE SAME THING as saying YOU CAN MOVE THE SAME AMOUNT OF BRICKS from A to B IN HALF THE TIME! In other words, if it takes me HALF the TIME to move the SAME AMOUNT of bricks from point A to point B, I have ALSO increased productivity by 2x or 100%, because I could also move TWICE the AMOUNT of bricks in the SAME AMOUNT OF TIME!!!!

If you can do TWICE the amount of work in the SAME AMOUNT OF TIME you've increased your performance by 100% (you are now twice as fast at doing the same task). On the same note, if it takes you HALF the time to do THE SAME AMOUNT OF WORK you have also increased your performance by 100% (you are twice as fast). They are the same.

If you stick with the logic of 50% reduction in time = 50% improvement in productivity it would have to take ZERO time to accomplish the task to get 100% (or 2x) improvement!!!!
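
To put actual numbers on it, here's a quick sketch using the SuperPi times from the article (24.609 s stock, 13.000 s at 5.6 GHz); the helper functions are just mine, for illustration:

[code]
# Two ways of expressing the same improvement, using the SuperPi
# times from the article: 24.609 s at stock, 13.000 s at 5.6 GHz.

def performance_gain(old_time, new_time):
    # How much faster the new run is, as a percentage.
    return (old_time / new_time - 1) * 100

def time_reduction(old_time, new_time):
    # How much of the original time was shaved off, as a percentage.
    return (1 - new_time / old_time) * 100

stock, ln2 = 24.609, 13.000
print(f"performance gain: {performance_gain(stock, ln2):.1f}%")  # ~89.3%
print(f"time reduction:   {time_reduction(stock, ln2):.1f}%")    # ~47.2%

# A 50% time reduction is a 100% performance gain (twice as fast);
# only a 100% time reduction (zero time) would make the gain infinite.
[/code]

Same run, two different percentages, which is why both sides of this argument keep talking past each other.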
 
Doh, and I should go back to whatever class that was where they told you to always read the instructions before you start. Just realized I was at the BEGINNING of the comments when I posted! And no way to edit/delete your own posts??? Oops.
 
I wouldn't mind if you guys could do the testing with air- and water-cooled solutions to see where you can get a CPU/GPU to clock, since your average Joe is not going to have access to liquid nitrogen. This is great for seeing where a processor can possibly get to, but it always comes down to how well it's cooled. So I believe the real test is air and water solutions; maybe we need an update on those, as it's been a while, especially on water cooling, which is starting to grow in popularity. Just my 2 cents' worth; great article, by the way.
 
I just wish they'd all quit screwing around and break out those 12/24/48/96/128 core setups on us rather than bleed us out for years with pitiful increments when they have the tech to do so much more!

BTW, before anyone says it... Don't Say They Don't or Can't!!!
(((Grin)))
 
Trouble is it wouldn't help much in the first instance. The vast
majority of apps are not written to utilise multiple cores. Even
games are barely out of the starting block with only a few usefully
benefiting from more than 2 cores, which is why one will often
get better results from using a highly oc'd E8400 than various
quad-cores. In some cases the very way the gfx system works holds
back potential speed improvements (the sw API side of things I
mean).

New Scientist reported some time ago that the industry has a more
general problem atm with a major lack of talent in parallel
programming. A legacy of a market that evolved based on marketing
its products on a simple number (MHz) with no need for parallel
coding skills has created an industry base of programmers who
have little experience in such things.

So yes, right now I'm sure Intel and even AMD could churn out,
say, a 32-core chip if they wanted to, but what would be the point?
The speedups for most applications would be pretty woeful (think
of how many audio apps only use 1 or 2 cores at present, and even
some of the video encoding apps are not written to properly exploit
multiple cores atm).
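
That "woeful speedup" point is essentially Amdahl's law. Here's a
quick sketch, where the parallel fractions are illustrative
guesses rather than measurements of any real application:

[code]
# Amdahl's law: speedup on n cores when a fraction p of the work
# can run in parallel and the rest stays serial.
def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

# Illustrative parallel fractions only.
for p in (0.25, 0.50, 0.90):
    print(f"p = {p:.0%}: {amdahl_speedup(p, 4):5.2f}x on 4 cores, "
          f"{amdahl_speedup(p, 32):5.2f}x on 32 cores")
[/code]

Even a 90%-parallel app tops out below 8x on 32 cores, and a
mostly serial one barely moves when you go from 4 cores to 32.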

But even if the applications side were magically fixed, feeding
a large number of cores with data is another problem. Conceptually
speaking, there's not that much difference between a multi-core
CPU of this kind and the large single-image systems SGI makes;
SGI learned long ago that unless the bandwidth scales as the
number of CPUs grows, performance will not increase in a useful
manner (see my Alias benchmark results for good examples of this,
eg. compare the 18-CPU bus-based Onyx to the 2-CPU NUMA-based
Origin300; the Onyx is starved for bandwidth since it has a shared
bus design).
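
As a toy model of that bandwidth argument (the GB/s figures below
are invented purely for illustration): cores scale until the
shared bus saturates, after which extra CPUs just wait on memory.

[code]
# Toy model: effective speedup is capped by whichever runs out
# first - the number of CPUs or the available memory bandwidth.
# All bandwidth figures here are made up for illustration.
def speedup(n_cpus, per_cpu_need_gbs, total_bw_gbs):
    return min(n_cpus, total_bw_gbs / per_cpu_need_gbs)

for n in (2, 4, 8, 18):
    shared = speedup(n, per_cpu_need_gbs=1.6, total_bw_gbs=6.4)    # fixed shared bus
    scaled = speedup(n, per_cpu_need_gbs=1.6, total_bw_gbs=1.6*n)  # bandwidth grows with CPUs
    print(f"{n:2d} CPUs: shared bus {shared:4.1f}x, scalable fabric {scaled:4.1f}x")
[/code]

Once the shared bus saturates (8 CPUs in this made-up case),
adding CPUs does nothing - the bus-starvation effect described
above.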

Thus, right now the real bottlenecks to the useful exploitation of
multi-core CPUs are how to feed data to so many cores, and the
lack of relevant expertise in the sw industry.

What is certainly obvious though is that, multiple cores aside,
a much higher-clocked CPU could easily be released if they so
wished, given how well Core2, i7 and Phenom2 all oc even on air.
But that's not how business works - they are, after all, supposed
to be profit making enterprises with shareholders to satisfy.
They wouldn't release a chip that's suddenly 100% faster all in
one go when much more money can be made by releasing a larger
number of CPUs in stages, each with 20% faster performance every
few months.

On the other hand, if someone did try something like that, these
days they'd probably just be hammered by competition laws. The
Erebus2K chip had the potential to be a major performance step-up,
but alas the company foundered and the technology was bought by
SUN, so grud knows where all that expertise went to (SUN's T2 CPU
perhaps, who knows).

And btw, don't moan if you choose not to take the parallel
programming course at University. ;D I remember very few did so
when I was at Uni (1990), because they couldn't see its relevance
outside research applications. I took it because it interested
me (I could see its usefulness in modelling neural networks, AI
being another option I went for), but it was certainly a hard
topic to learn, especially when paired up with related sw areas
such as functional programming languages. If you think writing in
C is easy, try writing in Prolog. :}

We'll have chips with lots of cores eventually, but they'll be
much more useful outside of the consumer desktop arena long
before we see them in home PCs. I'm sure Intel figures HPC is a
better target market to begin with for such things. SUN is already
using an 8-core CPU, but it's not exactly a consumer product...

Ian.

 
Heh, I remember when 20 seconds was an insane SuperPi score to have; P4s at ~7.4GHz had just hit that 20 second mark, then a 4GHz Pentium M, with AMD oh so far away, then the Core 2 Duos - my E6600 at stock got 20 seconds, and 15 seconds at 3200/1600. Now AMD is finally there, even under the 15 second mark (in a benchmark that loves Intel CPUs) - amazing!

Still, I want to see i7s pushed like this to see what they can pull off - a wPrime run with 8 threads @ ~6GHz could be interesting 😛
 
Why doesn't anyone use a dry box so as to not have to do all the mobo prep? You just vent dry N2 into the box to evacuate any atmospheric moisture.

karl(surf2di4)
 