Stress Test MK II

I doubt the 840EE could run a month at 90°C, even if the surrounding components could.

I wonder if Intel would honor the RMA?

Maxtor disgraces the six letters that make Matrox.
 
Given that THG has done everything possible to make the Intel system run, don't you think they should just up the priority on the DivX thread, given Windows' problem with efficient thread allocation?
I don't consider this cheating, just changing the characteristics of how the threads run.
They could do the same on the Intel chip (not that it would benefit much).
Or even switch HT off, to show how the CPUs perform on an equal core count (this would show Porky how OS thread scheduling under Windows works: badly).
 
I don't understand why people keep saying this is a problem with Windows... it's not a problem, it's a *feature*. And quite honestly, it works.

The DivX encoding is set to low priority; that simply means Windows will make sure anything else gets CPU time first, especially the active app (which is Far Cry in this test).

Really, if you care about DivX encoding speed as much as the other tasks, all you need to do is change the priority so they are equal; I really don't see Windows being at fault here, nor the X2. If I was simultaneously gaming, running a game server and ahm.. well, doing something else CPU-intensive apart from encoding a DivX *cough*, I'd be *glad* my low priority DivX job wasn't stealing so many CPU cycles from my game or game server.
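For what it's worth, bumping a process back up is practically a one-liner. A minimal sketch using the Win32 API (the PID-from-the-command-line plumbing is just illustrative; `start /low encoder.exe` from a command prompt is the launch-time equivalent of "low"):

```c
#include <windows.h>
#include <stdio.h>
#include <stdlib.h>

/* Change the priority class of a running process, e.g. the DivX
   encoder, given its PID from Task Manager. */
int main(int argc, char *argv[])
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s <pid>\n", argv[0]);
        return 1;
    }

    DWORD pid = (DWORD)atoi(argv[1]);
    HANDLE proc = OpenProcess(PROCESS_SET_INFORMATION, FALSE, pid);
    if (proc == NULL) {
        fprintf(stderr, "OpenProcess failed: %lu\n", GetLastError());
        return 1;
    }

    /* NORMAL_PRIORITY_CLASS puts the encoder on an equal footing with
       the game; IDLE_PRIORITY_CLASS is what "low" means in this test. */
    if (!SetPriorityClass(proc, NORMAL_PRIORITY_CLASS)) {
        fprintf(stderr, "SetPriorityClass failed: %lu\n", GetLastError());
        CloseHandle(proc);
        return 1;
    }

    CloseHandle(proc);
    return 0;
}
```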


= The views stated herein are my personal views, and not necessarily the views of my wife. =
 
I wanted to kind of point out a few issues I see with the tests:

The FPS listed for Far Cry are the current, not average, so they jump around a bit. If the CPU is lagging a bit, it does not drop the FPS much, but makes the game feel much choppier. I also do not have a lot of confidence in their software to track the results too much. They have been having a lot of trouble with it.

As for the scores themselves, I do not see them as being very important. They are so tailored to the P-EE it is ridiculous. DivX with XMPEG has never been very popular. It is very buggy and massively favours the P4. Things like DVD2AVI and VirtualDub are much more popular. They are also more stable, updated more often, and do not particularly favour any one platform. Xvid has also become VERY popular too.

I still think DVD Shrink is a much more relevant program, as recordable DVD drives and media are much cheaper now, and the ability to play your "back-up" in a standard DVD player is a big plus. Turning on Deep Analysis and Enhanced Image Quality and backing up a long movie like any of the Lord of the Rings films onto a single DVD would have been a much better test. Also, a game that loads the CPU more, like Doom 3 or HL2, would have been better. If this did not load the P-EE all the way (not sure if DVD Shrink is threaded at all), then encode an HD-DVD with WMV-HD. That is pretty well multi-threaded and also massively loads the CPU.

Really, according to Intel the P-EE is an "enthusiast" CPU, so I do not see too many people compressing big WinRAR files often either (I've never done it myself). Ripping MP3s off a CD and converting DVDs to DivX is outdated, and Far Cry does not push the CPU that hard. It is more graphics-bound (especially when not running SLI, and even SLI does not give that big a boost). Another advantage of DVD Shrink is that it runs VERY well in the background and does not drag a system down. I would not want to try to play a game with it running on my CPU, but on an X2 it would run great.

There is the question of why the two Intel boards did not work at all and why the Epox board fried (well, it was an Epox board... 😀 ). I stated before that this CPU is likely drawing 150W or more. What is it rated for... 120W? If the board makers designed for 120W and you drop in a chip that draws 150W, then a lot of boards ARE going to fail. Now, the Epox board said it was rated for the P-D 3.0GHz and faster and did not specify the P-EE. Maybe the HT does add a lot of power draw. Building a system around the P-EE would not be very fun if it is so picky about the board, power supply, heatsink and fan, and case cooling. I think this is one point where BTX is really NEEDED.
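To put rough numbers on that (assuming a core voltage around 1.3V, which is in the right ballpark for these chips, so treat this as an estimate, not a spec):

$$ I = \frac{P}{V_{\text{core}}} \approx \frac{150\ \text{W}}{1.3\ \text{V}} \approx 115\ \text{A} \qquad \text{vs.} \qquad \frac{120\ \text{W}}{1.3\ \text{V}} \approx 92\ \text{A} $$

A VRM sized for ~92A continuous being asked to deliver ~115A would explain fried boards nicely.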

I also noticed the fan on that heatsink was supposed to be rated for 3500RPM, but is running around 4500RPM now. I guess the P-D/EE heatsink fan was a little more than just a tad faster like THG said... it is running nearly 30% over its own rating and is likely pretty loud. Isn't that thing like a 92mm fan? That would be a screamer at 4500RPM. Even an 80mm fan at 2500RPM is around 28-30dB. This thing must be running at 45dB or more. Ouch! Even in a workstation that is getting loud, but as I said... an "enthusiast" system is a home system, and a loud fan like that would not be tolerated by many (my loudest fan is just over 20dB). In case you did not know, every 3dB doubles the sound power. If that thing is a 92mm fan, even at 3500RPM it is running 45dB, and at 4500RPM it would be around 53dB (the 92mm Vantec Tornado is 56dB at 4800RPM).
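As a sanity check on those numbers (using the standard fan affinity law, under which a fan's sound power scales roughly with the fifth power of rotational speed, so this is only an approximation):

$$ \Delta L \approx 50 \log_{10}\!\left(\frac{4500}{3500}\right) \approx 5.5\ \text{dB} $$

So wherever the 3500RPM baseline really sits, the speed bump alone is worth roughly 5-6dB, which lands in the same ballpark as the Tornado comparison above.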
 
No, they shouldn't change the priority, they shouldn't turn off HT, they shouldn't put in a regular 840....they should leave it as is. Porky doesn't need to be shown anything, he knows what is what and wants to only talk about what's not.

They should stop the whole test, get a board that works in SLI for Intel and restart with all the originally planned components. The Intel's accompanying components failed in the TRUE stress test (mind you, THIS IS NOT A PERFORMANCE TEST....the numbers are only there to show if SOMETHING IS REALLY WRONG [nothing is, Windows and the CPUs are doing their job perfectly fine]), so it should have already lost, but I realize that the boards didn't play well and that's not fully Intel's fault.

IMO, the stress test is over and now THG is trying to save Intel's face by getting its numbers back up and getting at least a 24hr runtime.

Maxtor disgraces the six letters that make Matrox.
 
I agree that this is a feature or characteristic of having two real cores vs 4 virtual cores. Switch off HT on the Intel and you would have the same behaviour.

It might just get the X2 to process even more (it's what I do on a daily basis with large business systems to ensure they are running at peak throughput).
 
Hey Stimpy, you read my mind. With all the concessions that the Intel chip has had (new motherboards, coolers, change of graphics card, etc.), I think it would be only fair to cut the X2 some slack as well by sorting out the DivX encoding - you never know, it may benefit the Intel by spreading out the X2's dominance in the other areas.

By the way, are THG changing the AMD configuration in accordance with what the Intel is running (for example, are they changing the motherboard, cooler, etc.)? I thought this was supposed to be a direct comparison (using the same makes of motherboard/memory/similarly specced components).

I see that the Intel is catching up a bit now - if this is because of a hardware change, then surely the X2 has the same rights? Why should it stick with the same hardware that it started the whole test with? Surely it is now at a disadvantage (hardware-config-wise).

...and surely this invalidates the whole experiment...

I know that keeping the same config goes to show how good the stability of the X2 chip is, but if a hardware change (like Intel's) could alter things slightly then it should have been done - when they restarted both systems.

(unless it already has been done - then ignore all the last bit)

However, I still think THG should sort out the priority issue; they've f*cked around with the Intel enough.

Did I sound like a novice?

My first post :)

Later


 
I mostly agree, so I can keep this post short 😀

One thing perhaps: a test like this needs a framework. They need to lay out what the objective is, and how they are trying to achieve it:

- If they are just running a stress test to gauge the stability of the CPUs, then it's a nonsense test apart from showing the issues with all the Intel MBs. If you want to isolate CPU glitches, you need ECC RAM at the very least, and you need to run code that is known to run without issues. Not games and freeware DivX stuff. Oh, and you'd need a couple of years of uptime and several samples on top of a nuclear shelter if you want meaningful results. CPUs produce errors maybe once every 3 years if you have bad luck. 2 of them will likely be caused by cosmic radiation.
- If they are testing motherboard stability, then they have proven their point, but there is no need for the performance charts.
- If they are testing performance, well, I agree with your comments. They should ensure it produces meaningful results, which it doesn't now. This is like a triathlon where competitors are free to choose how much they swim, run or cycle, with the results then charted. Unless someone wins in ALL the different disciplines, how could you ever conclude anything from that?
- If they are testing thermal/power issues or their impact on stability or whatever, for crying out loud, buy another watt meter and measure the amperage per system, not both combined. Although that would also require identical PSUs again to be meaningful.

As someone else posted elsewhere: its still a good stress test for the LCD's 😀

(ok, so it wasn't THAT short, sue me :)

= The views stated herein are my personal views, and not necessarily the views of my wife. =
 
> Those spikes are probably just some artifact from a benchmark finishing and restarting

A single benchmark restarting allows the CPU to cool by 10°C? I think not. Especially since all the CPUs drop in usage, not just 1.

> The delays throttling introduces also keep the CPU busy as far as the OS is concerned.

That would be true if Windows were the monitor. I don't think it is.
 
In My Humble Opinion (and to keep it short, my gf is waiting):

Yes, if it were meant as a performance test: a user wants to run 4 applications at the same time with every one of them running the fastest they can. -> They should set the priority of the DivX encoding.

Disagreeing a bit with the cosmic radiation, UFOs 🙂P), ??? and crashes. They're in the same room, so it would be about even for both of the systems and thus something you could ignore. (Besides, it would also be a test of a system's ability to survive whatever solar flares.)

But I totally agree that they should have a clear goal for the test, and they need to define how that goal is measured: e.g. DivX encoding minutes? Or Far Cry framerate stability? Or number of reboots?

And I agree on the systems' thermal/power issues.

BTW: both systems came up with an error now... wonder what it is :) -> Far Cry dropped to 14 FPS on the AMD after the error, I think...
 
> A single benchmark restarting allows the CPU to cool by 10°C? I think not. Especially since all the CPUs drop in usage, not just 1.

How do you know? I think that chart is an average of all cores, and the scale is adjusted dynamically, i.e., they may only drop to 90% or so. The temperature drop also is closer to 5°C, even though I agree that is still non-trivial.

> That would be true if Windows were the monitor. I don't think it is.

It has to be, one way or another. Even an external app hosted on another machine could only get this info from the Windows kernel on the tested 840EE. Nothing else can know how busy or idle those cores are.

= The views stated herein are my personal views, and not necessarily the views of my wife. =
 
> Disagreeing a bit with the cosmic radiation, UFOs 🙂P), ??? and crashes. They're in the same room, so it would be about even for both of the systems and thus something you could ignore. (Besides, it would also be a test of a system's ability to survive whatever solar flares.)

No, not true. AFAIK cosmic radiation consists (among other things) of high-energy particles like protons, heavy ions or neutrons. When a CPU (or memory module) gets hit by one of those, it can cause bitflips, or worse, potentially even kill the component. Now, these particles are thankfully pretty rare down here, but it would be pure chance whether one hits the AMD or Intel chip. It's not like both chips would be subjected to the same radiation just because they are in the same room. The frequency of this happening is low enough that you might need hundreds of test systems running several years to make randomness statistically insignificant.
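To put a number on that (taking the once-every-three-years-per-CPU figure above at face value, purely as a ballpark): with two systems running for one month, the expected count of radiation-induced upsets is roughly Poisson with

$$ \lambda = 2 \times \frac{1\ \text{month}}{36\ \text{months}} \approx 0.056, \qquad P(\text{at least one event}) = 1 - e^{-\lambda} \approx 5.4\% $$

So in the unlikely case anything happens at all, which box it hits is a coin flip, and that one event would dominate the whole "result".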

Here is a good read on this, just in case you think I'm making up the issue:
http://www.edn.com/article/CA529381.html

= The views stated herein are my personal views, and not necessarily the views of my wife. =
 
>All I need is an invitation...

I think you'd also need to accuse me of something, no?
:)

= The views stated herein are my personal views, and not necessarily the views of my wife. =
 
So the charts are adjusted dynamically, but only by Windows?

> The temperature drop also is closer to 5°C,

Look again. The range is between 5 and 10 degrees. Right now it is down to 58°C. It was lower last night, when the running temp was higher. As well, their 3500RPM fan seems to be running at 4800RPM.
 
> So the charts are adjusted dynamically, but only by Windows?

I assume the server generates the charts, but the data has to come from the Windows kernel on those machines (probably through perfmon).
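For the curious, reading load figures the way perfmon does takes only a few PDH calls. A minimal sketch (the counter path is real; the once-a-second polling loop is just for illustration):

```c
#include <windows.h>
#include <pdh.h>
#include <stdio.h>

/* Link against pdh.lib. Polls total CPU load once per second via the
   Performance Data Helper API -- the same counters perfmon reads. */
int main(void)
{
    PDH_HQUERY query;
    PDH_HCOUNTER counter;
    PDH_FMT_COUNTERVALUE value;

    PdhOpenQueryA(NULL, 0, &query);
    /* "_Total" averages all logical CPUs; use "(0)", "(1)", ... to
       watch individual cores or HT threads instead. */
    PdhAddCounterA(query, "\\Processor(_Total)\\% Processor Time", 0, &counter);
    PdhCollectQueryData(query); /* first sample just primes the counter */

    for (int i = 0; i < 10; i++) {
        Sleep(1000);
        PdhCollectQueryData(query);
        PdhGetFormattedCounterValue(counter, PDH_FMT_DOUBLE, NULL, &value);
        printf("CPU: %5.1f%%\n", value.doubleValue);
    }

    PdhCloseQuery(query);
    return 0;
}
```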

> Look again. The range is between 5 and 10 degrees. Right now it is down to 58°C.

Yes, only a single spike. Maybe someone opened up a window in the office or something :)

But it seems generally the core runs around 68°C stressed with spikes downwards to 62/63°C.

I still find it slightly odd as well, but this isn't what you get when a CPU throttles. Have a look here:
http://www.digit-life.com/articles2/p4-throttling/
and note the time scale. Besides, someone else in this thread claimed the 840EE doesn't even support thermal throttling, which seems likely considering it kept running at 90°C until it died.

No, I still believe it's a benchmark script artifact. Maybe each time it finishes encoding something, the script has to delete something on the disk, or read, or whatever, and during that time one virtual CPU becomes mostly idle, allowing it to cool down. You wouldn't see this on the Athlon, since there would always be 3 other threads available to be executed, so neither of its cores would ever idle. Maybe it's only during those short moments that the X2 gets any work done on the DivX test?

= The views stated herein are my personal views, and not necessarily the views of my wife. =
 
False advertisement. Or something like that... my lawyer's mad nice, he'll figure something out.

Maxtor disgraces the six letters that make Matrox.
 
> don't you think they should just up the priority on the Divx thread,

Absolutely not. You are missing the point. The EE is stealing FPS from Far Cry to encode. Once again, HT is proving how bad it is for gamers. There is no way that the A64 should be 66% faster in games than the EE. What is worse are the minimum FPS figures (not being shown). In actual gameplay, the EE would easily be considered unplayable.
When you also take into account that the demo is being run "on the rails" so to speak, with SSE2 at optimal, and prefetch and branch prediction batting 1000, the real margin between the two systems would be much higher.
HT is bad for gamers.
 
Yes, when you take the timescale into account, they do look very similar.

> yes, only a single spike.

Or at least, now only a single spike, since the fan RPM has gone up to 4800. BTW, that happened at 2:51 Eastern, the same time that the Intel system stopped auto-refreshing and crashed to desktop. (I saw this happen, but nowhere is it recorded on the charts.)
As for throttling not being included, would you build a deluxe auto without airbags? Be serious, throttling is there to protect the chip. Would anyone be so stupid as to leave the best safety feature off the top-of-the-line product?
I took another look at the charts, and it is clear that all CPUs are affected, and for a duration of 1 to 3 minutes. That ain't no single prog.
 
>Yes, when you take the timescale into account, they do look
>very similar.

Euh.. you mean when you ignore the time scale, I hope?
Anyway, I just found the datasheet on the 840EE:
http://download.intel.com/design/PentiumXE/datashts/30683101.pdf
It does support throttling and Thermal Monitor 2.
Max Tcase is indeed also ~70°C, so it's certainly not impossible the CPU is throttling, but I still doubt that explains the charts we are seeing. Throttling happens extremely quickly. The TCC operates in 3-microsecond intervals and, as I said, is invisible to the software.
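Quick arithmetic on why that would be invisible here (assuming TM1-style clock modulation, which gates the core clock at some duty cycle $d$):

$$ f_{\text{eff}} = d \cdot f_{\text{nom}}, \qquad \text{e.g. } d = 0.5 \Rightarrow 1.6\ \text{GHz effective on a } 3.2\ \text{GHz part} $$

Since the gating happens on a microsecond scale, a chart sampled every few seconds or minutes can only ever show the averaged-out effect, never the throttling events themselves.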

In short: it might well be throttling, but that is not what we are seeing in those charts IMO.

= The views stated herein are my personal views, and not necessarily the views of my wife. =
 
> Euh.. you mean when you ignore the time scale, I hope?

According to that article:

> it takes about 20-25 seconds to reach the final stage since throttling begins

and

> We only need to see the throttling ending and to note the time. It takes Northwood about 5-7 seconds to do that, after which it returns back to normal performance in the same stepwise way it has gone down.

That seems very much like what we are seeing.
 
I have a dual Xeon with HT.

Running 3 F@H instances, with nothing else running, all 4 CPUs run at 75%, which is just what you'd expect: three compute-bound threads spread over four logical CPUs averages out to 75% each. For whatever that's worth to ya. I think I even have a screen shot (http://home.comcast.net/~apesoccer/srv_files/xeon2.JPG) or two (http://home.comcast.net/~apesoccer/srv_files/xeon3.JPG) of that from some time ago...

Just to show that when one thread dies, the other 3 still spread across all 4 processors.

F@H:
AMD: [64 3000+][2500+][2000+ down][1.3x2][366]
Intel: [X 3.0x3][P4 3.0x2][P4 2.4x5 down][P4 1.4]

"...and i'm not gay" RX8 -Greatest Quote of ALL Time
 
THG Germany has the 4th update. They state they did not want to change the priority of the DivX test as most users would not do that, but then again most users would not run those 4 programs at the same time... EVER. They are talking about restarting the tests with HT disabled to see how that affects the load balancing. I wonder if it will affect the temps too.

They fessed up about the temps, if the translations are right. It looks like they put the wrong cooler on. They got two coolers with the Intel 955X boards. One was for a P-D up to 2.8GHz and the other was specifically for the P-EE 3.2GHz. Wow, what a difference that made. The RPMs seem to be off though (again, maybe due to the translator), as it says the normal box cooler runs at 2000 U/min (which likely means revolutions per minute, or RPM) and the one for the P-EE is rated for 3500RPM (or U/min), but it was actually running at ~4500. The temps averaged around 67°C. THG pointed out the max temp the P-EE was rated for was 69°C. A 2°C margin, and that was with good case cooling in a temperature-controlled room.

Oh, and I guess the TDP is 130W for the P-EE. I still think it is higher than that, but I guess it is possible the P4 bus or the NF4 IE chipset was drawing 20W or so more.