Is the X2 truly that bad?

Some run hotter than others, and some dissipate much more heat than others, too. There is a big difference between a Geode that draws a few watts and a Xeon Paxville DP that draws well over a hundred. CPUs didn't need heatsinks or fans at all up until the 486DX4s or so; then they needed just a passive heatsink, and finally an actively-cooled heatsink around the PII/PIII era. After that, chip power dissipation generally crept up as time went on until the figure reached about 130W rated TDP, at which point the heat and power draw just became too excessive.

Today, we have a dichotomy in power draw. The average CPU draws less power today than in the past, but the highest-performing ones draw very large amounts. AMD and Intel are both guilty here; the mainline X2s and Core 2 Duos are rated at 65W TDP, but the highest-performing parts from either maker run at nearly, or exactly, twice that. I'd personally throw my lot in with a lower-clocked CPU that burns a ton less power and makes a ton less heat and noise than one that's clocked a little faster and resembles a blast furnace.
 
hehe.. I know. I pretty much remember through the years.

But the size of the die back then compared to now... and the size of the HSF, not to mention water cooling... I still can't comprehend any chip running hot, since you shouldn't run a CPU without a HS anyway.

Ahem... When I can cool my beer by setting it on top of a CPU, I'll say, now DAT's a cool CPU!! :lol:
 
Both Intel and AMD label their heat sensors as accurate to within 1 degree Celsius of the actual temperature. Different sensors, but I would hope they use the same temperature scale lol :)
But they don't measure at the same location.

All C2Ds are rated at 65W power draw, which, yes, is less than most X2s. However, there are a couple of lower-end 65W X2s out there.
But they're much slower. On the other hand, Intel's hottest C2D, the QX6700, still uses less power than the 6000+. Clearly, at this point in time, power dissipation, and therefore heat, is in Intel's favor.
 
C2D had the X2 completely beat in price/performance when I bought my X2 3800+ Windsor last summer...but now my proc is 33% cheaper! (bought for $153, now just $98 or something). After looking at some benchmarks here at THG, I found that the most recent price drop makes price/performance between C2D and X2...about the same!

THG Benchmarks, Newegg prices:

E6400: $222
Better In:
3D Mark - Graphics
AVG Antivirus
CoD2
DivX
F.E.A.R #2
LAME MP3 encoder
Powerpoint
Word
Multitasking #1
Multitasking #2
Ogg
PC Mark 2k5 - CPU
Photoshop #1
Photoshop #2
Premiere Pro
Price/Performance Index
Quake 4
Serious Sam 2
SiSoftware 2007 - Multimedia Integer
SiSoftware 2007 - Multimedia FP
SiSoftware 2007 - Arithmetic ALU
UT: 2004
WinRAR
Xvid

X2 5000+: $215
Better in:
3DS MAX
3D Mark - CPU
Clone DVD
F.E.A.R #1
iTunes
Mainconcept H.264 encoder
PC Mark 2k5 - Memory
Pinnacle
SiSoftware 2007 - Memory FP
SiSoftware 2007 - Arithmetic FLOPS
SiSoftware 2007 - Energy
SiSoftware 2007 - Memory Integer
Windows Media Encoder Streaming
WMA 9.1

Pretty good matchup; it seems the slightly more expensive C2D wins in about 50% more benchmarks. One thing to add is that the E6400 tests were performed with DDR2-800 RAM while the 5000+ tests used slightly slower DDR2-742. It's also worth pointing out that the 5000+ has a much higher clock speed, so it's arguable that the E6400 offers more overclocking headroom (and there's also the fact that we all know the C2Ds are better for overclocking). On the other hand, the C2Ds just run hotter.
By F.E.A.R. #1 and F.E.A.R. #2 do you mean the expansion vs. the original? If that's the case, I'd rather have better performance in the original because the expansion sucked big time, and doesn't support widescreen.
 
accord99:
You need to stop assuming I'm a fanboy. I didn't use that evidence to say that the X2 is better than the C2D. It's pretty damn obvious that beyond the 65W X2s, C2D totally wins...which is why I mentioned the $230 top price someone should pay for an X2. Like I said, beyond that there's no question; C2D all the way.

How is the location of the temp sensor different, and how do you know that the difference, if any, makes the temps on X2s have an unfair advantage over C2D? I'm not saying all this isn't true (you'd LOVE it if I completely did not believe you), I'm asking you to cite a source.

Heyyou27: that's the original FEAR with different setups, just check the THG CPU charts. I wouldn't use FEAR performance as a deciding factor, but if you really like the game, that's ok.
 
accord99:
How is the location of the temp sensor different, and how do you know that the difference, if any, makes the temps on X2s have an unfair advantage over C2D? I'm not saying all this isn't true (you'd LOVE it if I completely did not believe you), I'm asking you to cite a source.
The Core 2 Duos have new sensors located nearer to the core hot spots. This point is called Tjunction. They also report temperatures as a delta from the throttling point, which is assumed to be 85C. So temperatures reported by utilities like Core Temp have been calculated from the 85C point. A temperature reading of 60C means you've got 25C of headroom before throttling, which is a lot.
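To make that delta arithmetic concrete, here's a rough sketch in Python (assuming the 85C TjMax described above; the function name and numbers are just for illustration, not Intel's actual interface):

```python
# Rough sketch of how a utility like Core Temp turns the DTS delta readout
# into an absolute temperature, assuming TjMax = 85 C as described above.
TJ_MAX_C = 85  # assumed throttling point for these Core 2 Duos

def core2_temp_from_delta(dts_delta_c: int) -> int:
    """Absolute core temperature implied by a DTS 'distance to throttle' reading."""
    return TJ_MAX_C - dts_delta_c

# A delta of 25 reads as 60 C, i.e. 25 C of headroom before throttling.
print(core2_temp_from_delta(25))  # -> 60
```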

AMD's documents show that their temperature sensor reports TCase, which is the temperature at the center of the IHS. In the past, the temperature sensor was particularly inaccurate and could be off by more than 10C. See page 73.

http://www.amd.com/us-en/assets/content_type/white_papers_and_tech_docs/31412.pdf


So there's no point in comparing temperatures between two completely different processor families. Plus since the Core 2 Duos use less power, it's clear that they will be cooler.
 
So, process technology and design have no impact on operating temps and Tmax? I don't understand the last sentence in your post. Brisbane CPUs are very cool-running chips, even with added vcore.
 
So, process technology and design have no impact on operating temps and Tmax? I don't understand the last sentence in your post. Brisbane CPUs are very cool-running chips, even with added vcore.
Since all power used by the CPU is converted to heat, a CPU that uses less power is clearly going to be cooler. So far, reviews show Brisbane uses less power than Windsor but still can't match Conroe.
 
I consider myself a gamer, as my specs say: 8800, 4GB RAM, 4x HDDs in RAID 0. My AM2 and my 939 X2 3800+ have done me fine (albeit they've been running at 2.4GHz all the time; nominal is 2.0GHz for an X2 3800+).

The only thing that's been holding me back is that I've never used a higher-performance Intel mobo or the Nvidia Intel boards, as I haven't been sure what to expect; I've been building PCs with AMD for the last 5 years.

Some mobo manufacturers are letting themselves down, like ASUS/ASRock: using a P4 PWM fan header on an AMD motherboard breaks the quiet fan control so the fan runs at 3500RPM, and I had to go out and buy a load of Fan Mate 2s.

I'll probably jump on the Core 2 Duo soon, as I'd like to get more use out of my 8800, and then I can use the M2N32-SLI Deluxe mobo as a server motherboard along with the 3800+ X2, since it has two Nvidia gigabit ports on it (that are HW-based). That way I won't see 90% kernel use on my server any more; gigabit PCI-based cards suck when they're actually used.

It will be interesting to see whether the next chip from AMD is good or not.
 
So, process technology and design have no impact on operating temps and Tmax? I don't understand the last sentence in your post. Brisbane CPUs are very cool-running chips, even with added vcore.
Since all power used by the CPU is converted to heat, a CPU that uses less power is clearly going to be cooler. So far, reviews show Brisbane uses less power than Windsor but still can't match Conroe.

It's too bad that almost all reviews of Brisbane omit publishing operating temperatures. I found one lousy review that reports the findings. However, there are numerous posts around the web praising how cool these chips run. Which reviews are you referring to? I'd love to see them, as I'm coming up practically empty.

http://www.anandtech.com/cpuchipsets/showdoc.aspx?i=2889&p=8
 
Accord99, your links certainly do prove that the Brisbane does in fact use less power than the Windsors do, but they don't say that Conroes use less power than Brisbanes do; in fact, some say the exact opposite.

1. Your first link compares a 2.13 GHz Core 2 Duo E6400 to a 2.60 GHz X2 5000+. More GHz = more heat if everything else is equal; that's simple math (see the power-scaling sketch after this list). The X2 5000+ should have been compared to a 2.67 GHz Core 2 Duo E6700, or the E6400 should have been compared to a 2.1 GHz X2 4000+ Brisbane. That would be a much more telling tale.

2. Your second link shows a 2.5 GHz Brisbane consuming 41 watts, 12.6 W less than the C2D E6700. Oddly enough, the Prime95 wattages for the same chips in this LostCircuits test differ greatly from the BeHardware test, even though the same test is being run on the same chips. LC has the X2 5000+ Windsor consuming 68.6 W at full P95 roar, but BeHardware has it sucking 90 W. I wonder what the difference is? Supposedly both groups used a DMM to tap into the +12V lines going into the VRM. And oddly enough, the C2D E6400 in the BeH test consumes 49 W with two instances of Prime95, while LostCircuits has the faster-clocked Brisbane 4800+ at 41 W.

3. Xbit also tests using a DMM on the +12V going into the VRM. But they use a different test than the others and still compare a 2.5 GHz Brisbane to a 2.13 GHz Core 2 Duo.
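To put rough numbers behind the "more GHz = more heat" point in #1: dynamic power scales roughly with C·V²·f. Here's a quick sketch; the capacitance figure is invented and only the ratio between the two results matters.

```python
# Back-of-the-envelope sketch of "more GHz = more heat, everything else equal":
# dynamic (switching) power scales roughly as C * V^2 * f. The effective
# capacitance below is a made-up value; only the ratio between results matters.

def dynamic_power_w(c_eff_farads: float, vcore_v: float, freq_hz: float) -> float:
    """Approximate dynamic power of a CPU core."""
    return c_eff_farads * vcore_v ** 2 * freq_hz

C_EFF = 15e-9   # hypothetical effective switched capacitance
VCORE = 1.30    # hold voltage constant so only clock speed changes

p_low = dynamic_power_w(C_EFF, VCORE, 2.13e9)   # 2.13 GHz part
p_high = dynamic_power_w(C_EFF, VCORE, 2.60e9)  # 2.60 GHz part
print(f"2.13 GHz: {p_low:.0f} W vs 2.60 GHz: {p_high:.0f} W")
```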

These tests are a bit dodgy at best, especially when the same test run by two different groups gives wildly different results. It is very hard to isolate the wattage drawn by just the CPU by measuring the +12V going into the VRM, as the VRM may be quite inefficient and burn up a fair chunk of the watts itself. This is what I think causes the wide differences between the BeHardware and LostCircuits tests. Also, if different boards for the same CPU have differing VRMs, then why would we expect boards for different CPUs to use similar VRMs? And if they want to do an apples-to-apples test, they should test the wattage burned by similarly-clocked CPUs. Sure, my undervolted X2 4200+ will run a lot cooler and draw fewer watts than your Core 2 Extreme at 3.73 GHz, but that's not really meaningful, and neither are these tests.

I call BS on this one. It's a good measure of how much the CPU + VRM burn, but if we're debating how much just the CPU burns, then this test is about as accurate as a drunk trying to throw darts. I say either put a big old disclaimer on it OR at least map the VRM's efficiency at varying voltages and amperages and then correct the draw figures. Of course, measuring the actual CPU draw by tapping into the hundreds of little power pins on the bottom of the CPU would be ideal, but that would be pretty hard. The guys who publish these +12V-into-the-VRM figures are doing flawed science and I call them on it.
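Here's the kind of correction I mean, as a quick sketch; the efficiency number is made up, and a real correction would need a measured efficiency map for that particular VRM.

```python
# Sketch of the correction: watts measured on the +12 V feed include VRM losses,
# so CPU-only draw is roughly the measured figure times the VRM's efficiency
# at that load. The 85% efficiency is invented for the example.

def cpu_power_from_12v(measured_12v_watts: float, vrm_efficiency: float) -> float:
    """Estimate CPU-only draw from a +12 V measurement taken at the VRM input."""
    return measured_12v_watts * vrm_efficiency

# Example: LostCircuits' 68.6 W reading through a hypothetical 85%-efficient VRM
print(f"{cpu_power_from_12v(68.6, 0.85):.1f} W actually reaching the CPU")  # ~58.3 W
```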
 
EDIT: Wow MU Engineer, I'm not sure Accord wants to read all this, after seeing your post. Oh well, he got himself into it.

Accord certainly seems smart enough to turn something REALLY simple into something with lots of industry vocab words.

All that talk for nothing! You still haven't provided me with evidence on the difference between the sensor in the C2D and the X2. You simply have not. You have made a statement and given me a source for one side (AMD), but not the other (I'm about to prove your conclusions about AMD totally wrong anyways...with your source!). Come on, stop screwing with us. No one just knows something; you had to have heard all that somewhere, because I'm pretty sure you don't work for Intel. Good try, but you need to show me a source for Intel. AGAIN, I'm not saying that what you're trying to tell us all isn't true, I'm asking you to cite a source, because as of now, what you're saying...it's just talk.

After looking through that AMD PDF, I found something interesting:

"The processor provides an on-die thermal diode with anode and cathode brought out to processor pis. This diode can be read by an external temperature sensor to determine the processor's temperature"

In addition, the term "Tcase" is NOT used in the entire document...did you make it up? From this quote in the document:

"The temperature offset is used to normalize the thermal diode measurement to reflect case temperature at the worst case conditions for a part"

My translation is that the value Toffset (just some value that AMD calculates for every individual chip it makes) is used in calculating the sensor output for the CPU, but it isn't the actual reading used by the external sensor in the motherboard, as accord believes. Instead, "Toffset should be subtracted from the temperature sensor reading" (yes, this is a quote from the AMD document). What this means is that whoever develops motherboards for the CPU needs to take the actual temperature reported by the on-die thermal diode mentioned above (on-die means it's actually ON the processor itself, so that's the reading used) and subtract Toffset (a calculated temperature that reflects worst-case conditions) to end up with the temperature reading actually reported by the motherboard. I'm not sure exactly what number Toffset really is, but my guess is that it's a very low number, because AMD wants the safest temperature to be reported. If you haven't put two and two together, this means that the temperature reading you see in SpeedFan (or the BIOS) is the maximum temperature the chip could possibly be experiencing, because in the calculation they want to assume the case temperature (heat NOT produced by the CPU but contributing to the heat around it) is at its maximum.
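To spell out that arithmetic in a tiny sketch (the numbers here are made up, not figures from the PDF):

```python
# Tiny sketch of the arithmetic quoted above: take the on-die thermal diode
# reading and subtract the per-chip Toffset to get the value the board reports.
# Both numbers below are made-up examples, not figures from AMD's document.

def reported_temp_c(diode_reading_c: float, t_offset_c: float) -> float:
    """Temperature the motherboard would report after applying Toffset."""
    return diode_reading_c - t_offset_c

print(reported_temp_c(52.0, 2.0))  # -> 50.0, assuming a small Toffset as guessed above
```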

Why would they do this? Why not have it report low temps? I'll tell you why: so they don't get sued. OK, that's a little bit of a stretch...what I mean is that AMD wants their core temperatures to be read as the maximum they possibly could be. If someone was using an AMD chip and it failed because it was being run at too high a temperature, but it was reporting temps that were within the operating specs that AMD supplies for the chip, then it's AMD's fault and they're screwed. HOWEVER, the way AMD does it, the chip reports the highest possible temp. If the chip fails then, AMD can say "look, the max operating temp on that chip before tested failure is (let's say) 85C. Your motherboard was reporting a core temp of 90C, because it was assuming you had horrible cooling. You should have noticed the 90C reading and done something about it." Do you see the intelligence in this setup?

AMD does it this way, and Intel, blackmailing scumbags though they are, probably does it the exact same way. It's all about liability.
 
Accord99, your links certainly do prove that the Brisbane does in fact use less power than the Windsors do, but they don't say that Conroes use less power than Brisbanes do; in fact, some say the exact opposite.

It's 2-1 for Conroe. Most reviews of system power consumption also give the edge to Conroe. So with the evidence that's currently available, I give the edge to Conroe.

Here's BeHardware's review of the 6000+ that includes the E6600:

http://www.behardware.com/medias/photos_news/00/19/IMG0019367.gif

The E6700 may have roughly the same power consumption as the 5000+, but then it's also much faster in Prime 95.
 
All that talk for nothing! You still haven't provided me evidence on the difference between the sensor in the C2D and the X2. You simply have not.
You have made a statement and given me a source for one side (AMD), but not the other (I'm about to prove your conclusions about AMD totally wrong anyways...with your source!) Come on, stop screwing with us. No one just knows something, you had to have heard of all that somewhere, because I'm pretty sure you don't work for Intel. Good try, but you need to show me a source for Intel. AGAIN, I'm not saying that what you're trying to tell us all isn't true, I'm asking you to cite a source because as of now, what you're saying...it's just talk.

Here's what the author of CoreTemp has to say about the sensors:

In Rev F chips from AMD, the reported temperature also seems to be quite accurate, but from different reports and white papers I've seen, the CPU leaves the factory without having the DTS properly calibrated. AMD claims it could have an accuracy range of ±14°C.

It is possible to detect the Tjunction temperature in Intel CPUs and it is possible to detect the TCaseMax of Rev E AMD chips via software; unfortunately, I've yet to find the info on how to get TCaseMax for Intel chips or Tjunction for AMD (which is the most important).

http://www.overclockers.com/articles1378/index.asp

In addition, the term "Tcase" is NOT used in the entire document...did you make it up? From this quote in the document:

Page 78:

http://www.amd.com/us-en/assets/content_type/white_papers_and_tech_docs/31411.pdf

And speaking of evidence, where is your evidence that Core 2 Duos run hot?
 
If you read the last part of my post, there are other things at play besides just the CPU in these power measurements. Like the VRM, for example, in the plug-into-the-+12V-lines test. CPU power draw is very hard to isolate without a ton of work, especially across different motherboards, so I don't buy the numbers put forth in these tests.

The entire system power draw is certainly an important metric, and probably the one that really matters most, at least in my opinion, since that's the heat I have to pay to generate and then pay again to remove from my room. Intel does tend to win those more often than not (though mostly under load; AMD generally wins at idle). But that's not the question; the question is whether AMD's 65W TDP CPUs draw less power than Intel's 65W TDP CPUs. I have to say the results are inconclusive due to the fundamentally flawed nature of the tests and the wildly inconsistent results for the same experiment.

So at the end of the day, we can compare total system power draw for similar-TDP CPUs, and CPU temperature. The first isn't that good at predicting how much the CPU alone eats, and the second isn't comparable between AMD and Intel CPUs, nor particularly accurate, if some people are to be believed. So we'll just have to say that this question either needs some VERY in-depth work to solve or will just remain unanswered.
 
Performance is measured in many ways. Personally, I don't perform well around systems that make a lot of noise over long periods of time. Also, I live in the southeastern part of the US, and my home can get pretty warm in the summertime. So I want a CPU that can run dead cool without blasting fans at 3500 RPM. I don't bother with gaming and I certainly am not about to bother with overclocking. To each their own, of course, but I've never been able to justify cranking up the voltage and risking a CPU meltdown for some minimal performance gain. I'd be more likely to just buy a faster proc/RAM/hard drives/whatever.

As to the previous notes about the accuracy of reviews, my belief is that most reviews are about as accurate as Bush's diction, the 5 o'clock news, and the National Enquirer.
 
Is it Intel's propaganda that is responsible? Or the ignorance of the average idiot? (They don't call them that for nothing, ya know.)

I personally feel that too many people blame corporations when, in fact, their own laziness and lack of initiative to do a little research (and thus remaining ignorant) are the real culprit... Trust me, anyone who has EVER worked a help desk knows whereof I speak.
 
I was wrong about Tcase; it's possible I made a mistake in my search.

From your sources, it looks like the author at Overclockers was able to get only some of the temperature information off the chips he used. In addition, your quote about the DTS is misleading; you're missing the next part! The author believes that the CPU leaves the factory without having the DTS properly calibrated, but then he goes on to say that "The only thing I've noticed is some older AMD CPUs either have a very large delta between two cores or sometimes give some really low temperature readings. I guess this is understandable as this feature was unofficial in those CPUs..." So AMD had a DTS in its chips as early as the Athlon 64, but only now is it "endorsed" by AMD. The author sees nothing wrong with the temps reported by the DTS in current chips, so I don't see why you do either.

It's true that the temperatures are calculated differently, but that doesn't mean one is wholly less accurate than the other, or that one will always be reported cooler than the other. The author makes no conclusion of the sort.

And speaking of evidence, where is your evidence that Core 2 Duos run hot?

...well let's compare the X2 3800+ and the C2D 6300
Any forum you read at all, the 6300 will go into the 50s and even 60s. Check here and notice the 20,000-post user (which is why I'll trust him) saying that stock everything with an E6300 will idle in the 50s and go into the 60s under load. The threads here and here also talk about 6300s idling in the high 40s to 50s and hitting the 60s under load!

My 3800+ is ATM running at stock everything, and temps have NEVER gotten beyond 45C, even under full load for hours. But don't take my word for it! Here a user says his 3800+ is running too hot...and his max temp is 48! That is too hot, for a 3800+! At Overclock.net, this user says his 3800+ is running at 55C maximum, and a user a couple of posts down says those temps are WAY too hot. My friend has a 6300 in his computer, which runs Ubuntu. His temps go into the 50s under load. Again, bone-stock cooling.

I mean come on, everyone here knows that a C2D will simply run hotter than a similar X2. Luckily most of us have decent cooling solutions.
 
The Intel push-pin heatsink retention mechanism is sometimes hard to get seated perfectly, as people here on this forum have demonstrated. A less-than-optimal fit between the CPU lid and the bottom of the heatsink will cause very high temps. Also, Intel's cooler is smaller and lighter than AMD's; maybe that has something to do with it too. Somebody needs to do a test where the same third-party heatsink is used on both an X2 and a C2D and the temps are compared. That would tell us *roughly* how hot the CPUs run relative to each other, if the temperatures were collected from a specific point in or on the heatsink in both cases (an integrated HS thermal diode?).

My X2 4200+ Manchester at stock speeds never gets above 41 C under load, even when running in a warm room and with the CPU HSF spinning at 60% of full speed, which is about 1800 rpm. I think a bit of that is because I remapped the CnQ FID/VID points to undervolt the chip to 4200+ EE levels. I could try to set it lower than the 1.25 V it is right now, but it's rock-solid so I might just leave it there.