joefriday :
Yes, but it's not the capacitors, it's the MOSFETs that are burning out. Simply too much current draw for a 3-phase VR circuit. Interesting that we didn't see this same issue come up during the short-lived D805 overclocking fad.
There are a few reasons we didn't see this happen with Pentium D 805s but we do with Phenoms:
1. All of Intel's CPUs at that time drew quite a bit of power, so even a crappy board designed for low-end chips still had to be able to support 115 watts or more of CPU power draw for P4 Prescotts. This meant that *all* boards had beefy power supply circuitry. However, low-end and midrange AMD chips are 45-65 watts, so an inexpensive board targeted at low-end and midrange chips does not need to support the current draw associated with triple-digit wattage. So if you put a chip that does draw that much into one of those boards...it might just fry a system that was never designed for it.
2. Voltages on a stock Pentium D 800 are around 1.4 volts while voltages on Phenoms are ~1.2 volts. This means that for an equal power draw, the Pentium Ds are pulling less current than the Phenoms, and high current is what blows electrical parts: wires, MOSFETs, etc.
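The arithmetic behind point 2 is just I = P / V: for the same wattage, a lower core voltage means more amps through the VRM. A quick sketch (the 100 W load is an illustrative number, not a measured figure):

```python
# Current draw at equal power but different core voltages.
# The 100 W load is illustrative, not a measured value.

def current_draw(power_w: float, voltage_v: float) -> float:
    """I = P / V: amps the VRM must deliver for a given power and voltage."""
    return power_w / voltage_v

pentium_d_amps = current_draw(100.0, 1.4)  # ~71 A at 1.4 V
phenom_amps = current_draw(100.0, 1.2)     # ~83 A at 1.2 V

print(f"Pentium D @ 1.4 V: {pentium_d_amps:.1f} A")
print(f"Phenom    @ 1.2 V: {phenom_amps:.1f} A")
```

Roughly 12 extra amps through the same MOSFETs for the same 100 W, which is exactly the kind of margin a cheap 3-phase VRM doesn't have.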
3. Pentium D 805s had locked multipliers, requiring high bus speeds to overclock heavily, whereas Phenom BEs have unlocked multipliers and do not require high bus speeds to overclock. It takes a good board designed for overclocking to reach high bus speeds and such boards have heavy-duty VRMs. The result was that people didn't overclock Pentium D 805s with $80 motherboards with small VRMs, they put them into $200+ boards with very large ones to do that. If you were to get a low-end 945 board that supported overclocking and put a Pentium D 840EE in it and goosed it using the multiplier, I betcha you'd see some blue smoke just like you do with a Phenom BE in a low-end board doing the same thing.
amdfangirl :
Intel's Pentium D was never that high in TDP though... but still, this popping occurs mainly on 780G boards... and some mATX NVIDIA boards... must be cheaper manufacturers...
The Pentium Ds had a maximum rated TDP of 130 watts, so it's only a 10-watt difference in rated TDPs. The 45 nm Intel Core 2 Quad QX9775 has a 150-watt TDP. Big deal: they all make chips that are rated to need roughly as much cooling and power supply as the others. Also, most boards with the 780G or NVIDIA GeForce 8000 chipsets are not designed to be enthusiast boards; they are home/office or HTPC boards for the most part.
yomamafor1 :
Uh... no. Server chips are rated no different than desktop chips or mobile chips.
They are. AMD gives two figures for their server chips but only the TDP for the desktop units. The server chips have both a TDP and an ACP, where the ACP is an "average" figure of power usage. Note that a 75 W ACP corresponds to a 95 W TDP, and an ACP of about 100 W corresponds to a 125-watt TDP.
A Phenom 9950 running at 3.6 GHz (calculated) will likely consume 264 W; at 4.0 GHz it will likely consume 337 W.
Comparatively, a Q6600 running at 3.6 GHz consumes 175 W; at 4.0 GHz it consumes 192 W. You do the math.
Can you list the voltages you used in those calculations? I am curious.
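The reason the voltages matter is that the usual back-of-the-envelope estimate scales dynamic power linearly with frequency and with the square of core voltage, so the assumed overclocking voltage dominates the result. A sketch of that formula with purely hypothetical numbers (these are not the figures yomamafor1 used):

```python
# CMOS dynamic power scaling estimate: P2 = P1 * (f2/f1) * (V2/V1)^2
# Base power, clocks, and voltages below are assumptions for illustration,
# not the actual figures behind the quoted 264 W / 337 W numbers.

def scaled_power(p_base_w: float, f_base_ghz: float, v_base: float,
                 f_new_ghz: float, v_new: float) -> float:
    """Estimate power at a new frequency/voltage from a known baseline."""
    return p_base_w * (f_new_ghz / f_base_ghz) * (v_new / v_base) ** 2

# Hypothetical: a 95 W chip at 2.6 GHz / 1.25 V pushed to 3.6 GHz / 1.45 V
print(f"{scaled_power(95.0, 2.6, 1.25, 3.6, 1.45):.0f} W")  # ~177 W
```

Bumping the assumed voltage from 1.45 V to 1.55 V in that example adds roughly 25 W all by itself, which is why the voltage assumptions behind any "calculated" figure are worth asking about.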
By the way, TDP is the power required to cool the chip, not the power consumption.
It is the manufacturer's figure for how much cooling capacity a given chip needs for its intended usage to not thermally damage itself. Actual maximum power consumption figures are given later in the processor specification sheets. AMD's TDPs are their Vcore_max * Icc_max (maximum power) for all of the chips I've looked at while Intel's are some different figure.
Also, the power consumed by a CPU equals the quantity of heat it dissipates, just as in all ICs.
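If AMD's desktop TDPs really are Vcore_max * Icc_max as described above, the arithmetic is trivial. The voltage and current figures below are hypothetical, chosen only to show how a 125 W rating could fall out of the datasheet maxima:

```python
# AMD-style TDP as described above: maximum core voltage times maximum
# core current. The 1.25 V / 100 A figures are hypothetical, not taken
# from any specific processor datasheet.

def amd_style_tdp(vcore_max_v: float, icc_max_a: float) -> float:
    """TDP estimate as Vcore_max * Icc_max (watts)."""
    return vcore_max_v * icc_max_a

print(f"{amd_style_tdp(1.25, 100.0):.0f} W")  # 125 W
```

Note this is a worst-case electrical envelope, which is why actual measured power draw usually comes in well under the rated TDP.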
kassler :
Today I spent some time with a friend who is running Linux on a C2D. We were installing some applications in a VMware session running Windows XP, but it took some time. So he wanted to show Quake Wars on Linux. We switched to the main Linux installation and started the game. It was slooooooooooooooow. After 5 minutes the computer crashed.
I use VMware every day, sometimes two different sessions while surfing in Windows. It runs very smoothly, no problem whatsoever. OK, I have a Phenom (quad) 9750. The difference is like night and day.
I call shenanigans. If your Linux system with the C2D ran slowly and then crashed when running ETQW, it's almost certainly not the CPU that is at fault. It sounds like a slow GPU causing the slowness during the game and a buggy GPU driver locking up X. I've seen that quite a bit before in people's systems, but I have never seen a modern, fast CPU that wasn't overheating and throttling, or damaged by overclocking, run very slowly like that and then all of a sudden hang. You could also look at the RAM usage: maybe he had little RAM in the box, tried to stuff ETQW's 1.4 GB memory footprint into it, caused swapping, and then the system hung as the OS ran out of memory.
darksupreme :
And yet it had to be the CPU. It couldn't have been a bad install or, dare I say, the Linux core? It is possible your "friend" rewrote the Linux core to better suit him and messed up a line of code that caused that specific operation to fail. Maybe it forced the CPU to try to divide by 0. And I do hope you know you cannot divide by 0.....
A bad install or data corruption (typically due to hardware errors; Linux filesystems are pretty corruption-resistant) could explain issues, but it is not likely that the friend "rewrote the Linux core to better suit him and messed up a line of code that caused that specific operation to fail." If he did mess with the kernel, library, or other code, he would have had to recompile it to get that version loaded and causing trouble. Anybody who has programmed on anything a tenth as complex as the Linux kernel knows that changing something willy-nilly will 99% of the time result in a failure to compile, so you never build the modified code into any binaries. An obvious constant divide-by-zero will at least draw a compile-time warning from GCC, by the way, and runtime tools like Valgrind are what catch instability-causing bugs like double frees and illegal memory writes.
So according to kassler, since I have a C2Q and it doesn't multitask well, I should not be able to surf the web, listen to music, download files, transfer files on my HDD to other areas, and encode music, all while installing and uninstalling programs and playing TF2 (especially since TF2 is based on Source and very CPU-bound). But I do that. On a daily basis. And TF2 still runs at oh.....150 FPS average, easily.
The OS you choose has more of an effect on how well a computer handles multitasking than the brand of CPU. For example, my 6-year-old P4-M laptop that is now my hacked-together HTPC (er, HT-laptop) running Debian Lenny can do all of the things you listed above except for playing Team Fortress 2 at 150 FPS, and do them reasonably well. However, that is a more painful experience under Windows, as Windows tends to lock up when some process has the CPU and doesn't let go until the process is finished. Ditto with my X2 4200+ desktop, C2D U7500 notebook, and the Q6600 I use at work. The multi-core processors remove a bunch of the lagging/lock-up behavior of Windows under heavy load, but it still isn't as smooth as under Linux. I can tell the difference between the single-core P4-M and the multi-core machines as far as under-load smoothness is concerned, but you could put me on any of the multi-core machines, clock them the same, and I couldn't tell them apart without looking. The OS doesn't care whose name is on the CPU, only about the number of cores and how much bandwidth the cores have. The FSB on Intel notebook and desktop systems does not bottleneck the CPU, so they are all the same to the OS.