[citation][nom]Pinhedd[/nom]Overclocking absolutely does introduce instability. It takes the processor outside of its specification and outside of the testing parameters used by the manufacturer, and that puts additional strain on connected components. Many errors that occur due to overclocking might never be noticed because they're corrected automatically.

To make matters worse, the VID programmed into the processor at manufacturing time leaves enough voltage headroom to guarantee stability for the duration of the warranty and ideally a long time after. Most overclocking guides have the user find the lowest voltage that is stable for a narrow range of tests. Over time that voltage becomes insufficient, and at that point it's not a matter of whether an error will occur but when. Stability can usually be restored by simply bumping the voltage up, but that is a constant game of cat and mouse.

Overclocking is also a complete no-go for enterprise and business computing, which is where Bulldozer is supposed to excel. Opterons are barely used at all outside of large supercomputers. Right now Intel has a larger share of the server and workstation market than it does of the desktop market.

They could tell if you overclocked the CPU if they really wanted to. Comparing the output frequency of a PLL against a reference frequency burned into the CPU at manufacturing would be a trivial component to design. Committing warranty fraud is frowned upon.

You're right, it is held back by extremely poor design implementation. If it were so easy to implement properly, then why wasn't it implemented properly in the first place?

The execution resources aren't shared between the clusters in each module. Each module has its own front end and back end. If one cluster is disabled, those resources are not made available to the other cluster in the module. The only parts shared between the clusters in a module are the L2 cache, the L1 instruction cache, and a pair of gimped FPUs. Bulldozer's CMT implementation is not the same as Intel's SMT implementation: with one SMT thread disabled, the entire core's execution resources can still be used, because only the front end is duplicated; with a CMT cluster disabled, the back-end resources dedicated to that cluster are lost as well.

The thermal conductivity of the oxides used in the insulator layer is extremely low. In a microprocessor the current flows almost entirely across the surface of the chip, so it's in very close proximity to the highly conductive IHS. This is one of the primary reasons chips are two-dimensional: current semiconductor technology does not allow stacked semiconductors to be cooled properly.

Disabling half the cores may significantly reduce heat dissipation, but it only eliminates some of the heat-generating components. The rest will still generate heat, and that heat will spread to the IHS, where it can dissipate more effectively.

Phenom was supposed to beat the QX9650 into the ground; the X4 9600 got demolished by a Q6600. Phenom II's flagship 980 was supposed to beat Sandy Bridge but could barely keep up with midrange Nehalem processors. Bulldozer can barely keep up with Phenom II in some applications. I'm starting to see a pattern here.

I'm not sure how you can say that "Bulldozer can be worked to compete with Ivy Bridge exceptionally well" when there's a load of benchmarks, both synthetic and real-world, showing otherwise.

Piledriver is a nice improvement, but it's what Bulldozer should have been released as. AMD needs to continuously deliver gains that are larger than Intel's; if they do not, the gap will continue to widen.[/citation]
No, it doesn't. A properly overclocked CPU can be just as stable as one at stock. Overclocking only introduces instability when it's done improperly.
I've already explained how Bulldozer can be made highly competitive with Intel's current CPUs. You're obviously not blind, because you replied to me. I don't care what benchmarks of Bulldozer in a different context show. That's like judging Radeon 7000 cards by how they performed with their original drivers when Catalyst 12.7 and 12.8 are now out and show great improvements, and the Bulldozer gains from the methods I've described are even larger.
I never said that the execution resources are shared; those are what I'm saying to disable. Have you read anything that I've said? The front end is shared, and that is where a big part of the performance boost comes from.
I'll reiterate: take a Bulldozer FX-8120 or FX-8150 and disable one core per module. The average per-Hz performance increase is likely to be between 25% and 35%. I get this from benchmarks of mere thread-scheduling improvements, where a workload was pinned to one core per module and performance increased by 10-20% (usually closer to 20%) despite a frequency disadvantage, and that was scheduling alone rather than fully disabling the cores. Then overclock the CPU/NB frequency; that raises the L3 cache frequency, and cache speed is a big part of a CPU's performance.
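For what it's worth, here's a minimal sketch of the core-disabling step on Linux, assuming cpu0/1, cpu2/3, cpu4/5, and cpu6/7 are the module pairs on an FX-8150 (that pairing is an assumption; check /sys/devices/system/cpu/cpuN/topology/ on your own box first, or just do it in the BIOS instead):

[code]
# Hypothetical sketch: take the second core of each Bulldozer module
# offline via sysfs (run as root). The 1/3/5/7 pairing is an assumed
# module layout; verify it on your own system before relying on it.
for n in (1, 3, 5, 7):
    with open(f"/sys/devices/system/cpu/cpu{n}/online", "w") as f:
        f.write("0")  # write "1" to bring a core back online
[/code]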
Just bringing the CPU/NB up to the CPU frequency means you now have a full-speed L3 cache like Intel's. At that point you're competitive in stock CPU performance with the more expensive Sandy Bridge and Ivy Bridge i5s, and you haven't even raised the CPU frequency. There is enough headroom to stay competitive even when overclocked to the max, and this is an architecture designed for high clocks, so no, it is fine when overclocked. It is one of the most stably overclocking CPU generations ever, and CPU overclocking is already not unsafe when you know how to do it. Instability and other such problems come from mistakes no enthusiast should make, such as pushing the frequency too far for the voltage or pushing the voltage too far.
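To put a rough number on the cache side of that (illustrative arithmetic only; 2.2 GHz is the FX-8150's stock CPU/NB clock as I understand it, and the L3 runs at that clock):

[code]
# Illustrative arithmetic, not a benchmark: matching the CPU/NB (and
# therefore the L3) clock to the core clock on an FX-8150.
stock_nb_ghz = 2.2   # assumed stock CPU/NB clock
core_ghz = 3.6       # FX-8150 base core clock
print(f"L3 clock increase: {core_ghz / stock_nb_ghz - 1:.0%}")  # ~64%
[/code]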
Like it or not, AMD is more competitive than most people seem to think. They don't win on power consumption, but with these changes they do pretty well there too.