BSer,
Thanks for the links. The LostCircuits article has some inaccuracies, such as this one:
The 955X Chipset
For the launch of the dual processor series, Intel has released a number of additional chipset spins that are pretty much based on the original 900 series design, albeit with minor variations. Flagship is the 955X chipset with support for DDR667 and HyperThreading, <b>followed by the 945 chipset that is limited to DDR533 and does not support HyperThreading</b>.
In fact, the 945 supports both DDR2-667 and HyperThreading.
I've found this benchmark useful because WME9 is what I encode with:
<A HREF="http://www.lostcircuits.com/cpu/p4_840/10.shtml" target="_new">http://www.lostcircuits.com/cpu/p4_840/10.shtml</A>
Intel does better single-processor than the AMD offerings, but look at the dual core. Kinda interesting benchmark. There I would have expected Intel to continue the trend, but it didn't. Still, the 820D should do well there, especially if OC'd to 840D speeds and beyond. I just don't see the benefit of spending extra here for the performance improvement of the X2. In any case, any dual core scenario beats any single core setup.
The Lightwave benchmark is just plain screwy and I'm not sure if it's valid; if it is, then in this particular area the X2 shines rather well. Now if I used Lightwave and that test was valid, the X2 would be the only way to go!
<A HREF="http://www.lostcircuits.com/cpu/p4_840/14.shtml" target="_new">Caligari TS5.1 Vases</A>
This is more applicable to what I use, but a much older version was tested in the benchmark; the newer versions multitask leaps and bounds better than TS5.1 did, even on my AMD system (TS will multithread across as many CPUs as you have available, even hundreds now):
<A HREF="http://www.lostcircuits.com/cpu/p4_840/18.shtml" target="_new">http://www.lostcircuits.com/cpu/p4_840/18.shtml</A>
In this case the 840D (if I OC the 820D to 3.2GHz it should be very similar) was 20% slower than the X2 4800+. I myself don't consider the additional cost worthwhile for the extra 20%. Now if I use faster timings on the RAM and OC the 820D higher than 3.2GHz (I expect to), then the difference will probably be moot and really not worth the additional cost for me. How would the X2 3800+ compare? I would give it a -3% to -5% for the smaller cache, minus another 20% for the speed difference from the X2 4800+, but once again I am sure that processor would OC as an option as well. Now that I think about it, with the smaller cache the X2 3800+ may even OC better, hmmmm. The X2 3800+ is definitely an option here.
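For what it's worth, stacking those guessed penalties multiplicatively rather than adding them gives a slightly less pessimistic number. A quick sketch (all percentages are my estimates above, not measurements):

```python
# Rough stacking of the guessed X2 3800+ penalties vs. the X2 4800+.
# Both numbers are estimates from the discussion above, not benchmarks.
cache_penalty = 0.04   # midpoint of the 3-5% guess for the halved L2 cache
clock_penalty = 0.20   # guessed hit for the lower clock speed

# Applying the penalties multiplicatively instead of adding them:
relative_perf = (1 - cache_penalty) * (1 - clock_penalty)
print(f"X2 3800+ at roughly {relative_perf:.0%} of an X2 4800+")
```

So by that back-of-envelope math the X2 3800+ lands at roughly 77% of the X2 4800+, stock vs. stock, before any overclocking.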
Now I have to eat some crow here: yes indeed, the Intel 840D EE uses 60W more power than the X2 4800+. Hot mother!!! Figuring 75% efficiency of the power supply, 60W/.75 = 80W more power from the Intel happy family at peak processor load :redface: . I wonder how much the Intel will consume OC'd to 3.6GHz???? How much does a 100W lightbulb cost running 120 hours a week???
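The wall-power and lightbulb math above works out like this. A quick sketch, assuming the 75% PSU efficiency from above and a hypothetical $0.10/kWh electricity rate:

```python
# Extra power drawn at the wall: the PSU is ~75% efficient, so the
# 60 W extra the CPU draws costs more than 60 W at the outlet.
cpu_delta_w = 60                    # 840D EE vs. X2 4800+, at peak
psu_efficiency = 0.75               # assumed PSU efficiency
wall_delta_w = cpu_delta_w / psu_efficiency
print(f"Extra wall power: {wall_delta_w:.0f} W")      # 80 W

# Cost of running that extra draw 120 hours a week, at an
# assumed (hypothetical) $0.10 per kWh.
hours_per_week = 120
rate_per_kwh = 0.10
weekly_cost = (wall_delta_w / 1000) * hours_per_week * rate_per_kwh
print(f"Extra cost per week: ${weekly_cost:.2f}")     # $0.96
```

So at those assumed numbers the extra draw is about a dollar a week, which is close to the 100W-lightbulb comparison.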
Now I found this very interesting from that article:
<b><font color=red>In the case of AMD's Athlon 64 X2, the dual core has one distinct disadvantage, that is, each core has essentially only access to a single channel of memory. That is, both cores have to go through the system request interface and a crossbar switch to get access to the memory and, in a situation where both cores simultaneously request data, this can lead to contention. . .
<A HREF="http://www.lostcircuits.com/cpu/p4_840/" target="_new">http://www.lostcircuits.com/cpu/p4_840/</A>
</font color=red></b>
This showed throughout the testing with the dual Opteron benchmarks included in this review. Maybe this explains why at AnandTech Intel won out more often than AMD in the multitasking scenarios, but at single tasks AMD just blew away Intel.
Now if Tom's Hardware could do a real test of the dual core platforms with 2-4GB of RAM (fast timings on both platforms) and do some rendering in the background and gaming in the foreground, that would be very interesting. Video encoding while rendering and playing a game. 64-bit testing as well.