Q6600 isn't real quad?



Question being, did the Q6600/etc. use DDR2-1066 too? My theory is that the extra bandwidth is doing the Phenom's HT good... Could you link us to it?
 


Sure are throwing out a lot of excuses today, aren't we? :lol:

DDR2-1066 is faster than DDR2-800. You're correct on that much.

http://www.digit-life.com/articles3/mainboard/ddr2-yorkfield-phenom-page1.html

I can tell you there's no big "conspiracy" by "the man" who's trying to keep you down and fool you into buying stuff. Numbers are data and data doesn't lie. Phenom can keep up with a Q6600 sometimes. They're -good enough- if you need something on the cheap.
 


Yes it looks like they used the same RAM in all the tests. The Q6600 generally beats the 9850BE handily in the "games" and "applications" categories. The 2 procs are very close in 3D rendering and video encoding, with each winning its share of benchmarks. Finally, the 9850BE destroys the Q6600 in WinRAR and all of the memory benchmarks.

For media encoding or for a HTPC the phenom is certainly a good choice. For most other uses in a desktop environment the Q6600 looks like the better option.

I don't think this is really earth-shattering news to anyone.


I agree with amdfangirl that AMD's video division is certainly in a good position in the current market, and ATI's success has definitely been good news for consumers of both NVidia and ATI cards alike. The 9800 has already dropped in price to try to compete with the 4850. I'm looking forward to seeing what performance the 4870 brings. At the $300 price point it certainly looks hard to beat, especially with the option of CrossFire on AMD and Intel chipsets.
 

I'm not quite sure how that is a problem if they never plan to take the architecture beyond the six core Dunnington Xeons, especially as they're already phasing out the Core arch next year before they go into the more heavy multicore design. I am fairly sure that most processors have several areas of weakness, but you seem to be referring to something very specific, which is interesting. Is the performance of the architecture significantly hindered in its current form, not just if it was theoretically extended to double digits and beyond, and is this reflected in any popular programs in use?
 


I have made no excuses. I have pointed out a discrepancy... not a conspiracy. You can choose to ignore the discrepancy. Most people will anyway. (But ignoring something doesn't make something else true.)





Here is the problem. You say "handily beat" but the benchmark data does not agree with you. Most of the benchmark results are too close to declare a victor.


EXAMPLE: Let's look at an unimportant game benchmark:

(I say unimportant because in my opinion games are the least important benchmarks. But for some reason people feel compelled to judge the performance of a multi-core CPU based on single threaded games. Not exactly the brightest thing in the world to do... but such is life. But it can be used to illustrate my point.)


One game in Tom's review shows:
Q6600 -- 110.00 FPS
9850 --- 104.1 FPS
DIFFERENCE: 5%

NOW: Immediately here is a problem. With results as shown above many people will stop and claim that one chip beat the other in this benchmark. That would not be an entirely accurate statement since the scores are within a range of standard deviation. In other words the scores are tied. Of course some people won't understand that and will still claim there is a winner and a loser. (And I'm sure some will read this review and post that viewpoint. Yet again. But it still won't make them correct.)

But even if you could only see in black and white and will not accept that the scores fall within the range of a standard deviation; you would not use the terms "handily beat" or "wiped out" unless you were trolling for a response on a forum.


Now let us look at another website's scores for the same game. They used a lower resolution so the scores are a bit higher.
(BUT regardless the delta should be close to the same.)
Q6600 -- 195.77
9850 --- 177.85
DIFFERENCE: 9% <<-- The delta is 4% larger.

The delta is 4% larger than the previous benchmark's delta: Very probably due to this reviewer using DDR2-800 ram. Of course there is no easy way to test that hypothesis at this time. If we want to trust Tom's review then we can ascertain that a 5% delta is more accurate than a 9% delta. If we don't want to trust Tom's review but we still want to be accurate then we have to discount the validity of this result set because we know of the discrepancy. (Unless you are biased. Then you will blindly accept it since it supports your argument. It still won't make you "correct".)

If you did not know that there was a discrepancy when the benchmark was executed... you would see that these numbers are outside the range of a standard deviation and you might incorrectly declare a winner and a loser. However, knowing about the discrepancy it would be wise to discount these results if you wish to actually know what is really going on.
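To make the arithmetic above explicit, here's a quick sketch (plain Python; the FPS figures are copied straight from the two reviews quoted above):

```python
# Deltas from the two reviews quoted above, expressed as a percentage
# relative to the faster chip's score.
def pct_delta(fast, slow):
    return (fast - slow) / fast * 100

toms_delta = pct_delta(110.00, 104.1)     # review using DDR2-1066
other_delta = pct_delta(195.77, 177.85)   # review using DDR2-800

print(f"{toms_delta:.1f}%")   # ~5.4%
print(f"{other_delta:.1f}%")  # ~9.2%
print(f"gap between deltas: {other_delta - toms_delta:.1f} points")
```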

But many people will either not know there is a discrepancy or will ignore the fact because they don't care or are fanboys. (Just as some people will read what I have typed, not understand a word of it and write it all off as me attempting to "make excuses".) But as I said above I make no excuses. I point out data that is not accurate and skews the result sets. This problem is only compounded by the fact that there are additional sites that have also used less than optimal RAM speeds.

So the debate goes round and round. There is no winner. Or loser. (Except for a truthfully unbiased person that really wishes to know what is going on.) But as always... people will ignore those pesky little things called "facts" and tomorrow many posters on this forum will not choose to remember my explanation of why many of the reviews are not correct. (Just as many will look at this post, fail to read or understand it, and then say something that shows that they are completely oblivious.)
 
Sorry for the double post... but to be helpful:

TLDR version of my last post:


1. If a benchmark has a score within 5% it is a tie.

2. If another benchmark of the same application from a different reviewer has a difference of 9%-10% there is probably a problem.

3. You should figure out why.

4. If you find a probable reason then you might need to ditch the bad benchmark OR adjust the scores of that benchmark when you analyze them. (If you want good results.)


THE END
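If you wanted to mechanize those four steps, it might look something like this. (A sketch only: the 5% tie margin and the 3-point divergence threshold are just the rules of thumb above, not established statistics.)

```python
# Sanity-check two reviews of the same benchmark, per the four steps above.
def compare_reviews(scores_a, scores_b, tie_margin=5.0, max_divergence=3.0):
    """Each argument is a (cpu1_score, cpu2_score) pair from one review."""
    def delta(pair):
        hi, lo = max(pair), min(pair)
        return (hi - lo) / hi * 100  # gap relative to the faster score

    da, db = delta(scores_a), delta(scores_b)
    notes = []
    if da <= tie_margin:
        notes.append("review A: effectively a tie")
    if db <= tie_margin:
        notes.append("review B: effectively a tie")
    if abs(da - db) > max_divergence:
        notes.append("deltas diverge; check test setups (e.g. RAM speed)")
    return notes

print(compare_reviews((110.00, 104.1), (195.77, 177.85)))
```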
 
Good review sites generally run custom tests + FRAPS to avoid easy driver tweaks that can boost built-in performance test scores. Because of this the test results can vary a lot depending on the part of the game they used. That's why you can't directly compare FPS in two reviews from two different sites even though they may have tested with identical hardware/video settings.
 
1) You can call them statistically equal if you want, but there's a reason that the Q6600 beats out the Phenom 9850BE in almost every test out there - because it's the faster processor.

2) You shouldn't be surprised that there's a higher difference between the faster and slower CPU at lower resolution. When the resolution is different it's not the same benchmark, but thanks for showing that when putting more emphasis on the processor in the test, the benchmark shows the Q6600 ahead by an even larger margin.

3) It's pretty clear why the benchmarks show different results; they are different benchmarks.

4) Both benchmarks are valid.

Also, as Ogdin pointed out review sites use different clips from the game to run their benchmarks so clearly the results will be different.

You are obviously going to toss out any test that shows the Phenom to be an inferior chip so I'm not going to argue with you. I think a reasonable person who has read enough reviews and benchmarks knows the truth anyways. The Phenom is by no means the PoS chip that some people claim, but it's still lagging a bit behind the C2Q in most applications.

What was the point of this whole thread in the first place anyways?


 
keithlm wrote :

Now let us look at another website's scores for the same game. They used a lower resolution so the score are a bit higher.
(BUT regardless the delta should be close to the same.)
Q6600 -- 195.77
9850 --- 177.85
DIFFERENCE: 9% <<-- The delta is 4% larger.

The delta is 4% larger than the previous benchmark's delta: Very probably due to this reviewer using DDR2-800 ram. Of course there is no easy way to test that hypothesis at this time. If we want to trust Tom's review then we can ascertain that a 5% delta is more accurate than a 9% delta. If we don't want to trust Tom's review but we still want to be accurate then we have to discount the validity of this result set because we know of the discrepancy. (Unless you are biased. Then you will blindly accept it since it supports your argument. It still won't make you "correct".)


As anyone who has ever read a CPU comparison knows, as the resolution drops in game benchmarks the CPU becomes the limiting factor. If there is a difference in performance between two CPUs, it is magnified on the lower resolution runs.
 


Standard deviation is used when you are using statistical data to project numbers onto an entire population.

However, when you are looking at benchmark numbers, which are reproducible and absolute, you don't have to worry about standard deviations.


Bottom line, the processor got beat in that benchmark.
 


Nope. That means one lost and the other won. If there was a tie, there would be no difference. Why make it just 5%? Why not within 10%?

2. If another benchmark of the same application from a different reviewer has a difference of 9%-10% there is probably a problem.

Nope. Different builds can cause different results. If the same application shows the same winner, then there is no problem. You can blame the 4-5% difference in applications on different reasons, if you want. The simple fact is, both gained, and both showed the same results. If a third shows the same results, then you are just crying a river.

Let's look at two Crysis benchmark results. Both are running at 1024x768, with medium quality. One is using an 8800GT, while the other is using an 8800GTX. One is using DDR2-800 for the AMD system and DDR3-1066 for the Intel system. The other is using DDR2-1066 for both systems. So according to your own theory, the AMD system using only DDR2-800 should score over 9-10% lower, due to its handicap, right?

X-Bits Labs http://www.xbitlabs.com/articles/cpu/display/phenom-x4-9850_7.html#sect1
Q6600 - 57.46fps
9850BE - 46.13fps

Anandtech http://www.anandtech.com/showdoc.aspx?i=3272&p=12
Q6600 - 53.7fps
9850BE - 46fps

Doesn't look like the memory had any effect on the benchmark. Don't see any 9-10% difference that you claim happens. Maybe it's just the application, right?

Let's look at 3DMark06 test results.
CPU scores
X-bit Labs http://www.xbitlabs.com/articles/cpu/display/phenom-x4-9850_7.html#sect1
Q6600 - 3600
9850BE - 3490

THG http://www.tomshardware.com/reviews/amd-phenom-athlon,1918-28.html
Q6600 - 3507
9850BE - 3525

Hmm...such huge differences. Shame they are both using DDR2-1066 memory. So, where is your proof that DDR2-800 is a limiter in benchmarks?
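Running the quoted numbers through a quick script makes the point concrete: both Crysis runs pick the same winner, and the 3DMark gaps are tiny either way. (Figures copied from the X-bit, Anandtech and THG pages linked above.)

```python
# Per-benchmark winner and gap, using the scores quoted in this post.
results = {
    "Crysis (X-bit)":       {"Q6600": 57.46, "9850BE": 46.13},
    "Crysis (Anandtech)":   {"Q6600": 53.7,  "9850BE": 46.0},
    "3DMark06 CPU (X-bit)": {"Q6600": 3600,  "9850BE": 3490},
    "3DMark06 CPU (THG)":   {"Q6600": 3507,  "9850BE": 3525},
}

for test, scores in results.items():
    winner = max(scores, key=scores.get)
    hi, lo = max(scores.values()), min(scores.values())
    gap = (hi - lo) / hi * 100  # gap relative to the higher score
    print(f"{test}: {winner} ahead by {gap:.1f}%")
```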

3. You should figure out why.

Don't need to. That is why one shouldn't rely solely on one review site's benchmark results. That is why there are several sites, using different configurations, to show what you might expect to gain and/or lose.

4. If you find a probable reason then you might need to ditch the bad benchmark OR adjust the scores of that benchmark when you analyze at them. (If you want good results.)

In other words, if the benchmark doesn't meet what you like, complain about it, and find something that is different, and use that as an excuse to claim the results are invalid. Of course, this is without any solid proof of how the item found can be the reason.

I hope so.
 




The Q6600 doesn't handily beat the 9850 in games, as you can see here: http://www.guru3d.com/article/cpu-scaling-in-games-with-quad-core-processors/1
The Intel chip does start to pull away as you overclock though, but it's usually only noticeable at lower resolutions.

 


You also remove the FSB bottleneck. Higher resolutions use more memory, more communication with the GPU, and more traffic that goes through the FSB.
There aren't many people who play games at 1024x768 or lower.

You need to clock the FSB over 500 MHz just to get memory bandwidth closer to 10 GB/s. DDR2-1066 has a theoretical bandwidth of 17 GB/s. AMD with the IMC is able to get bandwidth over 13 GB/s, and memory doesn't need to compete with additional traffic. The FSB on Intel needs to handle memory traffic and traffic for the GPU.
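For reference, the theoretical peaks behind those figures work out like this. (Back-of-the-envelope only: these are peak numbers, while the 10 GB/s and 13 GB/s figures above are sustained, measured bandwidth, which always comes in below the peak.)

```python
# Theoretical peak bandwidth = base clock x transfers per clock x bus width.
def bus_gbps(mhz, transfers_per_clock, bytes_wide):
    return mhz * 1e6 * transfers_per_clock * bytes_wide / 1e9

# Intel FSB: quad-pumped, 64-bit (8 bytes) wide
print(f"FSB 1066: {bus_gbps(266, 4, 8):.1f} GB/s")   # ~8.5
print(f"FSB 1333: {bus_gbps(333, 4, 8):.1f} GB/s")   # ~10.7
# DDR2-1066: double data rate at 533 MHz, dual channel = 2 x 64-bit (16 bytes)
print(f"DDR2-1066 dual channel: {bus_gbps(533, 2, 16):.1f} GB/s")  # ~17.1
```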
 


You're flat out wrong on that one.

Every game I own offers 1024x768, new and old. In fact, every game I can think of, including Crysis, offers 800x600.


Once again, your assertion that the FSB on an Intel machine is saturated is not correct.
 


hmmmm... but if you play on those resolutions you don't need a quad 😉
 
lol? you don't

try to play crysis at those resolutions on a pentium4. i know what i'm saying. the fps difference between 1024 or 1280 isn't much with my 8600GT, because my cpu can't handle the games anymore. and i'm not talking about crysis, but DiRT and css, cod4, bioshock
 


Pentium 4 might have some problems yes!
 
Well, it seems there's a lot of people up here who aren't knowledgeable enough about computers to recognize a quad core processor from a package.

Intel gave the Q6600 the "Core 2 Quad" designation for a reason. The chip has two dual cores placed side by side. This is called a PACKAGE or a Multi-Chip Module (MCM). It is not quad core.

http://techgage.com/article/intel_core_2_quad_q6600/

Look at the illustration of the chip at the above link. You can clearly see.

The two dual cores communicate with each other via a narrow bottleneck. Which means it will never reach its full potential.

In order for a quad core claim to be justified, the chip must have all four cores on one die.

AMD's Barcelona and Phenom processors are quad core. Both chips have 4 reworked processors on a single die.
 


True. But I was talking about the difference between the delta or amount of change in the FPS for both CPUs at each resolution. (The difference in the difference.)

The performance itself is dependent on the resolution... but the CHANGE in performance between CPU should not be extremely dependent on the resolution.

(It should remain fairly relative. Well... unless the CPU itself performs at a different level of efficiency dependent upon the game resolution. Which is not likely. A different GPU could be a factor... but in my example it was the same GPU.)




I did not claim to expect a 9-10% difference for every application.

I was talking about the difference between the deltas on ONLY the one review being used in my example. Attempting to apply those numbers to a different review would be pointless since not every application is affected by memory speed. (Like the example you point out.)

 


Um, yes it will, since it is an older architecture. Just like an Athlon X2 3200 will have problems. Heck, people with older CPUs are having lower FPS in TF2 since Source is a very CPU-dependent engine, and that's why I wonder why most sites do not use it.

Keithlm, you are talking mainly about your experience in servers and databases. Um yea we all know AMD PWNs there in 2P+ easily.

As for what kessler keeps saying, OK, you're not getting the point. When you put it at a lower res, that takes the GPU out of the equation, which means the CPU is doing all of the processing. So it will need to access the memory to grab the info that is stored there for the game. This means it still has to use the FSB to communicate with the memory. Also, on a C2Q, the 2 dies communicate across the FSB as well.

So if these things are true then it should saturate the FSB like you keep saying, right? Well then, if so, AMD's chips should do better than Intel's, right? Hmmm... I guess I must not be seeing it.

Bottom line is that the FSB does not get saturated. I have yet to have a game be outperformed by an AMD chip over my Intel chip that uses the same everything else, meaning GPU and memory. Until then Intel's FSB works fine.

God I can't wait till Nehalem comes out and people can stop trying to use the crap AMD feeds them about the FSB being saturated as truth.
 


Hmm... last time I checked, a processor is defined by the package. Yes, a C2Q is an MCM, but it is still a quad core PROCESSOR due to it being in one package. Now Phenom is a "native" quad core, meaning it's not an MCM but all 4 cores on a single die.

Seriously man. C2Q is a quad core using MCM; Phenom is a native quad.
 

Not when you are playing singlethreaded games or at low resolutions.

Why do you need PCI Express 2.0 x16 ?

Racedriver GRID scales to 4 cores, there are a lot of games that will be out soon that are threaded and need more memory on the GPU card.


 



WRONG.

The CPU is the bottleneck at lower resolutions. The GPU is the bottleneck at high.

Gaming 101 dude. One zero one.



You're saying anything and making up your own facts to defend your favorite company, AMD.
 


No, the monitor will bottleneck at low resolutions. Most monitors only show 60 frames per second. They can't show ~276 fps.