Q6600 isn't real quad?


Again, I won't dispute the Opteron's memory bandwidth superiority over the Xeon, and its benefit for multi-socket server applications.

However, this has nothing to do with either a Core 2 Quad or a Phenom, and unless you are using a server at home, it is irrelevant to the comparison.
 


I give up. They keep using that. It's like you say "desktop" and they hear "4P server". So I take it that kessler thinks that because the FSB does not scale well in the server market, desktops will be affected as well. Whatever. I give up. They just don't like Intel and think AMD is the roxor no matter what.
 
This is like a person who keeps sticking their hand in the fire over and over again. I actually feel bad for these fanboys. I feel the same way about them that I do about the mentally handicapped/challenged.
 
LOL.

So, now you are using a die-hard AMD fanboy as a source to prove you are correct? Hell, why not link Sharikook's blog too? Or Scientia's blog? Or just link AMDZone? At least link an active blog. Abinstien hasn't written anything since April, and your links are from Aug. 2007 and April 2008. Oh, and can you tell us what the configuration was for each of the systems that Mr. No-Bias used for his tests in the April 2008 link? Uh huh. So he runs his own tests with nothing to show what each system has installed, and claims this and that. Well, anyone can do that too, I suppose.

Again, everyone has said that in server (that is SERVER) the FSB does get saturated, and is affected (and is also why Opteron is much superior in memory intensive applications over Xeon), but you have yet to show one benchmark showing your claim that a desktop (that is DESKTOP) system's FSB gets saturated.

And I am still waiting for that desktop system benchmark link that will saturate my Quad and/or Dual core's FSB, too.
 


Can't we do it the other way around? I think that would be much more interesting. You say that the FSB will not be saturated on desktops. Then can you say how much the FSB can handle, if you know that it isn't a problem on desktops?
 


Can you show a test that sends a lot of data and I/O so there are some numbers? I don't think that games played at 1024x768 are something that could be used to measure that. Do you?

Also, how does this problem show up in tests counting FPS, if the FSB is bottlenecked? How do you see it in the numbers?

 
The point you keep missing, Assler, is that under normal conditions, desktop applications such as games will NOT saturate the FSB. Therefore, Intel has a clear advantage in DESKTOP applications.

In server applications, the FSB DOES get saturated because server apps are much more I/O intensive. Servers are required to do tasks that desktops can't... that is why we have SERVERS and DESKTOPS.

Yes, you are correct in saying that Opteron scales better than Xeon. You may even be correct in saying that Phenom would scale better than Core 2 in the same situation... however DESKTOPS are not the same as SERVERS. Even if Phenom does scale better the way Opteron does... you will NEVER notice it on the DESKTOP.
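To put some rough numbers behind the "desktops don't saturate the FSB" point, here is a back-of-envelope sketch. The bus figures are the Q6600's published specs; the game traffic figure is purely my own assumption for illustration, not a measurement:

```python
# Back-of-envelope FSB headroom estimate (illustrative only).
# A Core 2 Quad Q6600's FSB runs at a 266 MHz base clock, quad-pumped
# to ~1066 MT/s, over a 64-bit (8-byte) data bus.
transfers_per_sec = 1066e6      # 1066 MT/s
bus_width_bytes = 8             # 64-bit bus
fsb_bandwidth = transfers_per_sec * bus_width_bytes  # bytes/sec (~8.5 GB/s)

# Hypothetical desktop game workload: say 300 MB/s of sustained bus
# traffic (texture streaming, game state). This figure is an assumption
# for the sake of the example, not a benchmark result.
game_traffic = 300e6

print(f"FSB peak:    {fsb_bandwidth / 1e9:.1f} GB/s")
print(f"Game load:   {game_traffic / 1e9:.1f} GB/s")
print(f"Utilization: {100 * game_traffic / fsb_bandwidth:.0f}%")
```

Even if the assumed game traffic were off by several times, it would still sit far below the bus's peak, which is the whole desktop-vs-server argument in miniature.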

Do you get it now? Or do we need to get a jackhammer to get through the concrete bunker you call a skull?
 


Don't games use a lot of I/O?
 


Obviously, not as much as server apps, otherwise Phenom would benchmark better than Intel in games... which we all know it doesn't. Your reasoning is flawed.
 

Do you have any test to show this?
 


Why is it so common for gamers to OC their computers if the FSB isn't saturated? The processor isn't the limiting factor when the resolution goes up a bit.
 


Stop what? Pointing out facts just because some people don't like them?

Here is the reality of this situation: The companies have gone down different paths to get to the SAME PLACE.

One refined their core design and are now putting together a new architecture. (But are currently selling the refined core in the old architecture.)

The other put together a new architecture and are now refining their core design.

<soapbox>
As a personal choice I must say that it is much more logical to have the architecture or framework in place first. When you build a building you put the framework in before you do the finishing work. If you try to do the finishing work and then insert the frame afterwards... you are going to run into difficulties. (Ever have to run cable into a finished basement because the builders didn't bother thinking about it?)
</soapbox>

Concerning the recent discussions... the fact is chips from both companies do multi-task. But they do not do it equally. One has the architecture in place that is already proven to be better at multi-tasking. The other is putting that architecture into place at this time. This is not "FUD" as some like to claim... but a harsh reality that some don't want to accept. But just because this is a fact, that does not mean that the older architecture cannot multi-task. And nobody has claimed that it can't. The older architecture might be good enough for some people; they have to make that choice. (I personally chose not to accept the soon-to-be-abandoned architecture.)


Personal Diatribe Regarding Benchmarks:

Actually many people don't understand that benchmarks aren't done to present a "winner" or a "loser". Benchmarks are done to show you relative performance. You must take a bunch of benchmarks and consider all of the results before you can decide which one shows better relative results on the average.

(Some of the "benchmarks" you see in reviews are actually suites of multiple benchmarks that attempt to do this automatically so that there is less room for error. And when reading a review, you should consider these scores differently than you would a single FPS score from a game, for example.)

CURRENTLY: The benchmarks show that the chips that are currently available compete with each other when comparing clock and price. You can buy more expensive chips if you desire... but they don't compete in the same bracket. To make it even more complicated, there may be discrepancies in how the systems were set up that would affect the average relative performance differences. But even if you want to ignore that... the benchmarks still show similar relative performance.

So when results are looked at or analyzed as a "whole" you can get an idea of relative performance. In this case the chips are close enough in relative performance that there is no "winner" or "loser". (Unless you demand to accept single results as absolutes. Which some people will do if it supports their opinion. But I'd call that a "lose".)
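One common way to roll a pile of benchmark results into a single relative-performance number is the geometric mean of per-test ratios. The scores below are invented purely to illustrate the idea:

```python
import math

# Hypothetical per-benchmark scores (higher is better) for two chips.
# These numbers are made up for illustration, not real results.
chip_a = {"game_fps": 120, "encode": 95, "archive": 80, "render": 110}
chip_b = {"game_fps": 110, "encode": 100, "archive": 92, "render": 105}

# Ratio of A to B on each test, then the geometric mean of the ratios.
ratios = [chip_a[t] / chip_b[t] for t in chip_a]
geomean = math.prod(ratios) ** (1 / len(ratios))

print(f"A vs B overall: {geomean:.3f}x")
# A value near 1.0 means the chips trade blows rather than one
# "winning" outright -- exactly the situation described above.
```

Each chip wins two of the four made-up tests, and the geometric mean lands near 1.0, which is the "no winner or loser" case in a single number.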

 


Because gamers like to squeeze every bit of performance out of a CPU. They're not content to run everything at stock when they can overclock to a more expensive CPU's level. Also, overclocking the CPU isn't going to do much to alleviate FSB saturation anyway... if the FSB is going to be saturated, it will be saturated at any clock speed.
 

If you read about the Intel technology you would understand how wrong you are. The speed of the FSB is vital for performance on Intel computers. Intel is doing a lot to increase it, and you could ask almost anyone who overclocks their FSB; they will tell you they notice a performance gain immediately if they run demanding games.
This effect is not at all as strong on AMD computers. AMD has huge I/O performance without overclocking, as everyone who has written in this thread has confirmed. Overclocking an AMD will of course be noticed in performance tests or when doing very CPU-intensive tasks.
 


You forgot to add, "Just because there is no proof to back up this 'proven' fact that one CPU multitasks better, I know it is true because it has a green label on it and I have some kind of irrational loyalty towards CPUs with green labels on them."

Actually many people don't understand that benchmarks aren't done to present a "winner" or a "loser". Benchmarks are done to show you relative performance. You must take a bunch of benchmarks and consider all of the results before you can decide which one shows better relative results on the average.

Indeed, some might even say that you'd take a bunch of benchmarks and consider all of the results and pick a "winner" and "loser" between the 2 products.

Perhaps we should follow the trend these days and apply to computer hardware rules similar to what some little league and soccer programs have started doing; don't keep score and everyone can go home a winner.


Opterons generally "win" in high IO server applications, especially when scaled to 4P or higher.

Core2Quad generally "wins" in desktop applications.
 


Overclocking the FSB is how one overclocks a non-extreme Intel CPU. People do it to improve the speed at which the CPU operates, not because the FSB is saturated.
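For anyone following along, the arithmetic behind this is simple: core clock = FSB base clock x multiplier, and on non-Extreme chips the multiplier is locked, so the FSB is the only lever. Stock Q6600 figures below; the overclock target is just a common example:

```python
# Core clock on these Intel chips = FSB base clock x multiplier.
# Q6600 stock: 266 MHz FSB base (quad-pumped to ~1066 MT/s), 9x multiplier.
fsb_mhz = 266
multiplier = 9                      # locked upward on non-Extreme chips

print(fsb_mhz * multiplier)         # 2394 -> the stock ~2.4 GHz

# Raising the FSB base clock raises the core clock proportionally:
oc_fsb_mhz = 333                    # example overclock target
print(oc_fsb_mhz * multiplier)      # 2997 -> ~3.0 GHz
```

Note this also shows why the two things get conflated: overclocking the CPU this way necessarily raises FSB bandwidth too, even though bandwidth isn't the reason people do it.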


Do you get it now? Or do we need to get a jackhammer to get through the concrete bunker you call a skull?

Zoron, please dust off that jackhammer. I think it's steel-reinforced concrete. I really don't think Baron Massler here knows what he's talking about.
 
They ARE NOT FACTS. Just BS from people like you, kassler, baronmatrix/thunderman, or whatever other AMD loon is out there.

AMDs kick the hell out of Intel in WinRAR. What other desktop tasks besides memory benchmarks does the IMC and "true" quad core put a spanking on Intel in? Or what desktop applications take performance hits due to the FSB and the double-cheeseburger quad? Where is Intel getting hurt on the desktop with their arch...? AMDs handle multitasking better when you have more CPUs/sockets in servers. The more you add, the better they do. On the desktop it's a load of BS that fanboys spew out.

All that talk about Intels not multitasking is just bullcrap. You know it, I know it. We all know it. If I were to swap out your system for a Q6600 you would not notice the difference in multitasking. Encode, virus scan, Photoshop, CAD, play a game, do whatever. It's all in your AMD-biased heads. Talking about how other people may not care about the performance hit or not notice it is just more BS.

If anything it makes AMD look more like crap. Intel, with their ancient crap FSB and double-whopper cheapest quad core, is neck and neck with AMD's top-of-the-line 125W native IMC CPU on the DESKTOP. From the way you clowns keep yapping about Intel's faults, one would think it was the old NetBurst days with Intel getting destroyed.
 
Ok... I was thinking overclocking by upping the multiplier... I forgot that you need an Extreme CPU to do that.

So you're saying that if clock speeds are equal and if the Intel hasn't been overclocked, the AMD will beat the Intel? If that's what you're saying, then I am the one who will need to see some tests to back up YOUR claims. And please, NO SERVER BENCHMARKS! Show some gaming benchmarks to back up your claims.
 


Why did they develop Nehalem?