Q6600 isn't real quad?

Kassler, are you kidding me with that FSB crap? There have been tests that show it's not a huge difference. People have taken Intel Extreme Editions. You know, the Intel equivalent of your beloved AMD Black Editions? They have unlocked multipliers, and people have tested all kinds of multiplier and FSB combos that get you to the same clock speed. Even people with regular ol' C2Ds or C2Qs have done so: lower the multiplier and overclock the FSB, then compare it to a higher multiplier with a lower FSB. You will NOT see a huge performance hit. It all ends up about the same for just about everything.

For somebody who sits in here and talks nonstop about all the technical reasons Intel doesn't do this and that, and server this and server that, you know nothing about it. Just another fanboy troll.

GO AWAY
 
Why did they make Nehalem? HAHAHAHA! For real, man, just go away. After all the server crap you keep bringing up, you ask that? That's amazing.

Let me see: how many cores are Intel CPUs going to go up to? And how many threads will they process at once, thanks to the new version of Hyper-Threading?

Nah, they just wanted to copy AMD's Barcelona, that's all.

AMD4Life!!!!!!!!
 


I'd assume the main reason they'd develop an IMC with Nehalem would be to try to match Barcelona's performance in the 4P-or-more, high-I/O server systems where AMD chips have been dominating.
 
I would also guess that it was to prevent a potential future problem with saturating the bus on more than just server apps. It could happen in the future, though the bus is nowhere near saturated right now.

As for overclocking? People use insane bus speeds because of a locked multiplier. If the multiplier were unlocked, it would make more sense in many cases to use a higher multiplier and keep the bus the same to overclock, as sometimes when using the bus to overclock, you will hit other limits well before the CPU is truly at its limit. For example, my work computer right now is at 437FSB (1750 effective)*8 for 3.5GHz. Why? Not because it needs anywhere near that (it's actually a duo, not a quad), but to get the core speed up. I would be equally happy with the stock 333FSB (1333 eff.), and a 10 or 11x to get a similar frequency, but since it is an E6750 (and therefore the multiplier is only adjustable in the range of 6x-8x), I have to go with the high bus speed instead.
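The arithmetic behind the example above is just core clock = base FSB × multiplier, with the "effective" FSB being the quad-pumped bus rate. A minimal sketch (note the "1750 effective" in the post is a rounding of 4 × 437 = 1748):

```python
# Core clock = base FSB (MHz) x multiplier.
def core_clock_mhz(fsb_mhz, multiplier):
    return fsb_mhz * multiplier

# Intel's FSB of this era is quad-pumped: effective rate = 4 x base clock.
def effective_fsb_mhz(fsb_mhz):
    return fsb_mhz * 4

print(core_clock_mhz(437, 8))    # 3496 MHz, i.e. the ~3.5 GHz quoted
print(effective_fsb_mhz(437))    # 1748 MHz (the post rounds this to 1750)
print(core_clock_mhz(333, 8))    # 2664 MHz, the E6750's stock ~2.66 GHz
```

This is why a locked low multiplier forces the high bus speeds: with only 6x-8x available, the bus clock is the only knob left for raising the core clock.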
 

Blah, blah, blah.

When confronted with a request for this elusive FSB-saturating benchmark for desktop systems, you keep running in circles about how this works and how that works. Well, if it's supposed to work the way YOU claim, then prove it. Give us a benchmark that will clearly show a saturated FSB on an Intel desktop system that won't show on an AMD desktop system. Period. We don't need to know how you are claiming this or that. Hell, I can read a trade tech manual or an engineering white paper, too.

Unless you can prove your point, that the FSB on an Intel desktop system will be saturated more than on an AMD desktop system, you are blowing smoke outta your a$$.

Why did they develop Nehalem?
HAHAHAHAHA! After all your lovely server memory application talk, you ask that? Seriously, you have to be either the biggest troll or (insert your own thought) here.
Hmm... let's see. Intel is getting killed in multi-socket servers due to the lack of an IMC, so they developed a product that will now close the gap in that arena. Good enough answer for you?

Concerning the recent discussions... the fact is chips from both companies do multi-task. But they do not do it equally. One has an architecture in place that is already proven to be better at multi-tasking. The other is putting that architecture into place at this time. This is not "FUD" as some like to claim... but a harsh reality that some don't want to accept. But just because this is a fact, that does not mean that the older architecture cannot multi-task. And nobody has claimed that it can't. The older architecture might be good enough for some people; they have to make that choice. (I personally chose not to accept the soon-to-be-abandoned architecture.)
Please, show us this difference in multi-tasking. Where exactly is your proof? The architecture is different, yes, but one product has been shown (in various places) to outperform the other in multiple benchmarks and applications. So, where exactly are you getting your evidence that one is better at multi-tasking?

It is FUD, unless you can prove it is not. It's not harsh reality, it's YOUR reality. This from the same person who discounts all benchmarks, yet claims one is better just because of an architecture? HA! That's great. The simple fact is that you cannot accept how one company's "glued on" solution can blow away the more "elegant" solution in almost every test they are compared in. So, who is not accepting reality?

Who ever said an older architecture cannot do multi-tasking? No one has said that. Why is it that you and kassler have to add extra things that were never mentioned?

So when results are looked at or analyzed as a "whole" you can get an idea of relative performance. In this case the chips are close enough in relative performance that there is no "winner" or "loser". (Unless you demand to accept single results as absolutes. Which some people will do if it supports their opinion. But I'd call that a "lose".)
So, it's okay to use benchmarks only if there is no "winner" or "loser". Hmmm... So, if one CPU, of xxx amount of speed at xxx amount of price, beats another CPU with the same amount of speed and price, it's not a "winner", just better at relative performance because "the chips are close enough"? No. It simply shows that one CPU is better than the other. That it "won" most benchmarks over the other. WON, as in winner of the benchmark. If CPU A wins a benchmark by 2 pts, it still won. If CPU A wins all benchmarks by 2 pts, it still won all the benchmarks. It doesn't matter if CPU B is close. It still LOST.

Take 100m runners. If one runs it faster than everyone else by 5 seconds, that runner is still not the winner, because the relative performance of all the others is "close enough"? HAHAHAHA! Riiiight. Seriously, how hard is that to really understand?

 


WRONG: If you traded my main work machine for a desktop, I would definitely notice.

I run a database server, a web server, a job server, a data integrator server, a network performance monitoring server, virus scanner, and I'm sure I missed 2 or 3 things that probably also have "server" in the name. But GUESS WHAT? This is my desktop computer.

I use a 1P dual core Opteron workstation. (We plan on moving to a quad soon... still only 1P.)

Last summer some yahoo manager decided we should try to switch this desktop workstation for a Xeon. That lasted for about 2 days before we switched back. The Intel Quad machine was awful. As in "forget it" type awful. This wasn't a situation where you would need a benchmark to see the difference. A developer in another group said they just tested a Phenom machine and it actually worked okay for this workload. But I don't think my manager will ever allow us to try one... not after last summer's Intel fiasco.

Some people do use the Xeons and have no problems with them. But they don't generally run as much stuff. Of course on this forum you'll get the people who will just start screaming the "but that's a server" garbage. Just because you run server software does not mean you are running a SERVER. Those "servers" I run on my workstation are for personal development use. The "real" servers are on remote 64-core machines. People get confused because the word "server" is used in the application name.

NOW: How does this filter down to my main home machine? It generally doesn't. But I'll take the architecture I know actually works better from personal experience. I don't really need to see a benchmark. Besides... it helps knowing that Intel is going to abandon the FSB and MCM. Perhaps they know a thing or two about the products that they sell.
 
By your logic, Assler, why did AMD develop Phenom? Why did they develop the Athlon or the Athlon 64? CPU manufacturers are constantly looking for and finding ways of increasing performance. The day a chip maker says "it's good enough" is the day we have one less chip maker. The only way these companies can continue to make money is to continue to innovate and release better and better products. If they sit on their *ss, they risk being overtaken by another company and losing out in the market.

Use that brain once in a while.
 


If you tried an AMD and felt the smoothness, you would understand what I mean.
 


You do know you're a one-trick pony, don't you? "Benchmark... benchmark... blah blah blah."

Benchmarks are not really that important. They can be a useful tool for making a relative comparison on some things. The problem is that often there are situations that will degrade a machine's performance below acceptable levels. These situations don't happen on other machines.

How is your precious benchmark going to show that? Oh.. that's right... it can't. Relying solely on benchmarks is foolish.
 


Don't go there... they would rather see high benchmark numbers than actually enjoy a better performing machine. It doesn't matter that you can easily notice the difference... because they can't measure it with their precious benchmarks.

(And they will fight you to the death on that issue. Just so that they can defend their obsolete and soon-to-be discarded architecture.)
 


Are you using your brain?
The point is that Nehalem is almost an exact copy of K10 (Phenom).

You say that Intel is much better, but if it is, why does Intel create a clone of Phenom? I don't think you can see the real answer to that.
 
Have fun gents....

I'm off now... I get to go and reload my home machine.

I just put in 2x 4850s last night... and now my 3DMark06 score has no problem keeping up with a Q9450's score. My score is a little bit less because they used a 9800x2... and the CPU is clocked 166 MHz faster... but hey, the fact that I'm within spitting distance is very promising. I got 14188 with a stock Phenom.

Oh.. and after I get done checking my relative score against others to make sure that the machine is performing about where it should be... I'll never benchmark again. Because benchmarks are basically useless.

(Useless: just like this conversation. But it is good for wasting hours while at work... worthless in the real world.)
 


Hmmm... I would rather take benchmark results from several sources than "But I'll take the architecture I know actually works better from personal experience" as a determining factor for choosing any product.
Let's see: your personal experience is telling people that AMD is better because....???? But benchmarks are telling people which CPU runs certain tasks (things people might use more than others) better. Hmmm... hard choices. Data that can be recreated, tested personally, and shown to be reliable, solid results, or someone's personal opinion based on their experience? Wow. You got me there.

Relying on someone's "personal experience" and very obvious bias is much more foolish.

One trick pony? HA. Until you prove my one trick pony wrong, why do I need more ponies?

If you tried an AMD and felt the smoothness, you would understand what I mean.
I have tried AMD before, and although I have no hate or dislike for their products, I just don't have a need to use them. People can use whatever CPU they want, can afford, or are given. That's their choice.
I haven't had an issue with my Intel systems. Not one. I can do what I need to do on all 4 of them without a hitch.
So, enjoy your AMD system. Sing its praises. Tell the world how great it is.
I will just continue to show the data and facts (from various sources, too) on both companies' CPUs, and let people decide for themselves.
 
No, I can see the answer. The IMC was exactly why the Athlon was so strong against the P4. You're still failing to explain why Intel performs better on the desktop than AMD. Until you can explain that, anything else you say is pointless. If Intel is beating AMD now... just wait until they have an IMC. Your precious Phenom will be crushed on the desktop even more than it is now.

You're still not using your brain. Why would I care if a CPU scales well when it is still beating the best the other CPU has to offer? When will scaling start to matter? Two or three years down the road? I'll have a newer, faster CPU by that time... so your point is moot. The point is, no matter what you buy today, there will always be something that outperforms it tomorrow.

And if you claim that Nehalem is an exact copy of Phenom, then you must not have a brain at all... let alone be using it.
 


For most of us it doesn't matter which processor you are using; it may matter more a year or so from now.

I myself need power. I work a lot with databases and run VMware (on two 30" monitors), so for me there isn't any hard decision.
 


Dang it... I'm trying to get out the door....

I see you didn't notice that I used the word "solely" in my sentence.

I did not say "relying on benchmarks is foolish".

I said "relying solely on benchmarks is foolish".

Major difference. You replied as if I had not used the word solely. (I suspect you would not have replied if you had noticed... since it does tend to slightly trivialize your argument.)
 


Ah, the grammar patrol is out now.
Okay...
It wasn't your personal experience that stood alone in the sentence, if you read it correctly.
Relying on someone's "personal experience" and very obvious bias is much more foolish.
It also has "very obvious bias" in it, which kind of makes the use of "solely" unnecessary. I wasn't referring only to your personal experience, but to your very obvious bias as well. You used "solely" because you were pointing out one thing; I was not.

It wasn't that I didn't see the "solely" in your remark; it just didn't fit my response. So, that should trivialize your attempt at whatever it was you were trying to prove.

edit -
I'm done here. No replies will be generated any longer.
 

I know you have a lot of benchmarks that are used to check the raw performance of the processor, like games executed at very low resolutions. But this isn't what I have been talking about. The problem is that you can have one processor running at 1000 GHz; if it sits and waits for memory, it doesn't matter. If there is a road that can't swallow all the cars, it will be jammed.
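The road analogy here is the standard memory-bound argument: runtime is set by whichever is slower, the cores or the memory path. A minimal roofline-style sketch (all numbers below are illustrative assumptions, not measurements of any real CPU):

```python
# Roofline-style estimate: runtime is bounded by the slower of raw
# compute and memory traffic. Numbers are illustrative only.
def runtime_seconds(flops, bytes_moved, peak_gflops, bandwidth_gbs):
    compute_time = flops / (peak_gflops * 1e9)   # seconds doing math
    memory_time = bytes_moved / (bandwidth_gbs * 1e9)  # seconds moving data
    return max(compute_time, memory_time)

# A hypothetical workload: 1 GFLOP of work, 8 GB of memory traffic,
# over an 8 GB/s bus. Doubling core speed changes nothing here:
slow_core = runtime_seconds(1e9, 8e9, peak_gflops=10, bandwidth_gbs=8)
fast_core = runtime_seconds(1e9, 8e9, peak_gflops=20, bandwidth_gbs=8)
print(slow_core, fast_core)  # both 1.0 s: memory-bound either way
```

Once `memory_time` dominates, a faster core just waits longer, which is the "jammed road" point; the dispute in the thread is whether desktop workloads of the day ever actually reached that regime.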
 
The thing is, not all the benchmarks have been low-res. You haven't shown anything, and yet nothing we provide is sufficient. Give data (on DESKTOP apps on DESKTOP CPUs), or stop complaining.

You are the one making the unproven and nonstandard claim, therefore you must provide the evidence in support of it, not the other way around.
 


Blogs are normally unreliable as they may have extreme biases.



You s'ppose he's a troll like Thunderman?
 

Yes I have, and I have explained. But it seems that some here have a problem understanding. Some of you think that servers are just multi-socket motherboards and that's why AMD is better. AMD is better when it comes to multi-socket motherboards, but it is also better when it comes to I/O for other parts (the GPU, for example), and it is better because it has different roads for different kinds of traffic. The roads aren't jammed as easily as they are on Intel.

The strange thing is that this type of performance seems to be totally unimportant to some of you. But when it comes to executing one single thread as fast as possible, it seems to be extremely important.