Q6600 isn't real quad?



Look, it's not about saying anything good about AMD or Intel. It's about facts. For a fact, I can tell you that a Q6600 multitasks very well. How do I know, you may ask? I have one. I run many programs in parallel. This includes games, web browsers (lots of those), music players, audio encoders, and FRAPS as well.

One time I decided to test my Q6600. It was mainly to see it jump from the 6x multiplier to the 9x multiplier, to see it at 3GHz in CPU-Z, but still, this was what I ran: 3 copies of Eudemons (basic, but still running), 2 copies of 2Moons, 1 of Rappelz Epic, 7-10 IE 7.0 windows, WMP, DivX encoding a 200MB file, and my antivirus software.

Here was my result: not only did it not jump from 2GHz to 3GHz, it never once slowed down at all. Intel is just as good at multitasking as AMD, and that's a fact, since I have one. To top it all off, I was running Vista and only had 2GB of memory at the time.

In truth, Phenom is outperformed by C2Q in everything except memory-sensitive benchmarks/programs.

And TC, I always love your attitude. But in the end there is no changing people's minds. Some will like AMD no matter what and others will like Intel no matter what. The rest will like the best bang for their buck.

Oh, and keithlm, why are you not waiting for a 4870? I can tell you that GDDR5 is supposed to use less power, and since it will be clocked much higher on the GPU and about 2x higher on the memory, you will get a much better system for your buck than with a 4850.
 


Really? Of course we know that. Except the best silicon goes towards server chips.

The reason why AMD chips scale better in MP when there are more than 2 chips is the IMC. But that's the problem with your argument as well: in a normal desktop environment, the IMC does not make a difference, since we use single-socket systems.

The other major flaw in your argument is that server apps are very memory-bandwidth hungry. They actually use that extra bandwidth that AMD gives them, whereas in the desktop market most apps don't need that much.

There is no correlation between the server and desktop markets to even begin to compare them. So in short: in the desktop market, Intel has the best performance; in MP servers, it's AMD. Why is it that people who like AMD find that hard to believe? I love it especially when they don't own an Intel chip and don't sit down right in front of one every day using it.
 

If you are surfing the internet, you are using a very memory-intensive application (the browser). It is also threaded.
 

As long as the processor doesn't need to communicate with the "outside", Intel is better. The C2Q is even better than Nehalem then. But when it needs to communicate with the outside, it loses speed.
 


You obviously are very new and don't remember the Pentium 4/D Prescott days. During that time AMD had the Athlon X2s and the FX series, and they were the talk of all the sites, which stated how great they were, this and that. The P4/D was always talked about for how hot it ran and how much power it drew.

In fact, there was only one Pentium D CPU that got a decent review about how nice it was. Wait, let me correct myself: two. The Pentium D 805, since you could OC the living crap out of it and compete with AMD's high end for about $150, and the Cedar Mill Pentium Ds. But those came pretty much right on the verge of Core 2.

Now stop changing your argument and posting crap that means nothing. nVidia does help game companies optimize their games to run as well as they can on nVidia GPUs. ATI does too, or at least used to. Heck, HL2 still runs best on ATI GPUs. And yes, I have an ATI card, the HD2900 Pro 1GB, which I play TF2 on every day; I can easily get about 150FPS, and that's with a Q6600.
 


Say what? I am using IE 7.0, and it uses 72MB, but that's mainly due to its Aero interface. I fail to see how it's memory hungry. You failed to see the fact that I was running a boatload of IE 7.0 windows with all those other things going in the background, without it jittering once.



The outside? You mean accessing memory? I am truly sorry, and I am not sure how TC even put up with you for so long.

You fail to see that I have a Q6600. Do you have one? Do you use it every day from 3PM PST till 10PM PST? Until you have a Q6600 and can truly say you have used it and worked with it, don't talk about how well it multitasks. I can, because I have one, and I will tell you it is multitasking very well right now with my IE 7.0, Steam, and WMP open, without taking so much as a break.
 


What I said was that you need to read carefully and not believe everything. Intel, AMD/ATI, nVidia, etc. are companies. Companies aren't charity organizations. Sites that publish reviews aren't charity organizations either.
I have been a software developer for over 15 years.

 

If your IE doesn't cache data, then it will be slower, because the hard drive is much slower than memory. The operating system will cache some file accesses even if the application doesn't.
The cache on the processor isn't 72MB, and IE isn't a 72MB application.

I am running Firefox now; it is using over 200MB.
 


Do you even know what a typical server runs?

Your sentence makes me laugh. Out loud.
 



In all the organizations I've worked in there is a very different workload between servers and desktops.


I won't discredit AMD's server products; especially in 4P and 8P, they do very well there. However, their desktop parts can best be described as underwhelming.
 

There's nothing stopping you from documenting and distributing your own unbiased benchmarks. If you really wish to do something, why not do that? It'd be good to see more of the forumers taking benchmarks into their own hands to support their points, especially if you think that existing benchmarks are wrong. Just gather what you can and share it; it'd make your points look so much stronger.

Because forming sentences without the aid of structure or grammar, not to mention a gross love affair with exclamation marks, is certainly not making you look any less biased. Why are you breaking into a ranting tirade? Can't you make your point without shouting random comments? I mean, who on earth mentioned Nvidia? You're a pretty random person. You moan about reviewers being biased and handicapping AMD, but acting like some sort of propaganda robot is somehow better behaviour? You have spirit and will, you want to support AMD, and there is nothing wrong with that. But you can channel that energy and enthusiasm not just into spinning sentences and making random statements, but into actually demonstrating factual superiority: things that you found out yourself. It looks to me as if you're just trying to troll, but if this isn't your intent, then maybe there are better ways to spread your ideas and convince other people to agree with them? It just strikes me as odd, very odd.
 

I'm not understanding. How exactly will certain applications increase the benefit of Vista being NUMA-aware?

You make another interesting point: diminishing returns and management overhead. You can't continually increase the number of threads a processor will execute or a program will be spread across; the overhead of keeping them all coordinated in real time poses quite a challenge and eventually, at some point depending on the nature of the application and the hardware architecture, starts to decrease performance rather than increase it. But what do you mean by "on Intel"? As far as I am aware, that diminishing-returns aspect applies across the board, not just to one company. Are you simply saying a particular architecture or processor family is at fault, or is there something about the entire company's product range that is somehow debilitating?
 


Oh, you mean kind of like the fact that you and several others can't understand that your definition of "winning" is not the same as everyone else's?

People point to benchmarks that not everyone agrees upon and claim a "victory". In order for there to be a "winner" everyone would have to agree on the validity of those benchmarks. But the validity of those benchmarks is in question. But many people, such as you, still come here and claim that you are "right" and everyone that does not agree with you "can't or won't understand". Can't or won't understand what? That you want to quote questionable benchmark results and force me to admit that they are correct when I do not accept their testing methodologies?

Only someone with a major bias or a complete lack of maturity would attempt to force their opinion on others. And let us be VERY clear: that is EXACTLY what we are talking about. Opinion. Attempting to force your opinion on others so you can feel better with your choice will never be mentally healthy.

 



I'm sure you were talking about the validity of benchmarks back when AMD had the performance crown, right?

Excuses are like assholes; everybody's got one, and they all stink.

Stop making excuses for AMD; Phenom falls short, period. Budget quad core? Yes. Better/faster than Intel? No.
 


You bet your butt I was questioning the results of benchmarks in those days.

I've been questioning the lack of being able to actually test for better multi-tasking since the early 80's. I started questioning benchmarks because Motorola chips were ALWAYS better than the x86 architecture when it came to multi-tasking. The Intel chips were ALWAYS gimped and ran like crap because they had to try to maintain compatibility with older chips. It made for awful performance. But of course the benchmarks didn't show that. So even then there were Intel fanboys telling us how it was all our imagination. Of course that was on Compuserve, and local BBS back then. (Compuserve: $12.00/hour with a 1200 baud modem. $6.00 with a 300 baud modem.)

And YES: There were tools to multi-task or at least attempt to try to make it appear that the system was multi-tasking. Deskview? Multiview? I don't remember all of the things I've paid hundreds of dollars on from decades ago.

And I don't have to make excuses. The C2Q of the same frequency is NOT faster than the Phenom like some people claim. That is a fact that some people just can't seem to get into their heads. Faster? No... it's pretty much a tie. But sadly most people don't see that even when their precious benchmarks actually even show it. They choose to ignore simple things called "facts". Can Intel run games faster? I don't personally CARE. Can Intel run a database and business objects server faster? NO... Java apps run better on AMD.
 



Thanks for a polite reply, even though I wasn't so polite at first.

Phenom and Core 2 Quad are neck and neck at the same frequency. However, Core 2 is sold at, and overclocks to much higher frequencies.

Remember when AMD made the original Athlon processors, and at that point frequency no longer mattered? Guess what! Frequency still doesn't matter. Intel's processors are faster. Just because you want to underclock them to run at AMD speeds doesn't prove anything.

Technology enthusiasts no longer look at frequencies. We look at benchmarks and price points. AMD does all right there due to aggressive pricing, but has NO OFFERINGS beyond the $200 mark because its product is inferior to Intel's and can't compete with Intel's upper-mid-range and high-end desktop parts.
 


Hey I get rude sometimes. [:keithlm] (I was rude to Badtrip. I even thought about apologizing. Then I reviewed some of his posts and realized I needn't bother.)


So saw me. [:keithlm:1] (Ooops... I meant Sue.)


BUT You are correct. AMD doesn't have anything to compete at the Q9450 level and up.

Lower than that level... they DO compete... and very often succeed. I think one of the main issues on these forums is the trolls that come in and blindly claim superiority over all. The very benchmarks whose validity I question no longer support their particular viewpoint. But I never accepted those benchmarks anyway... why should I start now just to support a lesser stance? In my opinion the benchmarks were invalid before... and they are still invalid. Even if they support my opinion.

ANYWAY: many of us are hoping for some really good stability fixes with the new southbridge coming out soon. We could see 3.8GHz and maybe even 4.0GHz overclocks. (With 3.4GHz and 3.6GHz being the "norm" for air cooling.) That would be VERY good. First, because then there will be a whole new round of interesting discussions on these and other forums. Second, because competition is always good... it will make Intel lower their prices... and Nehalem might not be sky-high in price when it is released.

And that would be with the current 65nm Phenoms. If the new southbridge works as many hope AND they release a 9950 @ 2.66GHz, then AMD will be competing with everything that Intel has at the mid price point. And the benchmarks will very likely show it. (And I will STILL question the validity of the benchmarks even if they do.)

I'll be getting one of the new motherboards with the SB750. And maybe 2x4850. Someone asked earlier why not 4870? Well perhaps 1 of those OR 2x4850... depends on the performance, price, and performance per price. But I think 2x4850 might be a better choice than 1x4870... since there will be twice the number of universal shaders. I might try my hand at doing some GPGPU programming. I'm tired of Java and would like to go back to nice straightforward C programming. Maybe for FreeBSD.

With the new AMD SB750 southbridge and the new 4850 and 4870 video cards... along with a new 9950 Phenom @ 2.66GHz... AMD could be poised to actually start doing well. Regardless of the AMD doomsayers' posts. (Which I'm sure this post will attract some wrath from.)

Oh. And even if AMD changed and suddenly became 200% faster than Intel... I would STILL not feel good about benchmarks that are done with a handicap to one side or the other. Plus, I personally don't think that at this point in time ANY "professional" should include more than ONE single-threaded benchmark in a multi-core CPU review. Pick one, just to show single-thread performance.
 
You first claim that you don't accept benchmark results for Q9450 and higher, but then claim that the lower models of AMD CPUs compete and even succeed against Intel's lower model of CPUs.

Based on what? The same benchmarks you claim are invalid? You can't have your cake and eat it too. Either you are totally against all benchmarks, or you agree with them. You cannot just cherry pick the ones that you like.

So, if you believe that every benchmark is invalid, then run your own set of benchmarks, and prove them all wrong. If your claim of using inferior memory is your only reason, take a look at THG's own review of Phenom X3 and X4 against Intel's CPUs. Same memory...same results in benchmarks.

http://www.tomshardware.com/reviews/amd-phenom-athlon,1918-13.html

So, you can claim that review sites are "handicapping" AMD all you like, but even with the same memory, the results are the same. What handicap are you going to claim that THG did?
 


That is actually one of the benchmarks I've seen that uses DDR2-1066 for the Phenom. And guess what? This review doesn't make the Phenom look bad. The results see-saw back and forth between the Phenom 9850 and the Q6600/Q6700. There is no definitive winner or loser in this particular benchmark. The results are basically equal. (I'm not sure what you mean by "same memory... same results".)

BUT most of the other popular review sites use DDR2-800 for their Phenom reviews. Their end results are dramatically different. And it is those "questionable" benchmarks that most people will refer to when they want to declare a winner and/or a loser. They will ignore THG's results because they do not show what they want to illustrate. (And they will ridicule you if you do not agree that the other sites are "correct".)

AS FOR PERSONALLY BENCHMARKING: I don't really need to. I have had experiences with 2P dual core Xeon and 2P dual core Opteron workstations. Since I often run the same applications on my personal machine: I'll take the CPU that I personally have more positive personal experiences with. I MUST mention that the Xeons didn't work badly, they just did not work as well as the Opterons. People will claim that the C2D/Q and Phenom are not the same... but they are a lot closer than people realize.
 
But what do you mean by "on Intel"? As far as I am aware, that diminishing-returns aspect applies across the board, not just to one company. Are you simply referring to a particular architecture or processor family at fault
Yes, it is the architecture that Intel has.
There are compilers today that support generating threads and scaling workloads without needing superprogrammers who can do this by hand. The Intel architecture (Core processors) handles that type of code badly because the cores don't communicate well. Small threads, or threads that share memory, need more communication. Small threads are not that difficult to do; large threads that don't share memory and can run standalone are very difficult and take much planning to create if the functionality depends on other events that are also running.
OpenMP ( http://openmp.org/wp/about-openmp/ ) is one interface that is used for parallel programming.

 



LOL, I'd argue about where that level is, but I think we can call it a day on the circular arguments, as neither of us is going to change our minds!

Thanks for the healthy debate Keith and Kassler.
 


Yup, I'll go with blindly agreeing with whatever TC says 'cause I don't feel like reading...



It seems that AMD will dominate the graphics scene for the moment. Look at it this way: AMD's die size is a lot smaller than Nvidia's, but they perform similarly in many tests. With the smaller die, AMD is winning even if Nvidia is holding the performance crown. Since both companies use TSMC as a fab, we'll assume they get similar prices from it. A larger die costs more to make, and AMD has the superior process technology anyway, reducing costs further. Think about it this way, too: since Nvidia's GTX-whatever creates more heat, a better fan is required, further increasing the cost.

Therefore I conclude that AMD is in a better position (for the time being), as AMD can easily lower prices to a point not possible for Nvidia with its more expensive, similar-performing cards...

...but... there is still one factor left... nvidia's CUDA... that will take time to implement... sit back and watch a new age of consumer cheap-ness...

Somebody distill this into proper, logical English...