Q6600 isn't a real quad?



Do you think you will continue to use the Intel when games start to scale to more cores? Do you know how much Intel degrades when there is some real scaling?
CAD and rendering it may handle well; it depends on how hard Intel will push new development technology to use threads that take advantage of their Nehalem.

 



Dude. You had a really big dose of green Kool-Aid this morning.

No one has ever said that IMC is only good for server. However, attaching an IMC to a **** core will get **** results, even if you and Kassler distort the truth until you are blue in the face.
 

Here's a hint:

I have a core 2 quad. I am in college. I use matlab and solidworks for some of my classes. These (especially matlab) scale quite well to far more than 4 cores. I also play FSX. This also scales to more than 4 cores. Yes, I would take a core 2 quad over a phenom for this application, because as TC said, even though there are benefits to an IMC, you need a halfway decent CPU core first, and the phenom just cannot deliver in that department. You can sit there and claim the Phenom scales better until you are blue in the face, but from my experience, it simply isn't true.

Trust me, I would run a Phenom if it were the best available within my budget at the time I built the system, but the Core is simply faster right now. If AMD comes out with something better, I'll be happy to switch, as unlike some people, I do not go by brand, but by performance (heck, I own both a 3870 and an 8800GTX, for example). The simple fact is, though, that AMD has blown it for now, and does not appear to be on track for anything better before Nehalem (which actually disappoints me, as if AMD had something halfway competitive with Nehalem at the time of launch, all CPUs from both companies would be cheaper).
 

Somehow I think they will take on a different color. Nice one though.
 


Perhaps you did not say that... but it is a VERY common excuse used often by Intel fanboys. On this very forum.
(I guess the blue jello you had today clouds your mind.)



If AMD took a Phenom and removed all but the L1 cache... and INTEL took a C2Q and removed all but the L1 cache... HOW do you think the two would compare in benchmarks?

We DO know that the current Intel design depends on a large cache to gain much of its performance. We also know that the AMD design is not as cache-dependent.

What would this reveal about the quality of the cores?

(You DO realize that this situation can be approximated using real hardware... it is not just a "thought experiment.")
 


Yes, MATLAB is one type of application that will run better on Intel. The reason is that it doesn't use memory: the calculation is just pure processor speed and the data will fit in the cache. It is also a controlled operation, which means the threads can be optimized and don't need to communicate. Communication between threads means data needs to be synchronized, and that isn't something Intel likes.
So you are right, Intel will perform better on MATLAB. As I said in the previous message, CAD is another type of application that will probably run faster on Intel: not that much memory, but a lot of calculation. Rendering too, and very complex encoding.

Games, on the other hand: if they move from running specific tasks in one thread to dividing those tasks across two threads, then they need to communicate much more between threads. That also means much more traffic to and from the cores on the processor, because you add the thread-to-thread talking on top of the two cores both fetching data from the RAM on the motherboard.
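To put the idea in rough code (a toy Python sketch of my own, not anything from a real game engine or MATLAB; the workload sizes and round counts are made up purely for illustration): independent workers barely talk to each other, while split-up tasks keep handing data back and forth.

```python
# Toy illustration: independent workers vs. workers that must keep
# exchanging data. Sizes and round counts are invented for illustration.
import time
from multiprocessing import Process, Queue

def independent_worker(n):
    # MATLAB-style number crunching: everything stays local to one core,
    # so there is almost no traffic between cores.
    total = 0
    for i in range(n):
        total += i * i
    return total

def chatty_worker(inbox, outbox, rounds):
    # Game-style task split across threads: each step depends on data
    # produced by the other side, so results bounce back and forth.
    for _ in range(rounds):
        value = inbox.get()      # wait for the other side's result
        outbox.put(value + 1)    # hand our result back

if __name__ == "__main__":
    # Independent case: two processes, zero communication.
    t0 = time.time()
    procs = [Process(target=independent_worker, args=(2_000_000,)) for _ in range(2)]
    for p in procs: p.start()
    for p in procs: p.join()
    print("independent:", round(time.time() - t0, 3), "s")

    # Chatty case: every single step is a hand-off between the two sides.
    a_to_b, b_to_a = Queue(), Queue()
    rounds = 20_000
    p = Process(target=chatty_worker, args=(a_to_b, b_to_a, rounds))
    p.start()
    t0 = time.time()
    a_to_b.put(0)
    for _ in range(rounds - 1):
        a_to_b.put(b_to_a.get())  # each round forces a synchronization
    b_to_a.get()
    p.join()
    print("chatty:     ", round(time.time() - t0, 3), "s")
```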

 
Scale, scale, scale. Intels scale just fine on the DESKTOP. Ah, the hell with it; no matter what anybody says, these guys will keep drinking the Kool-Aid and spitting it up on the forums.
 


Isn't the real problem here that somebody says AMD does something better on the desktop compared to Intel? Intel doesn't win 100% on the desktop, and that might be problematic if you have taken that belief to heart.
 



I don't care. The fact is that they both have L1 cache.

I don't understand your need to make up imaginary situations. Let's benchmark reality. Reality is AMD's best quad competes with Intel's slowest, and everything else Intel has blows AMD out of the water.

However, that doesn't totally matter, because most people aren't going to drop much more than $200 on a processor. However, when it comes to financial statements (PROFITING), it will matter for AMD, because they are bargaining away their best products while Intel makes a hefty premium and sells its lowest-binning parts against AMD's top-binning ones.

AMD's in trouble, they totally screwed themselves over with the launch of K10.
 

That's odd...

Here I am, getting all these low memory errors and watching "matlab.exe" using up >2GB of memory in my task manager, and I thought that it was using memory. Glad that I had you to clear that up. Now I know better.

:sarcastic:
 
Kassler, just stop it. Nobody said anything like that. It's people like you who latch on to the things that AMD does win at and then run off at the mouth about everything Intel: the FSB is saturated, the IMC scales. And just spread FUD.

The FSB is not saturated on a single-socket system.
The Intels do scale fine. Look at the E6600 compared to a Q6600 in multithreaded applications.
The IMC is a plus, but it has been proven that what they have been doing with the FSB up until now has worked GREAT. Make any excuse you want: it has more cache, almost every application is built to favor Intel... blah blah.

You are nothing but a baronmatrix replacement. Or a thunderman that knows how to speak English and doesn't type AMD4Life.
 


Wow... you guys don't even try to pretend it's not a Mantra or chant anymore....

"Everyone repeat after me: The fsb is not saturated... The intel does scale fine... the FSB up until now has worked GREAT."

Keep chanting it... perhaps you might actually get more people to chant with you. (Then you might actually start believing it... even though it isn't the truth.)

And TC: I notice that even though the last line in my post tells you this can be re-created in a real benchmark to show how badly the MCM/FSB can perform when placed in a heavily multi-tasked situation, you had to pull the "I don't care that the things you're talking about are REAL... I only care about what I want to be real" card and attempt to play it. It's funny how so many of you live and die by every little benchmark result, even the ones that aren't that important, but when presented with a suggestion to improve benchmarks to actually reveal the deficiencies in the architecture, you guys put your heads in the sand and cry "not important."
 


Link the benchmark. If it is not a desktop application, but a server based benchmark, then you are just full of it. Go ahead. I will gladly run it on both my E6400 and QX9650, and see just how much my FSB saturates. Hell, I will run it while I keep my F@H running.

Keep chanting your own mantra: "It's good enough. It's good enough."

I'll be looking for that link to the benchmark.
 


Do you even realize what you have said? Nehalem is set to hit next year. We have had dual-core CPUs for what, 3, almost 4 years? How long has it taken for a game to truly use more than one core? Heck, to even utilize it to its full potential?

Seriously, it's like a Mac ad here with keithlm and Kassler. No matter what anyone can prove with benchmarks and results, they always pull the same lame excuses to say it's "superior".

If K10 had come out and blown Core 2 out of the wafer, then TC would still love AMD and you would all be praising it, and hell, I would say good job as well. But since it flipped and then flopped in performance and power usage (this must hurt the most, since that was their AMD64 claim to fame), and isn't priced low enough since they would lose too much, you have to sit there and defend them.

I don't give a rat's left behind about the server market. If I did, then I would be building servers with the best chip for that specific area. I care about what the benchmarks for the games I play and programs I use say. If they for some ungodly reason run better on Intel, then that's it!!!! There is no "Well, it's because Intel had them optimized for their chips" (which, by the way, is great, because that just means they will perform their best on my chip, and AMD should do the same) or "AMD's chip has a superior design and the IMC and the and and and and." AND NOTHING!!!!!!!

Just stop trying to use the same crap again and again. Especially when you are arguing with people who own the chips you are claiming are not that good and all you have are benchmarks and probably AMD fansite BS.

I will tell you the day my Q6600's FSB gets saturated. It probably won't happen for a damn long time, but when it does (unless I have a Nehalem by then) I will tell you. Then again, when Nehalem comes out, if it truly does perform well, you will find something that makes it not as good as it looks. It won't be the IMC or the "it's not true quad" BS, but you will find something, like you always do.
 



Again:

The problem with FSB right now is in multi socket server systems, especially quad socket. In a single socket system, every single benchmark shows the same thing: the FSB does not significantly handicap modern chips on modern apps. I will be the first one to snap up a Nehalem when it comes out, and I agree that a new memory controller is nice, but this is due to the potential for future apps, not any problem with current ones.

Honestly, why the ridiculous loyalty? I find it easier, less stressful, and often less expensive to pick based on my budget and current benchmarks rather than brand, which is exactly why despite using Nvidia for the past couple of years on my gaming systems, I will probably go ATI on the next one. As I said, I would absolutely love it if AMD came out with something amazing, but I just don't see it happening soon with their current record (though if they pull something out of their hat, I will gladly build a next gen Spider system instead of a Nehalem based setup when I build my next system around Christmas).
 


The Phenom has three or four independent L2 caches (one per core) and shares the L3 cache.

Besides, didn't you make a lot of hot air about Vista's NUMA awareness being a major advantage for the Phenom processor over Windows XP? That turned out to make squat all difference. How do we know all these visions you're making won't simply fall flat on their face as well? 4x4 was supposed to be great too, haven't seen anything from that in 18 months, guess that still has growth room for the future? I think AMD just realised it was a scrapheap and simply cancelled all development on the 4x4 platform. True potential indeed.

The Phenom does not use NUMA as it has a single bank of RAM, hence a unified memory arrangement. Only the multiple-socket, multiple-IMC, multiple-RAM-bank Opteron 2xxx and 8xxx use NUMA.



The numbers are not absolutely reproducible, and you DO have to worry some about standard deviation between benchmark runs. A well-set-up system should have minimal deviation between runs, but that is not always the case. I wonder how many runs of each benchmark get performed by the HW review sites and what the SD, mean, and median values are for each benchmark. I have a hunch it's only one run of each benchmark on many HW review sites.
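Just to show what I mean (a quick Python sketch; the FPS numbers below are invented purely to illustrate the calculation), this is all it would take to report the spread across runs:

```python
# Summarize repeated benchmark runs instead of reporting a single number.
# The run results below are invented purely to show the calculation.
import statistics

runs_fps = [87.2, 85.9, 88.1, 86.4, 87.0]  # e.g. five runs of the same benchmark

mean = statistics.mean(runs_fps)
median = statistics.median(runs_fps)
stdev = statistics.stdev(runs_fps)          # sample standard deviation

print(f"runs:   {len(runs_fps)}")
print(f"mean:   {mean:.1f} fps")
print(f"median: {median:.1f} fps")
print(f"stdev:  {stdev:.2f} fps ({100 * stdev / mean:.1f}% of the mean)")
```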



The OP of that comment probably meant to say "you do not need a quad to get playable framerates at those resolutions." Yes, the CPU is a bottleneck for top absolute performance at that resolution, but it may not be enough of a bottleneck to hinder gameplay.



I do: a Core 2 Duo U7500 running on a 133 MHz QDR FSB. Not many Core-based CPUs run on such a slow FSB, and no desktop units do, but a few mobile units do and run decently enough, apparently.



Hah! It takes 300 MB memory just to *start* the application on the Windows XP x86_64 machine I use at work. Let's just say that there is a very good reason that there is an x86_64 version of MATLAB and that I've run my machine with 8 GB RAM out of RAM several times working on some data sets. On a side note, that machine runs a Q6600 and I find its multitasking ability to be perfectly acceptable. I can run four instances of MATLAB to light up all four cores and it still chugs along nicely. The execution time of four identical calculations at once is pretty much the same as three, two, or one, so I think that my 1.5 GB data set per thread calculations both hit the memory hard and show that the FSB is not a bottleneck. We DO see a big bottleneck with some Xeon servers doing OpenMPI apps, where the scaling pretty much drops off all of a sudden in a "hit the brick wall" fashion but the Q6600 doesn't show this kind of behavior with what we're doing.
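If anyone wants to try roughly the same check at home, here is a stand-in for my "four instances at once" test (a Python sketch, not MATLAB; the array size is arbitrary and just needs to be big enough to spill well out of cache):

```python
# Rough stand-in for running N identical memory-heavy jobs at once and
# checking whether the per-job time grows as more cores get loaded.
# The array size is arbitrary; pick something well beyond the CPU caches.
import time
from multiprocessing import Pool

def memory_heavy_job(_):
    data = list(range(5_000_000))       # large enough to miss in cache
    start = time.time()
    total = sum(x * 3 for x in data)    # streams through the whole array
    return time.time() - start

if __name__ == "__main__":
    for n in (1, 2, 3, 4):
        with Pool(n) as pool:
            times = pool.map(memory_heavy_job, range(n))
        print(f"{n} job(s): avg {sum(times) / n:.2f} s per job")
    # If the average per-job time barely changes from 1 to 4 jobs, the
    # shared memory path (the FSB here) isn't the bottleneck for this load.
```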
 
Actually, based on the way most AMD people like to define "real" multi core, the C2D does indeed qualify. The split cache of the Quad is one of the reasons it supposedly doesn't.
 

I have not used MATLAB, but I know that it is an application used to calculate. Applications can use a lot of memory for other things, but not when it comes to the CPU-intensive tasks the application is used for. I just told you what type of operation Intel is good at and what it is not good at. If you don't understand, then skip it.


 
Keith, just STOP IT.

The Intels do scale GREAT for single-socket desktops, in just about every app that uses all the cores. It's a FACT. Again, take a look at the E6600 vs the Q6600. Or, God forbid, actually LISTEN to the people that have them. They run many things at once without problems. The FSB IS NOT saturated.

You are the one that keeps chanting FUD to support AMD no matter what.

Go look at how the Intels scale in FSX, photo/video editing, or Folding.

All I have ever seen you two guys do on here is what you are doing now: spreading FUD. Not helping people with builds or questions about which CPU is best for them (AMD or Intel, X2, C2D, C2Q). Just sitting in threads like this, spreading crap and talking nonsense. Just thundermans who try to use or make up facts/assumptions to support AMD. You're both fanboys/clowns, trolls, or whatever else you want to call it.

How anybody can be loyal to a frickin' company is beyond me. Reminds me of those Apple cult people that line up outside Apple stores to get the latest pile of junk when it comes out.
 


I support AMD... but still stay realistic when talking to people... I'll be biased in my own time... hehe I love my Core 2 Macbook!
 


I agree 100%. I buy whatever will perform best for what I use given my budget. I don't personally feel any loyalty to Intel, AMD, nvidia, or ATI.