Q6600 isn't real quad?


Are all databases run on servers? Did you know that there are actually desktop applications that use databases, running right on the desktop?
AutoCAD, for instance, is not just a CAD application; it stores all of its data internally in a special database.
 

The thing is, even when you run database software on a single-socket system, the actual access pattern to the CPU is different than on a two-socket system, which in turn is different from a four-socket system. That is why Intel does better in two-socket than four-socket configurations, even running the exact same workload.
 

No, the thing is that AMD has faster memory transfers, with lower latency when moving data between the processor and memory. In the real world, though, the user will probably not notice any difference.
You can buy almost any processor today and it will run most of the applications out there.
Games are not built to scale across many small threads, so most of them can run on almost any processor as well. This will probably change. The new GPU cards from ATI have taken the market by storm, I think. Game developers can use more power there, and they will have a lot of potential buyers. This market was very small a year ago.
 

True (that AMD currently has better bandwidth/lower latency, not all of the above), but the bottleneck is not memory transfers in any current desktop app, with the possible exception of some compression software (WinRAR comes to mind). In games, the CPU core and GPU tend to be the bottlenecks, and memory is way down the list, which is why even DDR2-667 is adequate in most cases.
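If anyone wants to sanity-check whether an app is really memory-bound, it's easy enough to measure yourself. Here's a minimal sketch (my own toy, not STREAM or any real benchmark suite; the buffer size is just illustrative) that streams through a buffer far larger than any cache. If your app processes data slower than the figure this prints, main-memory bandwidth probably isn't what's holding it back:

```cpp
// Minimal memory-bandwidth microbenchmark (a rough sketch, not a rigorous
// tool). Streams through a buffer much larger than any cache, so every
// pass has to come from main memory.
#include <chrono>
#include <cstdint>
#include <cstdio>
#include <vector>

int main() {
    const std::size_t n = 256u * 1024 * 1024 / sizeof(std::uint64_t); // 256 MB
    std::vector<std::uint64_t> buf(n, 1);

    volatile std::uint64_t sink = 0; // keeps the compiler from removing the loop
    auto t0 = std::chrono::steady_clock::now();
    std::uint64_t sum = 0;
    for (std::size_t i = 0; i < n; ++i)
        sum += buf[i]; // sequential read: best case for the prefetcher
    auto t1 = std::chrono::steady_clock::now();
    sink = sum;
    (void)sink;

    double secs = std::chrono::duration<double>(t1 - t0).count();
    double gbps = (n * sizeof(std::uint64_t)) / secs / 1e9;
    std::printf("read bandwidth: %.2f GB/s\n", gbps);
    return 0;
}
```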
 

Good! I don't think that you are that stupid that you don't understand what I am saying.
 


I'm BAAAACKKK. And I had TEQUILA! BONUS!!!!

But I see we're still crying about immature applications that don't bother using the hardware that's already available, and about the current benchmarks. So we'll bitch about how new tech can't show any advantages. (Even though these apps don't challenge what we have NOW. Current processors look at the current average applications and chew them up for breakfast. Then ask for lunch because they're still hungry.)

The kessler guy did make a good point: applications, especially games, are written to use the most amount of cheap hardware that they possibly can. Even if Nehalem was released for the desktop today... and it was $100.00 for a 5.0GHz part... it would take the application developers two years to CATCH UP and actually use any advantages (other than raw speed).

So when the application developers REALLY start using the hardware... it is going to become very sad for Intel fans on their current hardware. But hey... rejoice... there will be a lot of you having the same problem at the same time! BONUS!

As some others have said... it is pointless trying to point out to you thick-headed people the problems that already exist and that will cause major headaches in the future, when the application developers actually catch up to the current hardware. (Oh look! This one benchmark, optimized back when the P4 Prescott was king, works much BETTER on my new, already-obsolete chip than it does on a newer chip designed for actual WORK. Oh... joy!)

Perhaps we need to realize that most applications are written for 2.8GHz-3.2GHz P4 machines. POP QUIZ: Raise your hand if you think any of those applications might challenge a newer dual- or quad-core processor.

What? Nobody?

So.. perhaps... it could be possible that none of those applications will challenge the MCM/FSB because they were written and optimized for that platform?

You mean newer applications that might actually stress the system? OH HEAVEN FORBID...

<insert maniacal laughing at the troll girls and boys that really honestly believe that the old tech is currently better and will last longer than the newer chips>

EDIT: I need more beer for the responses.
 
Actually, that would be inaccurate. When I was in the industry we were for the most part targeting what was expected to be cutting edge when the product shipped. Of course the presentation and game play was made to be acceptable on mainstream hardware by scaling back detail, but the bleeding-edge stuff was needed to get the most out of the software. Market research had shown us that it was the people with newer hardware who were purchasing the most software.

It's analogous to designing, developing and testing your product on XP, but making sure it runs on Win2k for those customers who haven't moved on yet.

We'll start to see more and more multi-core applications, but a multi-core aware application won't necessarily stress the system. In game programming specifically, tight loops that work mostly off the cache rule the day. That won't change in a multi-core environment. Pathfinding, AI and physics engines will be just as tight as in a single thread environment.
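To make that concrete, here's a rough sketch (purely illustrative, not from any shipping engine; all the names are made up) of the kind of tight, cache-friendly loop I mean: a physics integration pass over contiguous arrays.

```cpp
#include <cstdio>
#include <vector>

// Structure-of-arrays layout keeps each pass sequential in memory.
struct Particles {
    std::vector<float> x, y;    // positions
    std::vector<float> vx, vy;  // velocities
};

// One physics integration step: the archetypal tight game loop.
// After the first miss per cache line everything streams from the
// prefetcher; the working set is just a few contiguous arrays.
void integrate(Particles& p, float dt) {
    const std::size_t n = p.x.size();
    for (std::size_t i = 0; i < n; ++i) {
        p.x[i] += p.vx[i] * dt;
        p.y[i] += p.vy[i] * dt;
    }
}

int main() {
    Particles p;
    const std::size_t n = 100000;
    p.x.assign(n, 0.f);  p.y.assign(n, 0.f);
    p.vx.assign(n, 1.f); p.vy.assign(n, 0.5f);
    for (int frame = 0; frame < 60; ++frame)
        integrate(p, 1.0f / 60.0f);       // simulate one second at 60 fps
    std::printf("x[0] after 1s: %f\n", p.x[0]); // expect ~1.0
    return 0;
}
```

Splitting that loop across cores doesn't change the per-core access pattern, which is why multi-core awareness alone won't suddenly start stressing the memory subsystem.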
 
I think I see what Assler's issue is...

He's jealous that we all had the cash to purchase Intel quad core CPUs when most of us don't really need a quad core. He's trying to make us feel sad about our choice by showing us his cheaper chip scales better in multi-threaded applications. As most desktop apps are single-threaded anyway... the point, once again, is moot. Yes, we want to run a single thread as fast as humanly possible... because most of the programs we use (especially games) are SINGLE-THREADED. Is it sinking in yet?

When apps start taking advantage of multi-core CPUs and hell even 64-bit, that's when we'll start caring about how our DESKTOP chip scales. We'll all have upgraded to better CPUs anyway by the time that actually happens.
 

Don't games need a lot of synchronization? Create ~10 threads and there will be a lot of synchronizing between them.
All applications run mostly out of the cache, and a small cache already holds most of the hot data; the difference in hit rate between huge caches and small ones isn't that big. The problem with Intel systems is that they do almost everything they can to avoid traffic on the FSB, because the performance hit is very big whenever the processor needs to use it.
Games are also very I/O intensive, and I/O is another thing that hurts FSB performance badly.
Intel is one player working hard on compilers that scale to threads automatically, so the programmer doesn't need to be a super programmer; writing threaded applications is much harder than writing single-threaded ones. Also, a multithreaded application runs slower on a single-core processor, since switching from one thread to another takes extra time. Intel processors also don't like having their caches thrashed.
Create over 20 threads on Intel and you can take a coffee break before it finishes.
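The synchronization cost is easy to demonstrate with a toy program (my own sketch, illustrative numbers only): 20 threads fighting over one lock, versus the same work done on private counters that are combined once at the end.

```cpp
// Toy demonstration of synchronization cost. Compile with -pthread.
#include <chrono>
#include <cstdio>
#include <mutex>
#include <thread>
#include <vector>

int main() {
    const int threads = 20;      // the "over 20 threads" case
    const long iters = 200000;   // increments per thread

    // Version 1: every increment takes the shared lock.
    long shared = 0;
    std::mutex m;
    auto t0 = std::chrono::steady_clock::now();
    {
        std::vector<std::thread> pool;
        for (int t = 0; t < threads; ++t)
            pool.emplace_back([&] {
                for (long i = 0; i < iters; ++i) {
                    std::lock_guard<std::mutex> g(m);
                    ++shared;
                }
            });
        for (auto& th : pool) th.join();
    }
    auto t1 = std::chrono::steady_clock::now();

    // Version 2: private per-thread counters, one combine step at the end.
    std::vector<long> local(threads, 0);
    {
        std::vector<std::thread> pool;
        for (int t = 0; t < threads; ++t)
            pool.emplace_back([&, t] {
                for (long i = 0; i < iters; ++i) ++local[t];
            });
        for (auto& th : pool) th.join();
    }
    long combined = 0;
    for (long v : local) combined += v;
    auto t2 = std::chrono::steady_clock::now();

    std::printf("locked:  %ld in %.3fs\n", shared,
                std::chrono::duration<double>(t1 - t0).count());
    std::printf("private: %ld in %.3fs\n", combined,
                std::chrono::duration<double>(t2 - t1).count());
    return 0;
}
```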
 
Here is one processor test that shows how AMD performs better when the resolution gets higher or the game is more I/O intensive.
http://www.overclockersclub.com/reviews/intel_q9450/

That test was done with an 8800GT. Faster cards will show more dramatic effects in favor of AMD.
 



Finally, some actual proof of what you are talking about. There are a few benches there that show AMD winning by a few frames per second at higher resolutions.


(Mocking Keith\Kassler:) But how many pixels per inch can the human eye see? Because the human eye can only see so many pixels, that means Intel wins. (JUST A JOKE)
 


It isn't easy to find tests that show the difference in I/O performance: you need at least one AMD processor compared against Intel processors, and you also need tests that actually differ in how I/O-heavy they are.
 
AMD more responsive? I believe so

My current system is running a Q6600 at 3GHz and I am really pleased with the performance it produces when playing games. It smashes my old AMD 4400+ X2 to pieces in such tasks and should also be better at encoding, but I have yet to test that. Overall I consider it an improvement over my old AMD 4400+ X2 in application tasks.
However, for everyday computing I actually believe my old AMD system was more responsive, probably because AMD is better by design. I know it sounds ridiculous, but AMD seems to stay more consistent when running multiple applications. Benchmarks don't recreate such situations, only set applications, so people (maybe wrongly) think Intel is always going to be the best solution for their needs.

I still use my AMD as a second system, and while I wouldn't go back to it as my main one (I definitely think my new system is better for my needs), I just wanted to share my thoughts. Don't dismiss AMD just because it loses in 3DMark.
 
Speedbird,

Out of curiosity, did those machines both have the same OS installed, same amount of RAM, etc.? What are the full specs of both systems?

My home AMD S939 Opteron is more responsive than my Intel E6600 system at work. However, I attribute it to my home system having 3 gigs of RAM, performance HDs in RAID 0, and very few programs launching at startup, whereas my work system is the opposite.

I'm just curious of the other variables.

I think it was Speedbird that once recommended creating some kind of "user experience" or "responsiveness" benchmark, if I remember correctly.
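A crude version of that idea might look like this sketch (entirely hypothetical on my part, not an existing tool): load every core with busy work, then measure how late a supposedly 10 ms sleep actually wakes up. The bigger the overshoot, the less "responsive" the machine would feel.

```cpp
// Crude "responsiveness" probe: wake-up latency under full CPU load.
// Compile with -pthread. Numbers are only meaningful comparatively.
#include <atomic>
#include <chrono>
#include <cstdio>
#include <thread>
#include <vector>

int main() {
    std::atomic<bool> run(true);

    // Background load: one busy-spinning thread per hardware core.
    unsigned cores = std::thread::hardware_concurrency();
    if (cores == 0) cores = 2; // fallback if the count is unknown
    std::vector<std::thread> load;
    for (unsigned i = 0; i < cores; ++i)
        load.emplace_back([&] { while (run) { /* spin */ } });

    // Foreground "user": sample how late a 10 ms sleep wakes up.
    double worst_ms = 0;
    for (int i = 0; i < 100; ++i) {
        auto t0 = std::chrono::steady_clock::now();
        std::this_thread::sleep_for(std::chrono::milliseconds(10));
        auto t1 = std::chrono::steady_clock::now();
        double ms = std::chrono::duration<double, std::milli>(t1 - t0).count();
        if (ms - 10.0 > worst_ms) worst_ms = ms - 10.0;
    }

    run = false;
    for (auto& th : load) th.join();
    std::printf("worst wake-up overshoot under load: %.2f ms\n", worst_ms);
    return 0;
}
```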
 
It can never be a fair test on two different systems, I suppose that's true, but my Intel system does have faster components in its favor, like a SATA II HD and 4GB of DDR2-800 (the AMD has 2GB of DDR400). The two systems both run XP SP3, although I have used Vista on both of them. As for background applications, they're both pretty much the same because I use the same security applications.