Hexus.net benchmarks Nehalem

I've said, we're just starting to see this. It'll show much more by next year, when newer games and newer cards arrive. If A is growing at 33% a year, and B is growing at 20%, it's only a matter of time before A catches up and surpasses B. We are at that threshold now.
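
A quick back-of-the-envelope sketch of that growth argument, assuming hypothetical starting values (only the 33% and 20% rates come from the post above):

```python
# Rough sketch: how long until A (growing 33%/yr) overtakes B (growing 20%/yr)?
# The starting values below are hypothetical; only the growth rates come from the post.
a, b = 50.0, 100.0      # assume A starts at half of B's performance
years = 0
while a < b:
    a *= 1.33           # A grows 33% per year
    b *= 1.20           # B grows 20% per year
    years += 1
print(f"A overtakes B after ~{years} years")  # ~7 years with these starting values
```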
 
I have NEVER had my CPU utilization go above 60%, and it only gets that high when it's loading something. Not even Crysis loads my CPU that much.

We could possibly say that it's not what you think, jaydee. It could be that the game engines don't take advantage of the CPU power that's given to them (this is true in 90% of cases) and the minimum FPS only goes higher due to a faster clock speed.

I don't think it's CPUs bottlenecking; truthfully, I think it's GPUs at high res and the games themselves.
 
It is true game engines don't stress a CPU like SuperPi does; they're not made that way. Whether a CPU maxes out isn't a true indicator of CPU limitations in games, or in many other apps. That'd be like saying the CPU isn't limiting WinRAR just because its utilization isn't maxed out.
 


I remember the "at least 2.6 GHz" article. However, that THG article doesn't seem to agree with the links you posted yourself, and they used cards older than a GTX 280, from what I remember. The differences between 4.0 GHz and 2.0 GHz with a GTX 280 looked pretty much pathetic to me, except for Crysis, the most well-coded game.
 
jaydeejohn does have a point in that GPU performance is increasing at a far greater rate than CPU performance. What was once more than sufficient for an 8800GTX (say, a 3GHz C2D) may no longer be enough for a GTX280 or faster GPU.

There really is no way around this though; it's been an obvious trend for years now. It hasn't really been a problem in the past few years, as most games were mainly shader intensive, which put the onus of performance on the GPU. Now, instead of just pretty pixels, advanced physics effects and smarter AI are becoming important as well, and thus the CPU is starting to play a bigger role in performance than in the past.
 
If the GPU requires the CPU to work faster, then at some point there's a wall. Simple as that. The GPUs don't do it all themselves, as we all know. And it's somewhat cavalier to think that a CPU will automatically keep up, at least on the CPU side of things. I'm not slamming anyone's opinions or anything like that, so please don't take it that way; it's just hard for some people to get their head around this. This IS new.
 


GTX280 SLI setups (or even 4870X2s) are mainly for hardcore gamers who run at 2560 x 1600 though. Of course at lower resolutions it'll be CPU bottlenecked, but for its intended market it's really not much of a problem.

2560 x 1600 = ~4.1 million pixels

1680 x 1050 = ~1.76 million pixels

So a GTX280 SLI / 4870X2 at its intended resolution has a workload roughly 2.3x that of a standard 'single GPU' res of 1680 x 1050. If a 3.6GHz C2D is sufficient for a single GTX280 @ 1680 x 1050, it shouldn't be bottlenecking a GTX280 SLI setup at 2560 x 1600 either.
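
A quick sanity check of that pixel math (both resolutions are quoted above; nothing else is assumed):

```python
# Pixel-count comparison for the two resolutions quoted above.
high_res   = 2560 * 1600   # 4,096,000 pixels (~4.1 million)
mainstream = 1680 * 1050   # 1,764,000 pixels (~1.76 million)
print(high_res / mainstream)  # ~2.32, i.e. roughly 2.3x the per-frame pixel workload
```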
 
Some games are best run at 19x12 though, even for these cards, whether it's the 4870X2 or even the GTX280 SLI, and I point out, the res doesn't go any higher. Then what? My point here helps clarify our current situation as well. At 25x16, we see no CPU slowdown, or only a rare one. That wasn't the case a few months ago, let alone 6 months from now. In other words, a short while ago 16x10 was the res where you'd be guaranteed of no CPU bottleneck; that's no longer the case when we see them at 19x12. So, as we've asked the GPU to do much greater work, it's complied, and now it's at its final resolution before we start to see CPU bottlenecks there as well.
 



I believe you may be confusing a limitation of the software with a limitation of the hardware. If the game isn't coded to provide/handle resolutions greater than 19x12, swapping out cards isn't going to make it so.
 


What game, Crysis maybe?! 😛 OTOH I can't think of any other game that won't run at 2560 x 1600 on a 4870X2, let alone a GTX280 SLI.

There was really never a rule set in stone that stated 1600 x 1200 (or whatever) was guaranteed to be GPU limited; it depends entirely on the game engine anyway. Of course, due to the recent ~2x increase in GPU performance (thanks to the GTX280 and HD4870), traditional 'GPU bound' resolutions are starting to show CPU scaling as well, and I do understand your point on that. However, the same thing happened with the Radeon 9700, the 8800GTX, and basically every time a new GPU arrived that blew existing offerings away, so it shouldn't really come as a surprise to anyone by now.
 


It is not new. At least in gaming. In this case I must support JDJ on this. GPUs have been evolving (for some time now) at a faster pace than CPUs. That is noticeable when you upgrade a GPU, and in how long it lasts. Let me explain myself so I'm not misunderstood.

In gaming, with every GPU upgrade the performance jumps considerably. GPUs usually have a much shorter life cycle, about 6 months (the G80 was the exception), always with gains of 20% or above. The latest generation, the K10, was a 10% gain over the K8 architecture. Nehalem benchies are already out (some of them at least) and they show a 10% to 34% improvement... but not in gaming. Don't talk about theoretical FLOPS; look at benchmark results. For a long time now, every chip maker has known that what looks good on paper doesn't mean it will work well in practice.

NetBurst.
The VSA100 chip.
The Kyro II chip.

And I guess if you look further, you will find more.

With the same specs, the jump between the 3870 and the 4870 is really big. A bit bigger than 34%, and the drivers aren't mature yet. So we can expect even bigger gains.

The GPU world is advancing much faster than the CPU world. The CPU world is talking about dual to octo core now, and the GPU world broke that barrier a long time ago.

Like when we Europeans partied over Y2K, the Chinese were already in the year 5000-and-something...
 


I did say a few months ago I would be shocked to see more than a 10% gain. Maybe with the memory controller and DDR3 RAM it could produce better results, but CPUs are reaching their limits. That's why multicore programming is starting to become so important.
 
Then not only Intel but M$ needs to get off their arses and contribute money or more resources to this. An app CAN be enjoyable (when done) even if it takes a while. But gaming is now; it has to happen now, or it's a poor experience. I just see this, as well as the case of a few apps, as being very important to seeing some kind of growth through the CPU, whether it's through SW, going multithreaded, or the CPUs themselves.
 
It is undeniable: the workload for the GPU has increased much more, and at a faster rate, than for CPUs in gaming. What we saw 2 years ago, on the 12x10 screens most people owned then, was no CPU bottleneck or slowdown, or only a rare one. Can't even say that today. Today, we EXPECT there to be CPU bottlenecks at those resolutions, and to an extent even higher, at 16x10. Like I said, we only go up to 25x16; so far that's it.
 


It's all relative though: 3 years ago 1600 x 1200 was considered high res, now it's mainstream. Nowadays 2560 x 1600 is considered high res; maybe in a few years we'll be talking about 3200 x 2000?

PS. Who here remembers people going nuts over a 3D accelerated Quake running at 30fps @ 640 x 480 on a Voodoo1? Look how far PC gaming has come in a little more than a decade!
 
I have a Pentium w/MMX and a Cyrix (remember them?) 486 DX.

Game engines have a lot to do with how well they take advantage of a PC's hardware.

Let's look at my favorite example: Source. Source is a well-optimized and well-built game engine. At 1680x1050 with everything maxed out, including AA/AF and HDR enabled, on a Q6600 and an HD2900Pro 1GB, I get about 150FPS in both the CS:S stress test and the Lost Coast stress test (tried this on my 40" TV). A guy here on the forums (3Ball, I think) got an HD4870, and with a C2D E6600 @ 3.4GHz he gets 300FPS (that's the max Source will show).

Now, the CPU I have is at 3GHz and his is at 3.4GHz. Mine is a quad, his is a dual. He has a card a generation ahead, and IF the CPU were truly limiting it, he should have gotten an FPS in the test near mine, maybe a bit higher. But he easily got 2x my FPS using a 2-year-old CPU.

So in the end it truly depends on the game maker and engine. Look at Crysis. It's poorly optimized, had a memory leak upon release, and doesn't scale well at all. But Source is 4 years old (probably older with development time) and scales very well, from older DX7 GPUs and Pentium IIIs all the way to the newest hardware. Well, except The Orange Box, or Source 2007, which scales from DX8 and low-end Pentium 4s up to the highest end available.
 


Word. As much as Valve as a company annoys me, they have built a fantastic engine that can do some impressive effects. The only real difference is that ID3 and Unreal3 support a more "advanced" shader model, but finding a genuine visual difference is somewhat moot. Source, in my honest opinion, is more developer "friendly" than either id's or Epic's engines; it's sad their developer support is lackluster in comparison to Carmack's on-site help and Sweeney's interesting open-source consortium.

But that's not to say the other engines don't benefit from faster CPUs and memory subsystems either; it's just that I saw Jimmy's post and figured I would word it up :lol:.

Oh ya Crysis sucks *lights DVD on fire*

Word, Playa.
 


I think piesquared is talking about the Cinebench score of ~45000 reported at IDF.

http://www.xtremesystems.org/forums/showthread.php?t=199048

Basically, the new Cinema 4D 11 engine renders some ~2.5x faster than the previous engine used in Cinebench 10. Apparently you can modify CB10 to use the C4D 11 engine and get much higher scores.

The thing is, apart from a one-liner quoting the ~45k score from sites reporting IDF 'live', such as AnandTech, Tech Report, etc., we really don't have any additional info on this matter at this point. But it'll make for an interesting follow-up article, I'm sure, since there was almost universal surprise at the extremely high score attained.