Hexus.net benchmarks Nehalem



Well, if you are directing that at me, it means I'll remove my Agena from my AM2+ mobo, drop in a Deneb when it's released, and away I'll go. Yourself? I assume you are willing to spend a lot of money in exchange for zero gaming improvement?
 
Hmm, I wasn't aware they were falling behind. Last time I checked, even poor old K8 processors could keep up for gaming, so I fail to see why we need a 50% increase in IPC. When CPUs can't keep up, then I'll be interested in your argument, but until that happens, i7 is an improvement over Core 2, so stop whining.
 
Look, it doesn't matter what anyone says about Intel CPUs; AMD fanbois will start another argument. So I will be the first to say that Intel CPUs don't boot faster than AMD. (Not true, and it doesn't make any difference, but I'll be the first to say it.)
 
Who cares about Phenom? They aren't leading. They aren't the best. They don't hold all the influence. And more people care about the proper usage of a GPU than you're letting on. So I'm disappointed. And people's perceptions have changed. If you don't see it, then it's just my opinion, and it won't change a thing, as far as I can see. While GPUs increase in ability 100% in 20 months, and CPUs gain maybe 40% over that same time, and today we have games and graphics cards that still await a CPU able to run those games without a bottleneck, I still ask my question, and if I'm disappointed that no one knows, it'll be apparent next year.

Before, I said it'd happen, that CPUs would bottleneck games, and it's happened. This is a trend. Intel wants everything to be multithreaded. It simply isn't going to happen in everything, especially gaming. It'll cost too much. You think the CPU market is flaky? Here today, gone tomorrow is more the mantra in game dev. If the trend doesn't change, we're going to see a slowdown in actual game dev, again because of multithreading. It is important that Intel puts out a faster CPU for gamers. Trust me. It's only going to get worse.
 


When did Intel ever state this was a "gaming" chip? Regardless of that, the lower memory latency will help on id and Epic engines; both are sensitive to memory latency.

The fact of the matter is this is a refresh to address the Core uArch's lack of scalability on 2+ sockets. Fact is, Core uArch chips tend to be top dog in single-socket machines, but when it comes to HPC-like environments the poor little fella is memory starved. Intel has solved that with this revision of the Core uArch. Why is it so hard for some of you guys to get that through your heads? Stop, shut up, review the current solution, review older solutions, review competitors' solutions, then come to a conclusion. It's pretty easy, based on the actual specifications of the processor, to see what it's targeting; if they wanted more "gaming" performance there would have been changes to the integer units again.

All in all, if the numbers hold even at 15% overall in single-socket configurations, that's a pretty good deal for us. But where it's going to count is the multi-socket setups, and that is where we are going to see these 40% claims; not code-specific, but simply raw throughput in those configurations.

Word, Playa.
 


Easy. I won't need to upgrade my CPU, even after you upgrade yours. I am not an early adopter of new technology. Again, what is your point? Where did I mention anything about upgrading or how you would upgrade? Why change your argument to the upgrade path, and not gaming results?

So, I simply ask again: if you think Nehalem is merely a server CPU, based on one blog (in which the author even admitted to not having tested or tried a Nehalem CPU) and the Hexus review, what does that make the Phenom 9950 BE, which it was compared to, and beat, in those same gaming benchmarks?
 


And which games would those be? I'm quite interested in this...

Oh, and in case you haven't noticed, Intel makes their money on servers, just like AMD. Gamers make up a very, VERY small part of their market. It's not who they cater to.
 


Not the slightest clue. If Intel would talk more about their future products that are, say, 4-7 years from being announced, we could have a really cool conversation about it, but sadly they don't. For **** and giggles, I would say the next revision after the 32nm refresh would be my guess, but that's me; I would gauge my guess off the product, and will have to assume FPU performance will be the defining "handicap" of this current revision. But like I said, that's just my thoughts on the matter.

Word, Playa.
 
Crysis, AoC, others. It used to be like this: I don't have the link, but Tom's did an article on it. At 2.2GHz a K8 would max out all gfx cards. No matter what you changed, the games wouldn't produce higher fps, and that was using an FX60. That changed a little when the 8800GTX came out. Then, having a C2D at 2.6 was adequate. Now, we have the 4870X2. Getting the actual highest framerates in a game is no longer solely determined by the GPU. You have to OC the CPU, and in some games it doesn't stop; the more you OC, the more fps you get. Next-gen GPU? And the next? You see where I'm going with this?
 
Like I said, I'm NOT downing Nehalem. For what it's meant for, it's great, as I said. It's just not the benchmark improvements I wanted to see, so I'm disappointed. Maybe we are all wrong, and it'll show better than what we've seen. Who knows. I hope you're getting my point, and not reading anti-anything into what I've said, because other than disappointment, it's good to see Nehalem coming out.
 
Here's an example where a CPU is OC'ed to 4.4GHz, and the reviewer is still talking CPU limitations: http://www.driverheaven.net/reviews.php?reviewid=609&pageid=7 As gaming matures, the demands on the CPU are also higher, for physics, AI, etc. Plus, the speed of the cards is growing at a faster rate than CPUs. It's apparent in this gen especially, and we will see more of it in the future, only worse. At these resolutions, and prices, people don't want less than 60fps, period. That's also why I'd said earlier that the Hexus review was crap: higher res doesn't mean no CPU bottleneck, as they implied.
 



Those are X48 boards, and I said CLOSE TO $400. You obviously didn't find Foxconn's board, which is about $360 or so. The point is that people are saying that X58 boards will be more expensive, so any Strikers, etc. will be close to $500. I'm not knocking them; I'm saying that a $235 Phenom and a $179 790FX will allow play up to 2560 with the right GPU.
 


Nice find. Interesting how no one commented on it. Oh well.



You.... look at the above quote. What's that show ya?



But that will be very GPU limited, especially @ 1920x1200+. And the funny thing is you always talk about Intel's highest-end mobo, but you never take into perspective their mainstream mobos, like say the P55 (just a guess). The P45 just came out a month ago and there are already mobos out cheaper than my P5K-E that are equivalent (well, better, because of PCIe 2.0 and such); a good example is the P5Q Pro.

Everyone jumped on the "rumor" that Nehalem will not OC, and it changed from "the Lynnfield chips won't" to "only the X58 mobos will." The problem is that Intel just went and showcased a Nehalem chip self-OCing for single-threaded apps. It shuts off the unneeded cores and OCs the chip as high as it can while staying within the thermal limits, to help complete the task as fast as possible.
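Just to make the idea concrete: that "shut off idle cores, clock the rest up within the thermal limit" behavior can be sketched roughly like this. This is NOT Intel's actual algorithm; every number below (base clock, bin size, TDP, the power model) is a made-up assumption for illustration only.

```python
# Hypothetical sketch of the self-OC / turbo idea described above.
# All constants and the power model are assumptions, not Intel specs.

BASE_CLOCK_MHZ = 2933   # assumed base clock
BIN_STEP_MHZ = 133      # assume one "speed bin" = one bus-clock step
TDP_WATTS = 130         # assumed package power budget

def estimated_power(active_cores, clock_mhz, watts_per_core_ghz=11.0):
    """Crude linear power model (pure assumption, for illustration)."""
    return active_cores * (clock_mhz / 1000.0) * watts_per_core_ghz

def turbo_clock(active_cores, max_bins=2):
    """Highest clock reachable, stepping up bin by bin, without
    the estimated package power exceeding the thermal budget."""
    clock = BASE_CLOCK_MHZ
    for _ in range(max_bins):
        candidate = clock + BIN_STEP_MHZ
        if estimated_power(active_cores, candidate) <= TDP_WATTS:
            clock = candidate
        else:
            break
    return clock
```

With these made-up numbers, a single-threaded app that leaves three cores parked gets both extra bins, while a fully loaded quad stays at base clock; that is the gist of what the demo reportedly showed.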

I would love to buy a Core i7 rig to mess with, but I am waiting to see what the next step (Westmere, or whatever AMD has planned) brings. Personally I feel 32nm will improve the heat a lot, and that's what I am looking forward to.
 

I wasn't trying to find every board, just a relatively random sampling of them. I'm sure there are several higher than the average price for the group I found, and several lower. The point stands that X58 will likely be much cheaper than you were trying to imply. Also, people are saying X58 will be more expensive, but is there any real reason to think so? If anything, the chipset itself will be cheaper, due to the memory controller moving onto the CPU. The 6 memory slots add a bit of complexity though. I would be surprised if an average "lower end" X58 board (I put it in quotes because no X58 board could really be considered low end) came in much above $250, and even the top boards shouldn't exceed the $400 or so price point that the top-end ones are at right now.
 
Just touching on the topic of CPU scaling with the faster GPUs:
http://www.pcgameshardware.com/aid,647744/Reviews/GT200_Review_CPU_scaling_of_the_Geforce_GTX_280/

It appears a C2D @ 3.6GHz is the 'sweet spot' as far as the GTX 280 is concerned, though minimum framerates do keep rising in Crysis even at 4GHz. Average framerates stay practically the same from 3GHz to 4GHz, though, so it must be a short scene that only lasts a few seconds, like a massive explosion or something.

Now, I'm not sure exactly what speed Nehalem will need to run to match a C2D @ 3.6GHz or 4GHz in games, though I am pleasantly surprised at the Lost Planet benchmarks, but it just goes to show you really need a massively multithreaded game to truly take advantage of Nehalem.
 



I didn't see the one at Anand, but it's here at Tom's: http://www.tomshardware.com/news/Turbo-Mode-Intel,6193.html

Leaked information also indicates that production CPUs will self overclock by up to two speed bins; for example, jumping from 3 GHz to 3.2 GHz or even 3.4 GHz.

With this kind of headroom, it will be interesting to see how far enthusiasts will be able to push Core i7 processors. Even Intel indicated to us in June that Core i7 silicon is extremely healthy. Our own tests revealed that Core i7 processors will have a considerable amount of headroom in terms of clock speeds.
 
Well, currently, with the overall improvements in Nehalem, even barring multithreading, 3.4 seems fairly close to optimal, though that's with the current GTX 280, a non-OC'ed card at 65nm. By December the 55nm version comes out, and this isn't even mentioning the 4870X2, which in certain games no CPU will max out, even Nehalem. I've seen the PCGH article before, and was just now reading it when I came to the CPU section. Glad you posted it; I was about to. I'm still under the impression that at some point next year, with many demanding games coming out by then, and much more powerful GPUs out, CPUs in some scenarios will actually, truly be a real bottleneck.
 
Here's the most telling argument about Hexus' comments on res and CPUs vs GPUs, and the theory that at higher res the GPU is the bottleneck: http://www.pcgameshardware.com/?menu=browser&article_id=647744&image_id=839048 Notice what happens when the clocks on these CPUs are cranked. Surely it's a GPU bottleneck? The average framerates are stuck at 17, meaning no further improvement; thus a GPU bottleneck. But look further, at the MINIMUM fps. As the CPU is OC'ed, we see a never-ending (up to the max the CPU was OC'ed) scaling of better framerates. To truly reach the max, the minimum would be barely below the average, but such a CPU doesn't exist. This is, as I said, just the beginning of this. We need to understand a little more about it, but it's here. Four months ago, I wasn't even considered to be for real with my comments on this. Everyone said, what games? Well, there are some now, and with better cards, it's showing up. As time goes along, will the CPU keep up? Will there be more Crysis-type games out, though this time, everyone will be waiting for a better CPU?
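The flat-average/rising-minimum pattern argued for above can be checked mechanically from benchmark numbers. A rough sketch; the fps samples below are made up to have the same shape as the Crysis example (average pinned near 17, minimums scaling with clock), not taken from the PCGH article:

```python
def classify_bottleneck(samples, tolerance=0.05):
    """samples: list of (cpu_clock_ghz, avg_fps, min_fps), sorted by clock.

    If average fps stays flat while minimum fps keeps rising with CPU
    clock, the GPU caps the average but the CPU still limits the
    worst-case scenes -- the mixed case described above.
    """
    first, last = samples[0], samples[-1]
    avg_flat = abs(last[1] - first[1]) / first[1] <= tolerance
    min_rising = last[2] > first[2] * (1 + tolerance)
    if avg_flat and min_rising:
        return "GPU-capped average, CPU-limited minimums"
    if min_rising:
        return "CPU-limited"
    return "GPU-limited"

# Made-up numbers shaped like the Crysis case discussed above:
data = [(2.0, 17.0, 9.0), (3.0, 17.1, 12.0), (4.0, 17.2, 14.5)]
```

The point of the heuristic is exactly the argument in the post: average fps alone hides the bottleneck; only the minimums reveal that the CPU is still holding back the worst moments.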
 


It could well be, but the problem is that the age of massive clockspeed gains with each node is long gone. The Netburst era is now nothing more than a distant memory. Multicore is the future, but making games massively multithreaded is apparently very difficult (I'm no programmer, just going by what I've read from game developers). But we're slowly getting there; Lost Planet shows it's possible, so it's a start. From what I've read, Valve seems to be embracing multithreading as well, and we know the UT3 engine can take advantage of quads, and a lot of upcoming games will be based off that engine.

I guess the onus is on programmers finding a way to harness all the extra cores that go mainly unused today, because we sure ain't gonna get any massive IPC or clockspeed gains in the next 2 years until the next 'tock' or Nehalem replacement.
 
It may open the door for many GPU apps as well, or PhysX-on-GPU scenarios. It's not just hard for game devs; it's mainly the cost. Question: will the auto-OC in Nehalem be able to be overridden, so there's still the ability to manually OC?
 


Honestly, I don't think we can assume games are being CPU bottlenecked, jaydee. Many games don't even show a big performance increase going from an X2 5600+ to a C2Q QX9770 with some new GPUs. While you may argue that's because "development has stopped," I would say it's because it doesn't matter that much.

Take a look at this: http://www.techreport.com/articles.x/14573/4

Heck, even Supreme Commander doesn't show much love for highly-clocked quad-cores (and fanboys will say it's such a "Quad-Core optimized" game).

The difference between CPUs in games is usually pathetic unless you game at a very low resolution. Anyway, we saw the GTX 280 (which is said to be "CPU bottlenecked" just because it's a turkey) double the scores of the 9800 GTX. The 4870X2 also shows its power. How come they are being CPU bottlenecked? If so, why do we still see big improvements when changing the GPU but not the CPU? Wouldn't DAMMIT or Nvidia be shouting about this "bottleneck thing" by now if it truly existed?

Until there is clear evidence that our CPUs can't keep up with the new GPUs that statement will remain meaningless.
 


Sorry, but did you happen to take a look at the other screenshots?

Like this one: http://www.pcgameshardware.com/&menu=browser&mode=article&image_id=839047&article_id=647744&page=1

COD4 min. FPS with CPU at 4.0 GHz: 70
COD4 min. FPS with CPU at 2.0 GHz: 63

Another one: http://www.pcgameshardware.com/&menu=browser&mode=article&image_id=839049&article_id=647744&page=1

Prey min. FPS with CPU at 4.0 GHz: 135
Prey min. FPS with CPU at 2.0 GHz: 107

Sorry, but in my opinion that just shows how Crysis is a poorly coded game.
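One way to put numbers on the comparison in those screenshots is the percent gain in minimum fps from doubling the CPU clock; a quick calculation from the figures quoted above:

```python
def cpu_scaling_pct(fps_slow_cpu, fps_fast_cpu):
    """Percent gain in minimum fps when the CPU clock doubles (2.0 -> 4.0 GHz)."""
    return (fps_fast_cpu - fps_slow_cpu) / fps_slow_cpu * 100.0

cod4 = cpu_scaling_pct(63, 70)     # COD4 min fps, 2.0 GHz vs 4.0 GHz
prey = cpu_scaling_pct(107, 135)   # Prey min fps, 2.0 GHz vs 4.0 GHz
```

An ~11% min-fps gain for a 100% clock increase suggests COD4 stays mostly GPU-bound even at its worst moments, while Prey's ~26% shows more CPU sensitivity; both are far from the near-linear scaling a hard CPU bottleneck would produce.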
 
The basic flaw in your idea is that a game such as SupCom, which IS CPU dependent, isn't taking the GPU into account. I'm talking about certain games where the game engine lets the GPU fly AND, because of the game, stresses the CPU as well. When that combination arrives, that's when the CPUs start to show they're slow. And it doesn't matter what the res is; sure, a smaller res would let the GPU fly, but listen to this: old games, ones where you can get 300+ fps, sure, they're CPU bottlenecked, but who cares? With the complexity of today's games, the more powerful GPUs, and not much growth from CPUs, the workload is shifting more and more to the CPU, and not just because of physics.