Hexus.net benchmarks Nehalem

Unless these numbers change, I agree, the "review" is oversensationalized. It looks like a bump over a Penryn so far. Which is nice, but all I've heard for a while now is Nehalem. I'm still hoping these numbers ARE off, because from a gamer's perspective, we NEED it to
 
The same analysis Hexus did, but this time from Anand: much better and more realistic.

Now that IDF has started, the first benchmarks of Nehalem will probably pop up. It is without a doubt an impressive architecture that gets a much better platform to run on, but this CPU is not about giving you better frames per second in your favorite game than the Penryn family. Let me make that clearer: even when the GPU is not the bottleneck, it is likely that most games will not run significantly faster than on Penryn. We, the people behind it.anandtech.com, will probably have the most fun with it, more than your favorite review crew at Anandtech.com 🙂. And no, I have not seen any tests before typing this. Nehalem is about improving HPC, database, and virtualization performance, much less about gaming performance. Maybe this will change once games get some heavy physics threads, but not right away.

Why? Most games are about fast caches and strong integer performance. After all, most of the floating-point action is already happening on the GPU. All Core 2 CPUs were a huge step forward in integer performance (not least because of memory disambiguation) compared to the CPUs of that time (P4 and K8). Nehalem is only a small step forward in integer performance, and the gains from slightly increased integer performance are mostly negated by the new cache system. In a previous post I told you that most games really like the huge L2 of the Core family. With Nehalem they are getting a 32 KB L1 with a 4-cycle latency, next a very small (compared to the older Intel CPUs) 256 KB L2 cache with a 12-cycle latency, and after that a pretty slow 40-cycle 8 MB L3. On Penryn, they used to get a 3-cycle L1 and a 14-cycle 6144 KB L2. That is a 24 times larger L2 than Nehalem's!
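To make the cache argument a bit more concrete, here is a rough back-of-the-envelope model (my own illustration, not from the original post). Only the cache sizes and load latencies come from the paragraph above; the hit rates and memory latencies are invented assumptions for the sake of the example.

```python
# Rough average load latency sketch using the cache latencies quoted above.
# The quoted latencies are total load-to-use latencies from the core, so a hit
# at a given level costs just that level's latency. The hit rates and memory
# latencies are illustrative assumptions, NOT measurements.

def avg_load_latency(levels):
    """levels: list of (hit_rate, latency_cycles); the last level catches everything."""
    total, reach = 0.0, 1.0
    for hit_rate, latency in levels:
        total += reach * hit_rate * latency
        reach *= (1.0 - hit_rate)
    return total

# Penryn-style hierarchy: 3-cycle L1, 14-cycle 6 MB L2, memory behind the FSB (assumed ~200 cycles)
penryn = avg_load_latency([(0.95, 3), (0.98, 14), (1.0, 200)])

# Nehalem-style hierarchy: 4-cycle L1, 12-cycle 256 KB L2 (lower hit rate assumed,
# since it is 24x smaller), 40-cycle 8 MB L3, memory behind the IMC (assumed ~150 cycles)
nehalem = avg_load_latency([(0.95, 4), (0.80, 12), (0.98, 40), (1.0, 150)])

print(f"Penryn  average load latency ~ {penryn:.2f} cycles")
print(f"Nehalem average load latency ~ {nehalem:.2f} cycles")
```

With those assumed hit rates the small L2 plus the 40-cycle L3 more than cancels the faster memory, which is the point being made about cache-friendly game code.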

The percentage of L2 cache misses in most games running on a Penryn CPU is extremely low. Now that is going to change. The integrated memory controller of Nehalem can't help much, as the fact remains that the L3 is slow and the L2 is small.

But that doesn't mean Intel made a bad choice. Intel made a superbly good choice by improving performance where Core (Merom/Penryn) was only mediocre to good. Penryn was already a magnificent gaming CPU, but it could not beat the AMD competition in HPC benchmarks, and AMD put up good resistance in the database performance benchmarks. That is all going to change.

Most database code cannot use the wide architecture of Penryn very well. The number of instructions per cycle drops below 0.5, and waiting for memory is the most probable cause. SMT, or Hyper-Threading, can do wonders here: while one thread waits on a memory stall, the other thread continues working, and vice versa.
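A toy throughput model (again my own sketch, not from the post) shows why overlapping memory stalls matters so much when IPC is that low; the stall fractions below are assumptions for illustration.

```python
# Toy model of 2-way SMT on memory-bound code: when one thread stalls on
# memory, the second hardware thread can use the otherwise-idle issue slots.
# The stall fractions are assumed numbers, not measurements.

def smt_speedup(stall_fraction):
    """Ideal 2-way SMT throughput gain when each thread stalls this fraction of the time."""
    busy = 1.0 - stall_fraction             # fraction of cycles one thread keeps the core busy
    two_thread_busy = min(1.0, 2.0 * busy)  # a second thread fills idle cycles, capped at 100%
    return two_thread_busy / busy

# Database-like thread stalled 60% of the time (assumed): a second thread can double throughput
print(f"{smt_speedup(0.60):.2f}x")   # 2.00x
# Compute-bound thread stalled only 10% of the time: barely any idle cycles to reclaim
print(f"{smt_speedup(0.10):.2f}x")   # 1.11x
```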

Secondly, quad- (and eight-) socket performance is going to improve a lot, as four Nehalems only have to keep four L3 caches in sync, while a similar Tigerton system has to keep eight L2 caches in sync. That is why the cache system is perfect for server performance, but a little less interesting for gaming performance.
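As a quick illustration of that argument (my own sketch): the number of cache pairs that have to be kept consistent grows quadratically with the number of caches, so fewer, larger caches per socket means far less coherence traffic.

```python
# Number of distinct cache pairs that may need to exchange coherence traffic.

def coherence_pairs(n_caches):
    return n_caches * (n_caches - 1) // 2

# Quad-socket Nehalem: one shared L3 per socket -> 4 caches to keep in sync
print(coherence_pairs(4))  # 6 pairs
# Quad-socket Tigerton: two L2 caches per package -> 8 caches to keep in sync
print(coherence_pairs(8))  # 28 pairs
```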

The massive bandwidth that the integrated tri-channel memory controller delivers will do wonders for HPC code. And the new TLB architecture with EPT will make Nehalem shine compared to its older Core brothers.
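For a sense of scale, a back-of-the-envelope peak-bandwidth comparison (my numbers, not the post's; DDR3-1333 memory and a 1333 MT/s front-side bus are assumptions, not platform specifications):

```python
# Peak bandwidth = transfer rate x bytes per transfer x number of channels.

def peak_bw_gb_s(transfers_per_sec, bytes_per_transfer, channels):
    return transfers_per_sec * bytes_per_transfer * channels / 1e9

# Three channels of DDR3-1333 (assumed), 8 bytes per transfer -> ~32 GB/s
print(f"Nehalem tri-channel IMC: ~{peak_bw_gb_s(1333e6, 8, 3):.1f} GB/s")
# One 1333 MT/s front-side bus (assumed), 8 bytes per transfer -> ~10.7 GB/s
print(f"Penryn-era FSB:          ~{peak_bw_gb_s(1333e6, 8, 1):.1f} GB/s")
```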

No, Nehalem was made to please the IT and HPC people. Bring it to it.anandtech.com, it is not that interesting for you gamers ;-)


Link: http://www.anandtech.com/weblog/showpost.aspx?i=480

Now yes, no dummy bars, no stupid comparisons. Straightforward and clean. That Hexus review seemed to be made by Jason Mick. Really.
 
Yeah, which all means zip for my benchmarks. It's like the Atom: great for number crunching, but zilch for graphics usage. I'm sorry, but Intel just doesn't get it. TY for the thread and quote, it helps, or really it doesn't. Oh well, there's always the next release, and we'll see gfx cards at least 50% faster by then, and that's if Intel hurries it up. And for those who think otherwise, comments negative towards AMD at this point won't help either. This isn't an Intel vs AMD thing. All this does is leave the door open for gfx cards to have plenty of headroom to do PhysX in games, because as they improve, they'll have to use that ability somewhere. And Intel said the GPU was dead. Man, they really don't get it.
 


Clock-for-clock gains, the Intel QuickPath Interconnect (QPI), an integrated memory controller (IMC), an updated architecture that can scale up to 8 cores, the return of Hyper-Threading, hype, and putting up with Intel fanboys.

With all that we "were" expecting the best thing since sliced bread. And it seems to be called CUDA.
 


I would expect the machine running 8 instances to have a 15-30% advantage anyway...



I don't see much for IPC there.
 
I'm just let down is all. Servers are great, but as others have posted before me, who cares? As far as video rendering goes, who knows? Maybe it'll show better than the quads out now, but they're already being challenged by CUDA. Multithreading? Only going to work in so many cases to begin with, and then the software has to be adapted to it. No, for me it's all about IPC and higher clocks. What's the max stock clock to be at intro? Anyone know?
 


I care...


But I'm not convinced how useful hyperthreading is going to be for the kind of HPC work I do.



I guess the proof will be in the pudding... I'll just have to bide my time.



But hey, it may go some way to suggesting why Intel are pricing the thing so low. 😉
 
True that. Also, it should give AMD impetus to get off their arses and do a better desktop part. AMD made their move with Barcelona; I guess we're seeing Intel do theirs now. One of them is taking us DT users for granted. Just like the GTX 280: it was good, a nice increase in performance, costly, even now after price reductions. BUT it wasn't what people were expecting in performance; it did "other" things, like PhysX and CUDA, and Nvidia was more in tune with Intel and Larrabee, and they never saw ATI coming. Someone needs to pick up this gauntlet and give us DT users some performance.
 


For the mainstream market, it might come low priced. For your market, it will be expensive as hell. Ever looked at the prices of Xeon MP? They are nice!!!

Hyper-Threading can seriously bump performance for some types of workloads. SQL/Oracle are a fine example: small threads, but loads of them. Having worked with an old Xeon workstation (2x 2.8 GHz with HT), I can tell you for once I'm happy with the return of a technology. HT didn't give a massive bump in performance, but it helped in case of hard locks and increased stability (in performance and uptime). They were pretty fast machines for their time.

And improve data reads. Well, a picture is worth a thousand words.

[Image: hyperthreading_image1.gif]



 



Funny that. I remember back in the days of X2 dominance that every reviewer would OC the heck out of Intel chips to get the scores higher, but with Phenom (now WITH SB750) no one OCs and shows how it compares to Intel. Am I just paranoid or are all reviewers full of it?

It's like they're not planning on showing what Phenom does at 3GHz+ until AMD actually releases that clock speed, but it doesn't help consumers to really see what they would be getting. All GPU tests are done with insanely expensive CPUs that no one can afford so none of us can know how the GPU will perform if WE upgrade.

Ever since Penryn, no sites are doing a full range of tests to show how the CPU scales with the GPU or how the GPU scales with the CPU. That's why I come to these sites: to see how much I need to spend to game at 1680x1050. If you're not showing that, what are you doing?
 



I think we'd really need to see Phenom at 2.93GHz to really gauge it. Especially since the 2.66GHz i7 will be closest in price to the 9950. What's the use in comparing chips that are separated by almost $300? The 2.93GHz part was supposed to be $500+.
 


I'm with you on this one, BM. I *really* don't know what the point is of Anandtech reviewing the new SB750, showing they can crank up the HT/NB speeds like hell and go past 3.0 GHz with the CPU, and not posting a *SINGLE* damn benchmark. No, seriously.

Guess I'll try some underground forums. Or does anybody here have the new SB750 with a 3.0 GHz Phenom?
 



It's crazy. Some people on the XtremeSystems forum have some OC results. At 3.2GHz with a 2.4GHz L3, the Phenom is very close to the 9775 @ 3.2GHz. I'm not interested in whether it wins or loses; I just want as much info as I can get to determine, for example, whether I can get away with a 9150e with a 4850 or if I should go for the 9550 and up.

Even AMDZone isn't showing OC'd scores. WTH?
 


Well, I know that feeling of yours: it's disgusting. I'll be trying Google in a few minutes. But it's disappointing nonetheless. I just finished reading an article at NordicHardware which says Nvidia will be updating its 780a to support ACC, but how the hell am I supposed to know if it's worth it?

I'm torn between Intel's Q6600 and AMD's 9750, but I must say the 790GX platform looks amazing. THG is saying both 790FX and GX can support HyperTransport 3.1, so... It looks interesting.
 
Bottom line is this: Nehalem is a server-oriented CPU, and certainly not an upgrade for gamers or DT users. There is simply no benefit. So those who were intent on proclaiming Barcelona a failed design, well, have another look at Nehalem. Notice any similarities? If you were in the "Barcelona sucks" category, then you must be squarely in the "Nehalem sucks" category as well.
 



Ummm, that's blasphemy. How dare you compare i7 to K10. I just can't wait until people see how much i7 mobos will cost. X48 is inching up on $400. The Striker is above it. X58 is supposed to require more layers. Imagine buying a $569 CPU with a $500 mobo and getting similar scores in high-res gaming to a person who buys a 9950 with a 790GX mobo.

Does that really HELP customers? Or how about forcing people to use G45 rather than an ATi 3200-level chipset? Sorry for the digression, but I couldn't help it. Maybe AMD's lawyers read this site.
 


This does not make sense. Nehalem just addresses Core's weaknesses. How can the world's fastest CPU suck?
 



This just sounds like some people are hurting. Cry me a river.
 


Desktop users will be happy with a five-year-old Athlon. Gamers who stick with their Core 2 Duos because 'no game uses more than two cores' will be whining because 64-bit games running 8 threads are much faster on Nehalem...

The improvement in IPC is a bit disappointing considering the amount they changed the core, but I'll certainly be buying one once the cost of DDR3 RAM drops to a sensible level.
 


Remind me again of Intel's strategy. Is it Tick-Tock, or Tock-Tock? Because this definitely looks like a Tick-Tock-Tock. And judging from the very lukewarm reception this is getting around the web, I'd have to say many agree.
And for the record, I never said it completely sucks. However, it does look like only a minor improvement over Core. But here's the kicker: it's less of an improvement than Barcelona was over Rev F. So yeah, it looks like Intel is having trouble meeting their much-talked-about Tick-Tock strategy.
 
