Xeon X5570 "Nehalem" benchmarked.


It is clear that your agenda is to create FUD around the Nehalem features that completely and utterly destroy Opteron. I wonder how your tune will change if/when AMD implements SMT in Bulldozer.
 
I've been saying all along they're good chips. Here's something for everyone here: it's not ALL good. Nothing in this world is ALL good. I've been praising a few things Nehalem has to offer, but again, that's not what some people are concerned with, since it fits the ideal they've already set. But if I mention that it's not ALL good, they can't accept that? Isn't the problem shortsightedness, and not whether or not ALL is good?
Yes, Intel's IGPs are in last place. In a field of three they finish third: last, slowest, least performing. Get over it already.
 


My argument is that, while Keith may be right that SMT and Turbo may indeed hinder output (more likely on legacy programs), datacenters are not likely to turn them off due to reliability concerns.
 


Now here is the interesting part: My tune won't change. (But my reply will refute your little fanboy rant.)

If AMD released SMT today on current Opterons with no other changes to the architecture and "guaranteed" that it worked, it would still be disabled in data centers for the next few years until the technology is proven and builds a history of actually working. If they add it to an entirely new architecture, it will be even longer until data centers would even consider buying the chip, much less enabling the new unproven features.

I guess that is the issue: some people seem to think that a guarantee from Intel that these features work is good enough to warrant putting them into production. That might be true for some machines in the desktop market, but it is NOT true for data centers. It takes most data centers six months to a year just to decide to buy a certain piece of hardware. They don't change these purchase decisions immediately upon the release of new, unproven hardware.

Pretending this process will be bypassed is a fantasy.



But they won't be enabled until the tech is proven to be reliable with the applications run by the data center. If that happens, then they might turn them on. Maybe. But then again, maybe not. Apparently many forum posters haven't dealt with some of the "retentive" upper-management admin types.
 
These hypothetical situations exist. Not everyone takes the "leave it to HP" approach. To deny this happens is folly, and the higher you go up the ladder, the more often it actually occurs.
Now, as for Nehalem: I'm sure it'll benefit many, as seen by some of the numbers. Will it live up to its hype, or more directly, do exactly as the benches show? Not likely, but it'll be close in some hypothetical scenarios, or reality for some users. That's a good thing. To deny the other situations exist is wrong, though. Like I said, it's not all good.
So, Nehalem is a good chip for servers, and down the road it may possibly be seen as a revolution. That doesn't mean what Keith or I have been saying doesn't fit, or isn't true. I think it's a healthier scenario to accept than assuming it's good in all situations, which, as both Keith and I have pointed out, it isn't.
So, given a few scenarios where Nehalem won't be accepted into the market with all bells and whistles blazing, should that affect our overall attitude toward this chip? No. But it doesn't change the fact that this is a likely scenario either. I'm in no one's camp here. As jed pointed out, yes, I wanted people to be more open-minded about the P2, but hearing that 4 GHz wasn't an average OC was not only true but also a disappointment. It doesn't change the fact that the P2 is still a decent chip, worth consideration in many purchasing scenarios, and so it is with Nehalem. I'm not biased, but I'm not in anyone's camp either, so be prepared to hear the bad with the good, because as I said, it's not all good.
 


Show me one post of yours saying something decent about the new chips released or the D0 stepping. Just one. You can't, because you haven't.
I'm not as concerned about Intel's IGPs as I am about your double standards.
The bottom line is Intel will never be good enough for you; if they did everything just right, you would still complain about something, like the parking spaces at corporate being too big for your Yugo.
 



Okay, then show everyone here where you and Keith said the same things about AMD's offerings.
 


Nope. The administrators will likely "retain" the factory settings, in which both SMT and Turbo mode are ON. Unless a setting is proven unreliable, no one is going to deviate from the factory settings.
 


Yet one shouldn't act on those hypothetical situations when none of them have been proven to exist. Sure, a meteor the size of Jupiter will someday hit Earth. It's a valid hypothetical situation. Should we act on it just because it's hypothetically possible?
 
I'll say it now: it'd be good for AMD to have SMT for servers if it applies today, or can be used today. It'll soon be a moot point anyway, as we see more CPUs on smaller processes, and SMT may just end up being a short-lived situation. That's my opinion, but it's possible too. SMT is no replacement for real cores, and soon we will have those real cores. That may be what AMD is banking on; who knows? As for Turbo, if it's shown to be reliable, I don't see the harm in it, but Cool'n'Quiet-type features don't always work as they should, and could be shown to be unreliable down the road, and not used, or not fully used. Time will tell. Nothing good or bad here, just reliability as the need arises.
I haven't said much on the D0 chips, as there's but the one preview, and if it allows OCing like it showed in the preview, very nice indeed. But I'm waiting on that. It shows lots of promise, just like the early "leaks" on the P2 did. Most were true, some weren't. Again, time will tell.
 
Intel doesn't need any propping up here, ya know? AMD, with what they had previously, certainly did, and for the most part, they came through. I've been on the P2 wagon for AMD's sake, and ultimately for all of our sakes, since competition isn't just a good thing, it's a needed thing.
It's easier to kick Intel, even if I sound like a broken record, but it doesn't change the facts either. My thread on IGPs bears this out. I've known, as we all have, that Nehalem is going to make a loud sound in the server market. It doesn't change anything Keith and I are saying, though. Nehalem will storm the server market, but it won't be from top to bottom, like suggested here. I'm sure we will all read about Nehalem's numbers, its acceptance, etc., but those numbers are a long way off.
Intel's built a nice foundation in Nehalem, and we can and will be reading about it for years to come, but it isn't for everyone, and certainly not right out of the box.
 
SMT isn't supposed to be a replacement for real cores, it's supposed to augment the performance of the real cores on highly threaded workloads. I doubt it will go away when the 8 core CPUs arrive - my bet would be that it is here to stay, at least for a fairly significant amount of time.
 
Yeah, but it may become useless as core count increases. It depends on how it all plays out. Removing it as we go smaller, or to be more precise, removing it could allow for more optimization and more cores, and the cores themselves could change that. Who knows? What I'm saying is, if SMT proves to cause limitations as we go smaller, it may mean dropping it for more cores.
 


Nope. In most production situations they will have tested the equipment on development/test systems for 6 to 24 months. That way they will know, when they actually put the machines into production, what settings they want. And it may be enabled or not depending on their testing results.

Nope. They OFTEN deviate from the factory settings; many server applications will dictate exactly how a system is to be set up. If by "no one" you actually mean people who don't have a clue, you could be correct.
 


Depends on what you mean by "factory settings". System administrators are not likely to adjust any options within the BIOS.
 


I'll have to look for a source, but IIRC Hyper-Threading adds less than 5% to the transistor budget of each core on i7. Basically, certain registers have to be duplicated and additional reservation-station and other capacity added, but in sum, for a small increase in size, throughput can be significantly increased where a thread only takes up 60% or so of the available clock cycles, which can happen when it is waiting around for some input or another thread's results. The idea being that when you have a powerful 4-issue execution engine sitting there, you might as well keep it busy :).

So for certain code, HT packs a lot of punch for its size and I think Intel will likely keep it around forever.
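The intuition above can be sketched numerically. This is just a toy model (not cycle-accurate, and the 60% busy fraction and the offset between threads are illustrative assumptions): each cycle the core issues work if at least one thread is ready, so a second thread can fill the stall cycles the first thread leaves idle.

```python
def throughput(busy_patterns):
    """Fraction of cycles the core issues work.

    Each pattern is a list of booleans: True means that thread has
    work ready that cycle. In this toy SMT model, the core issues
    whenever at least one thread is ready."""
    cycles = len(busy_patterns[0])
    issued = sum(1 for c in range(cycles) if any(p[c] for p in busy_patterns))
    return issued / cycles

# A thread busy 60% of cycles (stalls on 2 of every 5 cycles).
t = [i % 5 < 3 for i in range(1000)]
single = throughput([t])          # 0.6 -- 40% of the engine sits idle

# A second thread with the same pattern, offset so its busy cycles
# tend to land in the first thread's stall cycles.
t2 = [t[(i + 3) % 1000] for i in range(1000)]
smt = throughput([t, t2])         # 1.0 -- the stall slots get filled
```

In reality the two threads also contend for caches and issue ports, so real SMT gains are well below this idealized case, but the model shows why a mostly-idle 4-issue engine is cheap to keep busy.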
 
What I'm saying is, if it scales, yeah, you're right. I'm saying, if it doesn't scale as we go smaller, this could change. I don't have a crystal ball; it's a possibility, and I'm not demeaning its usage, just speculating as we all are. I don't know what particular part of the chip is being used for this, and some areas don't scale as well as others; if you have more info, I'd like to see it, though. Also, we may see other types of prohibitive factors (size, speed, power, etc.).
 


Have you ever worked as an admin on medium-level boxes?

They most certainly DO adjust bios parameters. And kernel parameters. (Although some incompetent admins will leave everything at default.)
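As a concrete example of the kind of check an admin might script before touching BIOS or kernel parameters: on Linux, `/proc/cpuinfo` reports `siblings` (logical threads per package) and `cpu cores` (physical cores per package), and comparing them reveals whether HT/SMT is currently enabled. This is a minimal sketch assuming that standard field layout; the sample text is fabricated for illustration.

```python
def smt_active(cpuinfo_text):
    """Return True if logical siblings exceed physical cores,
    i.e. Hyper-Threading/SMT is enabled, based on the 'siblings'
    and 'cpu cores' fields of /proc/cpuinfo."""
    fields = {}
    for line in cpuinfo_text.splitlines():
        if ":" in line:
            key, _, val = line.partition(":")
            # Fields repeat once per logical CPU; keep the first occurrence.
            fields.setdefault(key.strip(), val.strip())
    return int(fields.get("siblings", 1)) > int(fields.get("cpu cores", 1))

# Illustrative excerpt, e.g. a quad-core Nehalem with HT enabled:
sample = "siblings\t: 8\ncpu cores\t: 4\n"
smt_active(sample)  # True: 8 logical threads on 4 physical cores
```

On a live box you would read the real file with `open("/proc/cpuinfo").read()` instead of the sample string.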
 


Realworldtech has a pretty good overview of SMT on i7:

The last major change to the system architecture for Nehalem is the return of Simultaneous Multi-Threading (SMT), which was first discussed in the context of the EV8, but first appeared for the 130nm P4. While SMT is not strictly speaking a system level change, but a core change, the implications span all major aspects of Nehalem so it is best to mention it upfront. Additionally, SMT has implications for system architecture. Given two identical microprocessors, one with SMT and one without, an SMT-enabled CPU will sustain more outstanding misses to memory and use more bandwidth. As a result, Nehalem is very likely designed with the requirements of SMT in-mind. For instance, variants of Nehalem which do not use SMT (notebook and desktop processors most likely) may not really need the full three channels of memory.

One interesting issue is why the Core 2 didn’t use SMT. Certainly it was possible, as Nehalem shows. SMT increases performance in a very power efficient way, which is a huge win, and the software infrastructure was already there. There are two possible answers. First, Core 2 might not have had enough memory and inter-processor bandwidth to really take advantage of SMT for some workloads. In general, SMT substantially increases the amount of memory level parallelism (MLP) in a system, but that could be problematic when the system is already bottlenecked on memory bandwidth.

A much more plausible explanation is that while designing a SMT processor is relatively easy – the validation is extremely difficult. Supposedly Willamette, the 180nm P4, actually had all the necessary circuitry for SMT present, but it was disabled due to the difficulty of validating SMT until the tail end of the Northwood 130nm generation. More importantly, almost all of the experience with designing, validating and debugging SMT processors resides with Intel’s Hillsboro design team, rather than the group in Haifa. Thus a decision to avoid SMT for the Core 2 makes a lot of sense from a risk management perspective.

And from the conclusions page of that article:

It seems as though almost every part of Nehalem's pipeline has been tweaked, extended or somehow refined, except for the functional units. The bulk of the changes were made to the memory pipeline, to complement the changes in system architecture. However, the single biggest change and performance gain (made in the core) is simultaneous multi-threading which could improve server workloads anywhere in the range of 10-40%.

I'll have to keep looking for the <5% extra transistors link, as the above article didn't get into transistor counts...
 

The smaller in-order cores from Intel have SMT as well - Atom (2 threads/core), Larrabee (4 threads/core). What does that tell you?
 

I went to IDF last year and Ronak Singhal did say something like that; I will try locating the presentation PDF. The extra transistors are required for duplicating certain resources like the ISA registers, and extending certain resources (reservation station, load/store buffers, ...) that are partitioned out to the two threads, as well as logic throughout the pipeline to track threads.

There is some information on SMT (about 4:40 minutes in) on this YouTube video.

Update: Ronak's Nehalem presentation can be downloaded from here. See slides 29-31. The execution units are SMT-unaware, which makes SMT much easier to implement and validate.
 


Thanks, but your presentation link is userID & password protected for some reason. Anyway, I've seen the info several places elsewhere - guess I'll have to go to the trouble of putting on my Google Goggles :).

IMHO this and Turbo Boost are cheap but effective ways to increase performance at minimal cost - Intel should be applauded for being so forward-thinking, not panned for "toy" features.
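The Turbo mechanism itself is simple to describe: the chip adds a number of 133 MHz base-clock "bins" on top of the rated frequency, with more bins available when fewer cores are active. Here is a toy sketch of that idea; the 2.93 GHz base and the bin table are illustrative numbers, not the actual bins of any shipping Nehalem SKU.

```python
BASE_MHZ = 2933   # illustrative 2.93 GHz rated frequency
BCLK = 133        # Nehalem base clock step (one turbo "bin")

# Hypothetical bin table: extra bins allowed per number of active cores.
# Real parts expose tables like this in their turbo configuration.
TURBO_BINS = {1: 2, 2: 1, 3: 1, 4: 1}

def turbo_freq(active_cores):
    """Effective max frequency (MHz) with `active_cores` cores loaded."""
    return BASE_MHZ + TURBO_BINS.get(active_cores, 0) * BCLK

turbo_freq(1)  # a lone busy core clocks two bins above the rated speed
```

The point matches the post above: the hardware is already there, so letting a lightly loaded chip spend its thermal headroom on extra clock is nearly free performance.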
 
Can the i7-based Xeons work in an X58 motherboard? The E55xx series?

I wouldn't think so - the Xeons use the 5520 chipset since they have the two QPI links. However, you can go to Intel's website and see for yourself, if you can get past the scary dude on the front page :). Balding, Spock ears, blue Star Trek uniform - reminds me of what I imagine Keith used to look like before Otellini ran over his cat :sol: .
 

Follow the "Content Catalog" from the main IDF page. Then it won't ask for a username/password.

I agree, it's just a case of "sour grapes" for the AMD diehards. There is a lot more innovation coming down the pipe on Turbo Boost ... in Westmere and Sandy Bridge.
 


Absolutely. In fact, I recall an Intel presentation on Nehalem (prior to release) stating that Turbo would have a much larger impact on CPUs that came out later, and that the initial Nehalems would have a relatively ineffective Turbo feature.