It seems you are telling me running a yorkfield at 1066 is an almost non-issue. You are just checking BW.
Is the 680i with the 1333 FSB more future-proof than the P965? I am not talking about the PCIe 16x slots. I know I am thinking way too far ahead.
Most definitely.....
I am also intrigued by the two-socket approach in 2008. It would appear they are doing one socket at a 1366 pin count (as I recall) and one at a 711 pin count (again, as I recall); actual pin counts may vary, and you probably have better info on this in your roadmap thread.
Anyway, it looks like Intel will mask out two forms of Nehalem, one with an IMC and one without --- if this is true, they get to have their cake and eat it too with respect to memory flexibility: the IMC version dedicated to one memory type, giving the best performance at the time, and a non-IMC version that allows flexible adoption of memory technology as it evolves.... if this is true, it is an interesting tack.
Jack
Is the 680i with the 1333 FSB more future-proof than the P965? I am not talking about the PCIe 16x slots. I know I am thinking way too far ahead.
This I do not know.... I would guess it depends on whether Intel releases the BIOS code to the older chipsets to recognize the chip and enable the boot strap.
Good point --- it would come down to VRM and socket -- Bearlake is unofficially supporting older P4 so I suspect it will be a question of voltage.
Penryn will just be a die shrink of the C2D architecture with some added SSE4 instructions.
Conroe is only 10-15% better than K8 OVERALL!!!
levicki said: Why did Microsoft opt for AMD's 64-bit instruction set AND processors for current and future development of apps?
Umm, because Core 2 Duo wasn't around when they started writing 64-bit code?
Intel's EM64T was around with the Prescott, which was well before the Core 2 Duo and not that much after the Opteron first shipped. Intel figured, correctly at the time, that the only place 64-bit computing was needed was in HPC environments, where 64-bit CPUs like the Power, MIPS64, Alpha, and Itanium were already being sold. Intel thought that demand for large memory addressability would filter down from HPC to medium business to small business, and finally to you and me, and that 64-bit CPUs would follow it. Thus the Itanium would displace the 32-bit ix86 CPUs when 64 bits were needed on the desktop. Since your apps and OS would need to be 64-bit anyway, reverse compatibility with ix86 could be abolished and few would care.
AMD threw a spoke in their plans by offering reverse-compatible 64-bit x86 CPUs aimed at the medium business market, where 64 bits was not currently but would be needed soon. This let companies use current 32-bit software and OSes and then upgrade to 64-bit ones later when it was needed. MSFT saw this as a better business opportunity as well as a way to try and expand Windows up into the larger-server realm dominated by UNIXy OSes. Thus they went with AMD's x86_64.
Speaking of it, if Microsoft hadn't stuck with AMD for the 64-bit platform, they would have had to invest in some real code porting to IA64 and thus lose their advantage over IBM, Sun, and Oracle in server software, but we would have gotten far better software and a better platform.
Windows XP was actually released for the Itanium, so MSFT already did some porting. Since they got XP released on IA64, it would not have been too difficult to port Office or their other programs over.
AMD's weak developer support is exactly the reason why 64-bit XP and applications haven't taken off even though it has been 4 (four!) years since first AMD64 chip introduction.
It has nothing to do with AMD's developer support. You forget that Intel's EM64T is compatible with AMD64 and that Intel is pushing EM64T also. The reasons that 64-bit Windows have not taken off are the following:
1. Few users had any need for more than 3-3.5 GB of RAM.
2. Since #1 was true, few had a pressing need for the OS and stuck with Windows XP 32-bit. This did not put pressure on the application vendors and driver writers to make 64-bit code.
3. #2 discouraged people who might otherwise have tried out a 64-bit OS for the speed enhancements, or "just because," from doing so.
4. Everybody knew that Windows Vista was going to be available in 64-bit, and its original release date, which kept getting pushed back, led people not to buy the "bastard" XP x86_64.
5. All four of these things worked as a self-reinforcing loop to keep everybody on 32-bit XP.
Finally, AMD64 is a stupid name; EM64T better describes what it is all about (Extended Memory 64 Technology), and it doesn't include the vendor name. Btw, those are not 64-bit instructions but extensions, just like 32-bit was an extension of 16-bit in the past. I am still pissed off because AMD got the laurels for that quick hack and for breathing life back into a dead horse (x86), as well as helping Microsoft keep its software monopoly.
What do you think the "i" in the i386/i486/i586/i686 architectures stands for? Or the "I" in Itanium's IA-64 (the latter is not "Itanium," I'll spot you that)? The spec is also generally referred to as "x86_64," which has no vendor name at all and says *exactly* what it is- 64-bit extensions to x86.
x86_64 was a quick hack, yes. But it was actually a pretty elegant hack, as it allowed compatibility with current 32-bit x86 OSes and programs as well as the ability to run new 64-bit code. One can run 32-bit code on an x86_64 chip under a 64-bit OS at about the same speed as under a 32-bit OS. That is a major advantage, and one that just about everybody using a 64-bit OS today relies on and most people tomorrow will rely on as well. It's much better than emulating a CPU of an incompatible architecture- go ask an Itanium owner how 32-bit x86 code runs on it.
x86 really doesn't have that much to do with keeping MSFT an OS monopoly, beyond perhaps being able to run legacy apps. That isn't trivial, but it alone does not keep an OS a monopoly. x86_64 in 64-bit long mode is a different architecture than 32-bit x86. The 64-bit versions of Windows- both XP x86_64 and Vista 64-bit- require new drivers and libraries to be written by third-party developers. This means that people have a significant migration to make from 32-bit Windows to 64-bit Windows, and many may consider moving to a different OS because that would likely not cost much more to migrate to.
Also, MSFT has compiled Windows NT variants for many different CPU architectures: 32-bit x86, 32-bit PowerPC, ARM9, IA64, and x86_64. I'm sure that if the platform of choice migrated to something other than x86, it would not be too hard for them to recompile their applications for it. They have done it in the past, and other OSes (especially Linux) do it right now with little trouble. MSFT has factors other than which CPU architecture we all run that keep them entrenched. I'll not get into that here unless you want to; then I can go on all night.
levicki said: Oh yes it has. Had AMD been more capable, a 64-bit OS would have been released sooner. If AMD had offered a 64-bit compiler, developers would have tried to port their code and realized the potential benefits sooner.
AMD chose to work with the makers of existing compilers- Microsoft for the VC++ compiler, the Free Software Foundation for gcc- rather than making their own compiler a la Intel. I'd be willing to bet that this was not only faster and easier to do but that it would have had a better end product than an AMD compiler. A good compiler is tricky to make, and software generally works best with other software that is designed for and compiled with the same toolchain. Try to install Gentoo or LFS using gcc on a Linux box and tell me how that goes. Then try it with icc. Tell me which one performs better or even compiles completely.
It was only when Intel's high-volume manufacturing machinery started pushing EM64T-capable Celerons to the masses that the popularity and awareness of 64-bitness started to grow.
It wasn't specifically Intel making 64-bit Celerons that made people generally aware of 64-bit OSes but relatively inexpensive lines of 64-bit chips as a whole that did. That would include P4 Prescotts as well as Celerons. Of course AMD's Athlon 64, Turion 64, and Sempron 64 were also relatively inexpensive 64-bit CPUs that were widely used as well. About the only chips sold in the last few years that weren't 64 bit were a few Semprons and the Pentium M/Core 1 lineage.
Please, do not mix things up. I do not dispute the term AMD64 as an architecture name (although I believe the architecture name is K8, or Hammer).
ix86 is the ISA name, not the CPU name, at least in later incarnations such as i586 and i686. Intel never called the CPU that used the i586 ISA the i586- it was called the Pentium, or internally the P5, P54, P54C, or P55. Likewise, no i686 ISA chip was ever called the i686, either. That architecture was called P6, and the CPUs were referred to internally by code names like "Klamath" and "Banias" or by their market names.
This is also borne out by the fact that AMD chips run on the ix86 ISA too. The K5 through the non-XP Athlons were i586, and the Athlon XP was i686.
I only say that 64-bit instruction set extensions should not be called AMD64 just like SSE2 is not called IntelSSE2.
Just as the SSE implementations have dropped the little "i" in front of them to just become "SSE," 64-bit x86 is generically called "x86_64" as that incorporates both AMD64 and EM64T. So if you don't like to refer to the 64-bit ISA on AMD processors as "AMD64" say "x86_64" instead and it's the exact same thing.
AMD calls it "long mode" in their docs. Because of that non-imaginative and non-descriptive term, developers coined AMD64 and x86_64. I must agree that x86_64 is better.
Long mode simply refers to the mode in which the CPU operates when it executes 64-bit code as opposed to 32-bit code (protected mode) or 16-bit code (real mode.) It does not refer to the specific ISA being used at all. We didn't call the various flavors of 32-bit x86 ISAs all "protected mode," did we?
However, all the names are fundamentally wrong because you neither have a radically new architecture (it is still x86 under the hood) nor do you really have a 64-bit address space (physical or virtual).
The address space on almost all 32-bit chips isn't strictly 32 bits either- with PAE the physical address space is 36 bits. But it's still a 32-bit chip. The bit number refers to the width of the general-purpose registers, and thus of native integers and pointers- not to memory addressability. It just means that, without tricks like 36-bit PAE, UP TO 16 EB of address space can be mapped with one register. It doesn't mean that the chip has to be able to address that much memory.
I disagree with the "elegant" part. Adding yet another instruction prefix (0x48 or REX) means a penalty for instruction decoding because instruction lengths change.
I meant "elegant" because of how it solved the problem and its execution. x86 is not a particularly clean or simple ISA, especially compared to some RISC ISAs. But AMD's approach cleaned up the ISA some and allowed for pretty seamless transitions between 32- and 64-bit code. That's needed in the real world. Of course a brand-new, clean-slate 64-bit ISA would be more elegant from a design view. But it would be horrible from an implementation view.
The result of that, as well as of the bigger footprint of the immediate operands, is that 64-bit code can run several percent slower than the exact same 32-bit code under a 64-bit OS. That is absurd because you are actually being penalized for making native 64-bit applications.
Most benchmarks that I've seen show 64-bit code running faster than 32-bit code, especially for applications that do a lot of math like encoders. This is somewhat due to the 387 FPU being shut off and SSE being used instead. But there are also more registers available in 64-bit mode than 32-bit mode, and that certainly does not hurt.
Note that I am speaking about ports of already highly optimized code; any other (sloppy) code ported to 64-bit will automatically benefit through the (obligatory) use of SSE2 instead of the legacy FPU, so the penalty would in most cases be covered.
Code that is highly optimized for anything generally won't recompile for a different target and keep the same performance. But if it was highly optimized code as opposed to sloppy code in the first place, don't you think that optimizing for the 64-bit target would reduce a lot of the bloat? For example, if a variable only needs 8 bytes, it should be ported to a plain double in 64-bit mode rather than kept as a long double, which gets padded out to 16 bytes there.
However, those who still write critical parts of their code in assembler, like I sometimes do, will have a hard time matching 32-bit performance in 64-bit code.
I tried very hard, and the best I could manage was code only 6.5% slower. The Intel compiler's 64-bit code (and I am talking about the same thing I wrote in assembler) lags 8.3% behind the 32-bit version.
On the other hand, MSVC gains 9.5% in 64-bit code vs. 32-bit code because it doesn't use the legacy instruction mix anymore, but it is still 4.33x slower than my assembler code and 3.96x slower than the code generated by the Intel compiler.
I've not done any assembly language programming, so I'll take your word. But if you write a program and want to run it in 32-bit mode or even on a 32-bit OS, new x86_64 CPUs can still run it. If there was a separate, incompatible 64-bit ISA that was slower, then you'd either have to put up with the theoretical performance hit and run it on new 64-bit chips or dig out older, slower 32-bit CPUs with the faster ISA. You get a decent bit of both worlds with the x86_64 implementation- that's why it's not so bad.
It is not just about the OS. It is about applications such as Office and server software such as IIS, ISA, MSSQL, Exchange, etc. Not having to port all that, but instead just recompiling it, is a great advantage over other vendors. Having to spend much more time and money to make and test a proper port would create an opportunity for others to jump in.
As far as I have seen and experienced, source code is generally only specific to the kind of OS it will run on (e.g. Windows or UNIX) rather than the CPU type or bit width. I have written programs that I've compiled on a range of different machines, from my 64-bit x86 desktop to my 32-bit x86 laptop to PowerPC Macintoshes. They compiled fine on all of them, and the only thing that changed was what the compiler introduced support for upon compiling, such as SSE, AltiVec, 64-bit code, etc. The binaries are not compatible with each other, with the exception that the 32-bit x86 binary will run on the 64-bit x86 machine. So it seems that a recompile is basically all that's needed to make a new-architecture version of most every program out there. The result won't necessarily be as optimized for the new architecture, but it will run, and probably at least decently. If you've used Microsoft applications before, efficiency is NOT something they care strongly about anyway.
I believe that a CPU architecture is, for example, NetBurst vs. Core, or K7 vs. K8, but not 32 vs. 64. It is true that in AMD's case 32->64 coincides with an architecture change (K7->K8), but that doesn't mean that x86_64 is a new architecture.
As far as having something compiled goes, an architecture is the ISA target, not the actual hardware micro-architecture of the chip. I should have been more explicit with this.
Woo!
Early next week, the company is expected to provide more details on its first 45 nm processors.
This is essentially aimed at your last bit - calling AMD's 64-bit implementation (and its name) "stupid" is a little off.
Blaming the lack of developer support in the consumer market for 64-bit on AMD (or Intel, for that matter) is silly. AMD64 was developed for servers, where 64-bit apps are far more prevalent, then ported over to desktops.
I also must say: stop bashing AMD, and give them credit for the foresight and technical achievement in their execution and delivery of 64-bit extensions ahead of Intel in the x86 market.
Also, AMD's 64-bit extensions are more efficient than Intel's.
XP64 isn't nearly as bad as people are saying. The driver support wasn't there initially, without question, but from everything I've heard it has much better support now.
I'd be willing to bet that this was not only faster and easier to do but that it would have had a better end product than an AMD compiler.
Try to install Gentoo or LFS using gcc on a Linux box and tell me how that goes. Then try it with icc.
I don't see what the downside is, besides not being able to sell developers compilers that are of limited use.