Intel says 64-bit is unnecessary

kinney

From Anandtech

Well, Intel's Chief Technology Officer seems to think that 64-bit desktop computing is not needed in the industry right now. He went on to say that AMD and Apple are getting a little ahead of themselves by releasing 64-bit chips now. Sour grapes on Intel's part, or do you think he is right? We've already got a discussion going on about this topic right here in General Hardware:

AMD and Apple are touting 64-bit computing on the desktop far too quickly, Intel CTO Pat Gelsinger said today.


Moving beyond 32-bit addressing is "really not needed for several more years", he told reporters attending the Intel Developer Forum in San Jose.


AMD, of course, isn't going to wait that long. Next week, the company will unveil its long-awaited 64-bit desktop processor, the Athlon 64. And, just a few weeks ago, Apple began shipping its Power Mac G5 desktop based on the 64-bit IBM PowerPC 970 processor.


But if Gelsinger's comments are anything to go by, Intel believes its rivals are coming to market too early.

I would have to agree with them.
BUT, it seems that whenever Intel deems 64-bit 'appropriate' for consumer use, it will suddenly gain massive popularity... which sounds like 3dfx telling us how we didn't need 32-bit color.
Your old A64 is going to be faster in the inevitable 64-bit future than your old P4 or Prescott.
So it's still a better long-term investment if you want the most longevity.
It's also funny how they have 3.2GHz P4s available; no one needs that in a consumer PC either.

I love the comments posted on AnandTech articles. You THG peeps should be reading them and slap some of that Intel preference out of your filthy little mouths!

Athlon 1700+, Epox 8RDA (NForce2), Maxtor Diamondmax Plus 9 80GB 8MB cache, 2x256mb Crucial PC2100 in Dual DDR, Radeon 9800NP, Audigy, Z560s, MX500
 
It's also funny how they have 3.2GHz P4s available; no one needs that in a consumer PC either.
Well, <i>something</i> has to be the flagship CPU that no one really needs. If the fastest were a 1.5GHz part, programs would probably not be as demanding, and then people would say no one needs 1.5GHz.
 
What would it really use?

No game breaches 4GB of address space. Only the A64's extra registers would get used, not the 64-bit width itself.
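For anyone wondering where that 4GB figure comes from, it's just pointer-width arithmetic; a quick C illustration (the numbers, not anything from a particular game):

<pre>
#include <stdio.h>

int main(void)
{
    /* A 32-bit pointer can name 2^32 distinct byte addresses. */
    unsigned long long addressable = 1ULL << 32;

    /* 2^32 bytes = 4,294,967,296 bytes = 4 GiB -- the ceiling any
       purely 32-bit program runs into, games included. */
    printf("%llu bytes (%llu GiB)\n", addressable, addressable >> 30);
    return 0;
}
</pre>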

--
<A HREF="http://www.lochel.com/THGC/album.html" target="_new"><font color=blue><b>Are you ugly and looking into showing your mug? Then the THGC Album is the right place for you!</b></font color=blue></A>
 
I would have to agree with them.

Wow, Kinney, talk about double-standards.

You desperately need a processor with 64-bit, which you will likely throw away in 2-3 years for another one, and in that timeline 64-bit won't be making you buy 4GB of RAM but rather giving you performance through the extra registers, not the actual bit width. And yet you agree with Intel's CTO...


--
<A HREF="http://www.lochel.com/THGC/album.html" target="_new"><font color=blue><b>Are you ugly and looking into showing your mug? Then the THGC Album is the right place for you!</b></font color=blue></A>
 
LoL, toss yer money away on the Athlon 64 and I am willing to say that you won't be using the 64-bit functions this year. Willing to bet next year as well. Oh, there will be benchmark software for sure, but otherwise real-world software? No.

But hey, it's not my money. I'm happy with my little 32-bit chip, happily supported and running lots of 32-bit software. Enjoy the wasteland you're about to step into...

-Jeremy

:evil: <A HREF="http://service.futuremark.com/compare?2k1=6940439" target="_new">Busting Sh@t Up!!!</A> :evil:
:evil: <A HREF="http://service.futuremark.com/compare?2k3=1228088" target="_new">Busting More Sh@t Up!!!</A> :evil:
 
There's also the problem that people assume that when Intel comes out with their desktop 64-bit chip, it'll be compatible with x86-64. If it's not (and I'm willing to bet it won't be) then this whole "adopt it now" mentality is a bit shortsighted. Are people who adopted 3DNow! when it came out enjoying the benefits of it years down the line, and is Intel regretting implementing SIMD a year or so later? Nope, the majority of things use SSE, and all those who adopted 3DNow! early are sitting there with K6-2s which are outdated anyway. Yes, you can run the 3DNow! software that came out later, just like you will be able to run x86-64 applications with a K8 later, but by the time that software comes out... your processor is obsolete and you'll need to buy a new one anyway.

"We are Microsoft, resistance is futile." - Bill Gates, 2015.
 
Someone had to break the mold. 32-bit computing will probably dominate for the next 4 years, which means a new A64 will be worthless before its new feature is worthwhile. But because that CPU exists, 64-bit programming will at least start to get a foothold in PC programs.

<font color=blue>Only a place as big as the internet could be home to a hero as big as Crashman!</font color=blue>
<font color=red>Only a place as big as the internet could be home to an ego as large as Crashman's!</font color=red>
 
But that's exactly the point. IF Intel adopted x86-64 (or publicly supported it), software *might* become widely available. Even if AMD managed to convert *all* of their market to x86-64, that'd *still* only be what, 15% of the market that x86-64 software vendors would be catering to?
And by the time Intel "joins in", they will probably have their own ISA and we'd be really no better off because x86-64 software *won't work on there anyway*. So no, they're not "breaking the mold" with x86-64, they're simply presenting a new mold which may or may not lead to what future software will use.
This is, of course, mostly Intel's fault for not adopting x86-64, but the fact is that it won't be the pleasant world of "everything is 64-bit and will work on all future processors" that people imagine.

"We are Microsoft, resistance is futile." - Bill Gates, 2015.
 
Desktops right now don't need 64-bit, that's true.

But I think this is a smart move on AMD's part. If AMD pulls this off cleanly and correctly, they will force Intel into using AMD64. If that occurs, that's a big blow against Intel, almost more important than having the fastest CPU, as long as they keep up in performance.

The most interesting thing is seeing Windows XP 64 run both 32-bit and 64-bit programs. That's a GREAT transition from 32-bit to 64-bit. It will also be interesting to see how Unreal 64 turns out.
 
And by the time Intel "joins in", they will probably have their own ISA and we'd be really no better off because x86-64 software *won't work on there anyway*.
But if the A64's 32-bit performance <i>is</i> as 'intel-bashing' as some seem to think, <i>and</i> scotty has 'issues'... then AMD might be able to increase their market share. And if they can get just a few more %, that would go a long way to encouraging software developers to develop x86-64, and might force Intel to somehow include support for it in whatever 64-bit desktop chip they produce, as they can't just completely ignore a comparatively large % of the people. I guess it just depends on how long Intel waits before introducing its own 64-bit stuff.

Just some thoughts.



---
The end is nigh.. (For this post at least) :smile:
 
I'm getting into programming as a Computer Science major, and from what little I know, programmers would rather work with 64 bits than 32 bits. More space for them to work with. But then again, working with registers is all low-level stuff (i.e. hex or opcodes), so once someone makes a decent compiler for 64-bit (and I imagine they have something close to it already, since there are 64-bit processors out there), then all the programmer needs to do is recompile his/her code with the new compiler and poof! He/she has a 64-bit program.
I'm still learning, so my opinion has probably got a few holes.
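(In practice the "recompile and poof" step is mostly true for clean code, but it can trip over hidden 32-bit assumptions. A minimal C sketch of a classic one, invented for illustration rather than taken from any real program:)

<pre>
#include <stdio.h>

int main(void)
{
    int i = 42;

    /* Classic portability bug: stuffing a pointer into an int.
       Fine when both are 32 bits; on a 64-bit target the pointer
       gets truncated, so this needs an audit, not just a recompile. */
    int squeezed = (int)(long)&i;
    printf("pointer %p squeezed into an int: %d\n", (void *)&i, squeezed);

    /* sizeof(long) and sizeof(void *) are both 4 on 32-bit x86, but
       are typically 8 under the usual 64-bit ABIs -- exactly the kind
       of silent change a recompile exposes. */
    printf("sizeof(long)=%zu sizeof(void *)=%zu\n",
           sizeof(long), sizeof(void *));
    return 0;
}
</pre>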

<font color=blue><b>Purchase object A, install object A, curse object A, repeat...</b></font color=blue>
 
LoL, Intel has a 64-bit ISA, foo: IA-64. Intel has this planned out well, trust me on this. If they feel threatened, bam, they enable Yamhill and it's all cool. But I don't think they will. x86 has years left, so I am hardly worried at this point.

-Jeremy

:evil: <A HREF="http://service.futuremark.com/compare?2k1=6940439" target="_new">Busting Sh@t Up!!!</A> :evil:
:evil: <A HREF="http://service.futuremark.com/compare?2k3=1228088" target="_new">Busting More Sh@t Up!!!</A> :evil:
 
So Kinney, the way I see it, our conversation in the GFX forum pretty much sums up how your "facts" were nothing but personal opinions, which, as I told you or at least hinted, you'd see how weak they are if you brought them here.

You just want to buy the A64 because it has 64-bit, and won't consider anything else that doesn't, yet the people here, far more educated on this than you, tell you it's absurd.

--
<A HREF="http://www.lochel.com/THGC/album.html" target="_new"><font color=blue><b>Are you ugly and looking into showing your mug? Then the THGC Album is the right place for you!</b></font color=blue></A><P ID="edit"><FONT SIZE=-1><EM>Edited by Eden on 09/17/03 12:00 PM.</EM></FONT></P>
 
<b>to no one in particular:</b>
Personally I agree with Intel. There's no real reason to go to a 64-bit processor yet. However I also think that there's no real harm in companies doing so anyway because eventually it will happen. It's just still years off before it starts to really make sense for most people.

<b>simwiz2:</b>
If the fastest were a 1.5GHz part, programs would probably not be as demanding, and then people would say no one needs 1.5GHz.
Only someone who never had to write software during the "640KB" era could give such an untrue answer. There have been several times in the past when hardware did not advance fast enough and software authors pulled their hair out trying to optimize their code to squeeze every last bit of performance out of the hardware that was available. Only because of the Intel/AMD speed war a few years ago did the advancement of hardware actually outrun the advancement of software. Oh, how cynical people have gotten in just a couple of years. 🙁

<b>endyen:</b>
Gaming should be able to make use of the 64 bit, guess Intel doesn't consider this a good enough reason.
<i>Gaming</i> can (and usually does) make use of <i>any</i> optimization. If Intel were to double the number of 32-bit general-purpose registers in their x86 CPUs you'd see just as much of a performance gain. Hardly any of these optimizations actually use 64-bitness. They're almost entirely just using the extra general-purpose registers for 32-bit math.
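(To make that concrete, here's a rough sketch, made-up code rather than anything from a real game, of the kind of inner loop where the win comes purely from register count:)

<pre>
/* Eight 32-bit values stay live across the whole loop. A compiler
   targeting classic x86 (8 GPRs, some of them reserved) will likely
   have to spill a few of them to the stack each iteration; with
   x86-64's 16 GPRs they can all stay in registers. Same 32-bit
   math, fewer memory trips. */
unsigned int mix(const unsigned int *p, int n)
{
    unsigned int a = 1, b = 2, c = 3, d = 4,
                 e = 5, f = 6, g = 7, h = 8;
    for (int i = 0; i < n; i++) {
        a += p[i]; b ^= a; c += b; d ^= c;
        e += d;    f ^= e; g += f; h ^= g;
    }
    return a ^ b ^ c ^ d ^ e ^ f ^ g ^ h;
}
</pre>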

<b>TomV:</b>
But isn't a 64-bit operating system needed to go along with the CPU? Which is still a "bit" in the future...
There have been AMD64 versions of Linux and beta AMD64 versions of Windows for a little while now.

<b>imgod2u:</b>
There's also the problem that people assume that when Intel comes out with their desktop 64-bit chip, it'll be compatible with x86-64. If it's not (and I'm willing to bet it won't be) then this whole "adopt it now" mentality is a bit shortsighted.
I completely agree. Granted, someone has to pave the way still, but for most people it's a waste.

I'd also be highly willing to bet that when Intel comes out with x86-64 (and it probably will happen unless they jump straight to IA64) it'll be with even more GPRs (probably a <i>lot</i> more) and will be a much more idealistic (pragmatic?) extension of the instructions, so that low-level code <i>will</i> have to be completely ported to fully utilize 64-bitness. (Thus preserving the 16-bit addressing even while simultaneously using 64-bit instructions.) I'd bet Intel would also release compilers that make this transition easy and possibly even transparent if you just want the lazy porting that AMD is pushing.
but by the time that software comes out... your processor is obsolete and you'll need to buy a new one anyway.
Funny how so many people miss the obviousness of this. :O
And by the time Intel "joins in", they will probably have their own ISA and we'd be really no better off because x86-64 software *won't work on there anyway*.
I don't agree. It'll probably end up more like the various pixel shader standards in graphics cards. Once the incompatibilities of the standards occur, software will just have to add a layer of abstraction pinned in place by lower-level, standard-specific code branches. It'll be a pain in the arse, but compatibility will remain. It'll just mean that if the newer ISA is more robust (which I'm betting it would be) then there will be a noticeable performance difference that will entice people to upgrade to the new ISA. But software will find a way to preserve compatibility.
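(That abstraction layer is basically the runtime CPU-dispatch trick, roughly how code already picked between 3DNow! and SSE paths. A minimal C sketch, with the detection function and path names invented for illustration:)

<pre>
#include <stdio.h>

/* Two builds of the same routine, one per hypothetical 64-bit
   flavour. Here the bodies are identical; in real life each would
   be compiled or hand-tuned for its own ISA. */
static long sum_amd64_path(const int *p, int n)
{
    long s = 0;
    for (int i = 0; i < n; i++) s += p[i];
    return s;
}

static long sum_other_path(const int *p, int n)
{
    long s = 0;
    for (int i = 0; i < n; i++) s += p[i];
    return s;
}

/* Stand-in for a real CPUID-style feature check. */
static int cpu_supports_amd64(void) { return 1; }

int main(void)
{
    /* Pick the code path once at startup; everything else calls
       through the pointer and never cares which ISA won. */
    long (*sum)(const int *, int) =
        cpu_supports_amd64() ? sum_amd64_path : sum_other_path;

    int data[] = { 1, 2, 3, 4 };
    printf("%ld\n", sum(data, 4));
    return 0;
}
</pre>

It costs an indirect call, but it keeps one binary compatible with both camps.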

<b>Crashman:</b>
Someone had to break the mold. 32-bit computing will probably dominate for the next 4 years, which means a new A64 will be worthless before its new feature is worthwhile. But because that CPU exists, 64-bit programming will at least start to get a foothold in PC programs.
I pretty much agree. The only difference is that I would say that it's the 64-bit mentality of customers that will get a foothold, and software will slowly be forced to comply with that in order to keep the sales figures up.

<b>TknD:</b>
If AMD pulls this off cleanly and correctly, they will force Intel into using AMD64.
No, they won't. If anything it will convince Intel to release their own flavour of x86-64 that probably <i>won't</i> be compatible with AMD64. However, since Intel has so much invested in IA64, I'd dare say that Intel is working feverishly on finding a good way to crossbreed that with IA32.

<b>ChipDeath:</b>
then AMD might be able to increase their market share. And if they can get just a few more %, that would go a long way to encouraging software developers to develop x86-64, and might force Intel to somehow include support for it in whatever 64-bit desktop chip they produce, as they can't just completely ignore a comparatively large % of the people.
Close, but no tequila. As far as CPU-specific optimizations go, software developers fall into two categories: those who do, and those who don't. Those who do will do so anyway because they have the resources to invest that many additional man-hours. Games usually fall into that category. Those who don't won't do so unless there's a majority of market share involved (i.e. 50%). So just a few more percent for AMD won't change the number of programmers optimizing, because either they already plan to optimize or they already plan not to waste resources supporting it.

However, you're right on the Intel side. Consumers will decide that 64-bit matters, whether or not it actually does anything for them. Hype and a market driven by ignorance are a dangerous combination. So eventually Intel's marketing will lean on the engineers enough to make something happen, just so that Intel doesn't lose too many sales.

<b>ImpPatience:</b>
I'm getting into programming as a Computer Science major, and from what little I know, programmers would rather work with 64 bits than 32 bits. More space for them to work with.
If anyone tells you this, punch them in the nose! That's <i>soooooooo</i> not true. Programmers want more general-purpose registers. Programmers do, however, already have access to 64-bit integers by using a 32-bit low half and a 32-bit high half. It's slow, but it's also hardly ever needed. If suddenly all integers were 64-bit by standard, though, it'd drive most software engineers bonkers, as their software's memory usage would have just doubled for no good reason!
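(For the curious, a minimal sketch of that low-half/high-half trick in C; the struct and names are made up for illustration:)

<pre>
#include <stdio.h>

/* A 64-bit integer faked with two 32-bit halves, the way 32-bit
   code has always done it. */
struct u64pair { unsigned int lo, hi; };

static struct u64pair add64(struct u64pair a, struct u64pair b)
{
    struct u64pair r;
    r.lo = a.lo + b.lo;
    /* If the low half wrapped around, carry one into the high half. */
    r.hi = a.hi + b.hi + (r.lo < a.lo);
    return r;
}

int main(void)
{
    struct u64pair a = { 0xFFFFFFFFu, 0 };  /* 2^32 - 1 */
    struct u64pair b = { 1, 0 };
    struct u64pair c = add64(a, b);         /* exactly 2^32 */
    printf("hi=%u lo=%u\n", c.hi, c.lo);    /* prints hi=1 lo=0 */
    return 0;
}
</pre>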

Further, in <i>most</i> cases where you need more range than a 32-bit integer can provide, you're using floating-point math already. So all of this makes the actual usefulness of 64-bit integers pretty slim.

<pre><A HREF="http://ars.userfriendly.org/cartoons/?id=20030905" target="_new"><font color=black>People don't understand how hard being a dark god can be. - Hastur</font color=black></A></pre><p>
 
I'd also be highly willing to bet that when Intel comes out with x86-64 (and it probably will happen unless they jump straight to IA64) it'll be with even more GPRs (probably a lot more) and will be a much more idealistic (pragmatic?) extension of the instructions, so that low-level code will have to be completely ported to fully utilize 64-bitness. (Thus preserving the 16-bit addressing even while simultaneously using 64-bit instructions.) I'd bet Intel would also release compilers that make this transition easy and possibly even transparent if you just want the lazy porting that AMD is pushing.
Erm, no. Increasing the register count would mean you'd have to add yet *another* processor mode and more x86 prefixes to your instructions. This will not only cause software developers a load of pain, it'll also be very expensive to implement on a chip, since it has to support *both* x86-64 mode *and* the mode that supports 32 GPRs.
As for Intel releasing an x86-64 compatible chip: highly unlikely. Unless, of course, they're willing to just throw IA-64 out the window, which they're not likely to do. More likely they'll just improve their emulation of x86 (having that marvelous Alpha development team work on it) and release a low-priced, high-performing IA-64 chip.

I don't agree. It'll probably end up more like the various pixel shader standards in graphics cards. Once the incompatibilities of the standards occur, software will just have to add a layer of abstraction pinned in place by lower-level, standard-specific code branches. It'll be a pain in the arse, but compatibility will remain. It'll just mean that if the newer ISA is more robust (which I'm betting it would be) then there will be a noticeable performance difference that will entice people to upgrade to the new ISA. But software will find a way to preserve compatibility.
There is a *huge* difference between various pixel shaders and a CPU ISA. The pixel shader standards are high-level standards, i.e. they're guidelines you use in DX programming. At the lower level, you have the driver interpreting these commands and compiling them to the proprietary ISA (which, as I recall, is VLIW) of the GPU. CPUs do not have that luxury (unless you're running an interpreted language like Java or .Net). The software has to access the ISA directly, and putting support for *that* many ISAs on the processor itself, instead of in a software emulation layer, would be *very* expensive and infeasible.
Now, with Intel's recent adamant move of bringing high-performance x86 emulation to IA-64, it's possible they may throw x86-64 support in there, but why would they want to support something that'll directly compete with IA-64? Did SSE-supporting processors support 3DNow!, or did Intel simply tell software developers "I'm big, support me or die, drop 3DNow!"? Well, how many 3DNow!-optimized applications do you see out there now?

Close, but no tequila. As far as CPU-specific optimizations go, software developers fall into two categories: those who do, and those who don't. Those who do will do so anyway because they have the resources to invest that many additional man-hours. Games usually fall into that category. Those who don't won't do so unless there's a majority of market share involved (i.e. 50%). So just a few more percent for AMD won't change the number of programmers optimizing, because either they already plan to optimize or they already plan not to waste resources supporting it.

I would say it's the exact opposite. Game developers are usually more constrained by time-to-market than by optimization. Yes, they do a lot of high-level optimization in DX to run better on various video cards (this has only been a recent thing, though), but as far as low-level CPU optimization goes, well, how many SSE2-supporting games do you see out there?
Now consider Lightwave or Photoshop.

"We are Microsoft, resistance is futile." - Bill Gates, 2015.
 
Erm, no. Increasing the register count would mean you'd have to add yet *another* processor mode and more x86 prefixes to your instructions. This will not only cause software developers a load of pain, it'll also be very expensive to implement on a chip, since it has to support *both* x86-64 mode *and* the mode that supports 32 GPRs.
Depending on how it's implemented, it might not cause software developers any pain at all. It certainly doesn't have to be as painful as the 16-bit-to-32-bit transition was. (And even then the port was pretty easy for lazy programmers, if you didn't mind wasting a lot of memory to do the same thing.)

However, what if Intel implemented x86-64 in a way where all of the 32-bit and 16-bit instructions and register names still worked exactly as they do now, the 64-bit registers were just added to the list of available registers, and the 64-bit instructions followed a similar naming convention with just a small spelling change? It could easily be done in such a way that all old code would still run exactly as it did before, with 64-bit processing just being an extension.

As for Intel releasing an x86-64 compatible chip: highly unlikely. Unless, of course, they're willing to just throw IA-64 out the window, which they're not likely to do. More likely they'll just improve their emulation of x86 (having that marvelous Alpha development team work on it) and release a low-priced, high-performing IA-64 chip.
I don't know. In a perfect world I'd agree with you, but in reality I'm not sure how feasible that is. The Itaniums don't have the clock speed needed to execute x86 well. Itanium's design is <i>too</i> dependent on compiler-scheduled parallelism to emulate the linearity of x86 in a way that <i>wouldn't</i> piss customers off when their new hybrid chip ran much slower than a top-notch P4.

Of course, if they could just find a way to add the IA64 instruction set to a P4 in a way that wouldn't totally suck (as in 64-bit execution that at least matched 32-bit execution despite the architecture differences), even if it performed worse than an Itanium it'd still have potential. (So I guess that'd be the reverse of what Intel is trying now... emulating the Itanium side instead of emulating the P4 side.)

There is a *huge* difference between various pixel shaders and a CPU ISA. The pixel shader standards are high-level standards, i.e. they're guidelines you use in DX programming. At the lower level, you have the driver interpreting these commands and compiling them to the proprietary ISA (which, as I recall, is VLIW) of the GPU. CPUs do not have that luxury (unless you're running an interpreted language like Java or .Net). The software has to access the ISA directly, and putting support for *that* many ISAs on the processor itself, instead of in a software emulation layer, would be *very* expensive and infeasible.
Again, that depends on exactly <i>how</i> it was done. Depending on how the two different x86-64 ISAs were written, it <i>could</i> be possible that the primary differences are just in the number of GPRs and the instructions themselves. In which case it's just a simple remap layer to make code from one of the standards run on the other. I never said that it wouldn't be a pain in the arse or that it wouldn't have the possibility of being a minor performance hit. But considering just how few x86 commands are literal to the CPU anymore, it's hardly crazy to consider it a feasible possibility.

Hell, if Transmeta were to design a 64-bit processor, the world could be a very scary place. (And if they were to team up with any other processor manufacturer to use the combined technical know-how to boost performance, it'd be even scarier.) So don't tell me that it's not possible or even feasible. It's just not a normal mode of thought, is all.

Now, with Intel's recent adamant move of bringing high-performance x86 emulation to IA-64, it's possible they may throw x86-64 support in there, but why would they want to support something that'll directly compete with IA-64? Did SSE-supporting processors support 3DNow!, or did Intel simply tell software developers "I'm big, support me or die, drop 3DNow!"? Well, how many 3DNow!-optimized applications do you see out there now?
That's hardly a fair comparison. 3DNow! wasn't an Intel product. It was a direct competitor to a standard that they had already been working on, so of course Intel wasn't going to support it.

A new x86-64 ISA from Intel, however, could easily coexist with IA64. If it didn't perform 'as' well as IA64, then there'd still be a justification for IA64. I mean sure, it might eat a chunk out of the low-end Itanium market, but both are still money in Intel's pocket. And since flops are still x87 even on an x86-64 CPU, Itanium has nothing to fear, since it's the flops king and the P4 isn't.

I would say it's the exact opposite. Game developers are usually more constrained by time-to-market than by optimization.
What world do you live in? Games are hardly ever concerned about time-to-market. And ever since 3D hardware became common on home PCs, game development has <i>always</i> been about optimizing for as many different standards as possible. Until DX and OpenGL became mainstream, audio and video <i>had</i> to be written for numerous different paths. And what game <i>isn't</i> highly optimized? I'm sorry, but you're so incredibly wrong on this one. The game market is driven much more by extreme optimization on as many pieces of hardware as possible than it is by time-to-market. The engines themselves take <i>years</i> to write.

Yes, they do a lot of high-level optimization in DX to run better on various video cards (this has only been a recent thing, though), but as far as low-level CPU optimization goes, well, how many SSE2-supporting games do you see out there?
Do you even remember the MMX frenzy that gamers and game coders whipped themselves into? Do you know of a single game released recently that will run on a processor that doesn't support at least that if not 3DNow!/SSE?

Now consider Lightwave or Photoshop.
What about them? Photoshop is a perfect example of software optimized far more for a Mac than for a PC, and Lightwave is a perfect example of software optimized far more for a P4 than for an Athlon. Two perfect cases where development was targeted at the platform of the majority user base and <i>not</i> optimized well for any other platforms. You couldn't have handed me more perfect examples of my point if you'd tried.

<pre><A HREF="http://ars.userfriendly.org/cartoons/?id=20030905" target="_new"><font color=black>People don't understand how hard being a dark god can be. - Hastur</font color=black></A></pre><p>
 
I don't know. In a perfect world I'd agree with you, but in reality I'm not sure how feasible that is. The Itaniums don't have the clock speed needed to execute x86 well. Itanium's design is too dependent on compiler-scheduled parallelism to emulate the linearity of x86 in a way that wouldn't piss customers off when their new hybrid chip ran much slower than a top-notch P4.

Of course, if they could just find a way to add the IA64 instruction set to a P4 in a way that wouldn't totally suck (as in 64-bit execution that at least matched 32-bit execution despite the architecture differences), even if it performed worse than an Itanium it'd still have potential. (So I guess that'd be the reverse of what Intel is trying now... emulating the Itanium side instead of emulating the P4 side.)
Consider this: a dual-core chip. One core built on the IA64 architecture and the other built on IA32 (Vanderpool Technology?).



<font color=white>---</font color=white>
Wanted: Large breasted live-in housekeeper. Must be a good cook, organized, and willing to pick up after me.
 
No, it's not a double standard.
They may be right, but that doesn't really mean the Prescott is going to be a better buy than the A64.

Athlon 1700+, Epox 8RDA (NForce2), Maxtor Diamondmax Plus 9 80GB 8MB cache, 2x256mb Crucial PC2100 in Dual DDR, Radeon 9800NP, Audigy, Z560s, MX500
 
Linux already supports the x86-64 extensions.
64-bit windoze is supposed to be out by Xmas.

Athlon 1700+, Epox 8RDA (NForce2), Maxtor Diamondmax Plus 9 80GB 8MB cache, 2x256mb Crucial PC2100 in Dual DDR, Radeon 9800NP, Audigy, Z560s, MX500
 
I will also be using 32-bit software.

Then if there is a 'killer app' or blowout must-have 64-bit program out there, I will be able to use it.
Happily.
:smile:

Athlon 1700+, Epox 8RDA (NForce2), Maxtor Diamondmax Plus 9 80GB 8MB cache, 2x256mb Crucial PC2100 in Dual DDR, Radeon 9800NP, Audigy, Z560s, MX500
 
Comparing 64-bit computing to 3DNow! is not apples to apples.
So, don't buy a DX9 video card now because there are no games out for it?
I doubt anyone would recommend such a thing.

In the future, you might not be able to run software on a 32-bit-only processor at all.

Obsolete or not, at least the thing will be able to run 64-bit Linux or the upcoming 64-bit Windows effectively.

Athlon 1700+, Epox 8RDA (NForce2), Maxtor Diamondmax Plus 9 80GB 8MB cache, 2x256mb Crucial PC2100 in Dual DDR, Radeon 9800NP, Audigy, Z560s, MX500