Intel server CPU roadmap updated

I'm not saying you're wrong; I'm simply saying you don't quite understand what you are saying.

You need to update your knowledge. MIPS + SGI haven't been used for render farms for 4-5 years. It's mostly Xeons and Opterons now due to cost, and almost 100% Linux (but Windows on the workstations). I recall Pixar using 64-bit SPARC chips until 4-5 years ago too - I remember passing by the dark, cooled room. Each machine had 13 CPUs and 13GB of RAM (back in 1998). Nowadays, that's nothing.

Languages that compile on the fly might be easier to debug and are generally small in footprint, but they just don't fit the current software environments we are all working in.

In the end, languages that need to be compiled - like C++, C#, and I guess FORTRAN falls in there - are going to be the languages that dominate for, I dare wager... forever.

Frankly this is BS. Compiled languages will be around forever, yes, but developer time is expensive and software complexity grows and grows. Writing assembly used to be common, but now it's only for very specialized code.

Python, Perl, Java, VB, etc. have all made great strides in software development, proving themselves highly useful in many areas - especially for web server apps, where they are "mission critical." It's not unusual to see a web server with 2, 4, or 8 CPUs - what Itanium was designed for. Except then you'd need to write all your web server code in a compiled language (uh, no way).

If you look at the state of video games, more and more game engines are using scripting languages - Lua, Python, C variants, etc. Utilizing VLIW CPUs with these "dynamic" languages would require rewriting all their implementations to be VLIW-friendly. No thanks.

EPIC depends heavily on its compiler to organize code and data for ideal execution; no drastic changes to the software have to be made other than 64-bit-safe code, such as correct pointers, constants, and API support. Otherwise the compiler does a very good job of organizing everything.

You are quoting marketing spiel. You are also relying on all the compilers in the world (currently designed for RISC + CISC CPUs) to magically turn non-parallelized code into something EPIC loves? Um... good luck. It's not easy, and that's why Itanium is tanking and will keep tanking. VLIW isn't bad; it's just too drastic a change for zero benefit - except in the high-end computing market.

I'm not saying you're wrong (with respect to compilers); I'm simply saying you don't quite understand what you are saying.

That didn't answer your initial question of why MIPS is disappearing from the market. That was you trying to make a point about something I already openly admitted to knowing nothing about: MIPS's downward spiral in the industry.

So if, as you say, interpreted languages are only good for web applications, why are you trying to argue with me on that point, since we both agree?

Great strides I can agree with when it comes to security and API support, but I cannot agree that they have moved beyond web application software.

As well, your information regarding the machine arrays the Itanium was designed for is incorrect. They were meant for 16+ arrays; you have mistaken them for the Xeon array targets.

And finally, games have nothing to do with server CPUs.
 
This whole debate reminds me of FORTRAN vs. C/C++.

FORTRAN - great for numerical computing (it has many advanced math constructs built in) and parallelizable(?), but bad for general-purpose software development. Itanium has similar properties.

C/C++ - great for general-purpose computing, but extra libraries are needed for numerical computing, and there are difficulties parallelizing the code, so it's not the best for numerical work. x86/POWER5/MIPS/Alpha/SPARC are like C/C++.

If you look at the state of GPUs (by Nvidia + ATI), a similar thing is happening. Originally they were non-branching, brute-force vector processors, but now they are closer to general-purpose CPUs.

Itanium will die after another $20 billion of investment, with still nobody using it.

You seem very confused. C++ and FORTRAN are computer languages; the Itanium is a processor.

Additionally, ILP is something the compiler takes care of, not the particular language being used to write the software.

In fact, neither of your examples illustrates anything constructive to the conversation, and I am a bit confused as to where you think you are taking this, since it sounds like you aren't even remotely acquainted with the Itanium, let alone compilers or coding.
 
So if, as you say, interpreted languages are only good for web applications, why are you trying to argue with me on that point, since we both agree?

No, my intention was to say interpreted languages are the future - I didn't believe this until maybe a year or two ago. This is why I mentioned game engines (having worked on Xbox + PS2 consoles myself). Interpreted code has become acceptable even in high-performance real-time areas.

As well, your information regarding the machine arrays the Itanium was designed for is incorrect. They were meant for 16+ arrays; you have mistaken them for the Xeon array targets.

Sorry, I have no idea what you are talking about - machine arrays? SMP multiprocessor setups? Itaniums were designed to be workstations too - I worked on one, and it weighed more than 4 PCs combined.
 
You seem very confused. C++ and FORTRAN are computer languages; the Itanium is a processor.

Yes, but I was trying to draw an analogy between general-purpose languages and CPUs vs. specialized languages and CPUs (most VLIW CPUs like the Itanium tend toward specialized apps - and are too hard to optimize for in the general case). Most of the world is going back to general-purpose CPU architectures with some extra instructions for specialized apps. GPUs are a big one I see going in that direction, and physics CPUs will too.

Additionally, ILP is something the compiler takes care of, not the particular language being used to write the software.

I have a Brooklyn Bridge I'd like to sell you too. Obviously you are not a developer. To take advantage of the Itanium, all these compilers and interpreters have to be tweaked (or rewritten) to be EPIC-friendly (non-trivial). Sometimes it's easy, but with most dynamic languages it might be a lot of work.

I think most people greatly underestimate the complexity of optimizing for VLIW CPUs like the Itanium. It's not just a matter of using a parallelizing compiler and voila! From my personal experience with vectorizing C compilers, the output is generally mediocre without "marking up" the code with "parallelizing blocks."

I did a quick search on the Itanium and optimizing for it, and found my thoughts echoed:

http://www.usenix.org/events/usenix05/tech/general/gray/gray_html/index.html

However, the most significant challenge of the architecture to systems implementors is the more mundane one of optimising the code. The EPIC approach has proven a formidable challenge to compiler writers, and almost five years after the architecture was first introduced, the quality of code produced by the available compilers is often very poor for systems code. Given this time scale, the situation is not likely to improve significantly for quite a number of years.

Itanium is too far ahead of mundane software development technology. It will die (as with other VLIW CPUs) a horrible, expensive death.
 
So if, as you say, interpreted languages are only good for web applications, why are you trying to argue with me on that point, since we both agree?

No, my intention was to say interpreted languages are the future - I didn't believe this until maybe a year or two ago. This is why I mentioned game engines (having worked on Xbox + PS2 consoles myself). Interpreted code has become acceptable even in high-performance real-time areas.

As well, your information regarding the machine arrays the Itanium was designed for is incorrect. They were meant for 16+ arrays; you have mistaken them for the Xeon array targets.

Sorry, I have no idea what you are talking about - machine arrays? SMP multiprocessor setups? Itaniums were designed to be workstations too - I worked on one, and it weighed more than 4 PCs combined.

No, I can't agree that interpreted languages will ever take the place of languages like C++ or C#. With all due respect, you are the only person I have ever heard who seems to believe this is the future of code development.

But hey, you believe we have extra system cycles to spare on on-the-fly compiling for Java and the like; I can't wait till you get to branching and real I/O to see the folly of the idea.

Yes, I agree they were made for workstations as well, but their sweet spot is 16+, since they scale perfectly with every additional machine.
 
You seem very confused. C++ and FORTRAN are computer languages; the Itanium is a processor.

Yes, but I was trying to draw an analogy between general-purpose languages and CPUs vs. specialized languages and CPUs (most VLIW CPUs like the Itanium tend toward specialized apps - and are too hard to optimize for in the general case). Most of the world is going back to general-purpose CPU architectures with some extra instructions for specialized apps. GPUs are a big one I see going in that direction, and physics CPUs will too.

Additionally, ILP is something the compiler takes care of, not the particular language being used to write the software.

I have a Brooklyn Bridge I'd like to sell you too. Obviously you are not a developer. To take advantage of the Itanium, all these compilers and interpreters have to be tweaked (or rewritten) to be EPIC-friendly (non-trivial). Sometimes it's easy, but with most dynamic languages it might be a lot of work.

I think most people greatly underestimate the complexity of optimizing for VLIW CPUs like the Itanium. It's not just a matter of using a parallelizing compiler and voila! From my personal experience with vectorizing C compilers, the output is generally mediocre without "marking up" the code with "parallelizing blocks."

I did a quick search on the Itanium and optimizing for it, and found my thoughts echoed:

http://www.usenix.org/events/usenix05/tech/general/gray/gray_html/index.html

However, the most significant challenge of the architecture to systems implementors is the more mundane one of optimising the code. The EPIC approach has proven a formidable challenge to compiler writers, and almost five years after the architecture was first introduced, the quality of code produced by the available compilers is often very poor for systems code. Given this time scale, the situation is not likely to improve significantly for quite a number of years.

Itanium is too far ahead of mundane software development technology. It will die (as with other VLIW CPUs) a horrible, expensive death.

I fail to see why you continue to claim it's difficult to optimize code for the Itanium when the compiler takes care of it, not the programmer. As well, you aren't spinning code for IA-64 environments, which further devalues what you are trying to say, which is that Itanium sucks for everything.

You're right, I am not a developer - but a developer in the making I am. Furthermore, this brings me to the point that in the end I wouldn't be developing software on Itaniums to begin with, and frankly I don't see you doing it either.

As per the link, I fail to see the issue at all, since the processor has always been promoted for HPC spin-your-own-code environments - not gaming, not multimedia, not anything pertaining to the existing home PC market.

You are entitled to your opinion on the death of IA-64, but time will tell, not your wishful thinking.
 
I think his argument would be better stated as "it is challenging for compiler writers to develop compilers that optimize the machine-level code"... he is one layer of the onion too far out :)

Assembly writers are a different breed of programmer, though: they don't whine and complain, they complete and execute.
 
I think his argument would be better stated as "it is challenging for compiler writers to develop compilers that optimize the machine-level code"... he is one layer of the onion too far out :)

Exactly.

It is foolish to think it is trivial to write a C/C++ compiler that can generate optimized VLIW machine code. It's non-trivial, and most commercial products aren't that good at it. So much for the "just use an optimizing compiler" argument...

The CELL (PS3) and Transmeta chips are both VLIW CPUs. I might be completely wrong about VLIW CPUs if somebody can design a practical optimizing C/C++ compiler for them (they probably have, because each CPU splits one instruction into at most two and can toss one away if it can't be parallelized). I hear the CELL's subprocessors don't even have cache memory and are WAY more powerful for the same die size.

Itanium is dead if the PS3 + CELL succeed, because the CELL's design will be used everywhere.