[citation][nom]blazorthon[/nom]Some of AMD's Opterons have far greater performance for the money than any Xeons do, so I'd have to disagree (at least partially) with you. The same is probably not true for all of AMD's Opterons, but it is true for at least some of them. For example, here: http://www.newegg.com/Product/Prod [...] 6819113038 $600 for a CPU that in fully threaded work can compete with Intel's six-core and eight-core Xeon models quite well, if not beat them, in work that scales across as many cores as you throw at it, and it does so while using not too much power at all. Other good examples: http://www.newegg.com/Product/Prod [...] 19-113-036 http://www.newegg.com/Product/Prod [...] 6819113030 Pretty much all of these models: http://www.newegg.com/Product/Prod [...] rchInDesc= They all have better price/performance than the multi-socket Xeons. Lower power efficiency, definitely so in most cases, but not in all.[/citation]
Some of them, yes. Good point.
[citation][nom]IndignantSkeptic[/nom]Please forgive my ignorance, but why do we still need x86 processors when a compiler is now supposed to automatically generate all the necessary machine code from the same high-level source code? Also, will we ever be able to upgrade supercomputers simply by replacing small parts at a time with newer, better components, basically having different parts of the supercomputer running at different speeds? Also, why the hell are CUDA and OpenCL advocated repeatedly in this article when OpenMP was supposed to replace them, as far as I understand, and is actually mentioned in one of the pictures?![/citation]
Because if you've got x86 code and you want to use OpenCL or CUDA, you have to completely re-write it. A compiler can retarget the same source to a different instruction set, but CUDA and OpenCL are different programming models, so the code itself has to be restructured, not just recompiled. Besides, a lot of HPC code is highly optimised, and that often means low-level work (sometimes even Assembly), which doesn't translate automatically.
http://en.wikipedia.org/wiki/OpenMP OpenMP is for multi-threading, generally on the CPU. Xeon Phi presents itself as a large set of x86 cores, so existing OpenMP code can run on it, which is why it's great for this. But OpenMP doesn't target GPUs.
[citation][nom]memadmax[/nom]Ok, the hardware looks decent.But, there's one problem: Intel is relying on programmers being able to optimize to a high degree for this setup...And that is the problem, programmers these days are lazy...It used to be you would spend a whole day optimizing loops using counters and timers to squeeze every last drop out of that loop...Now, programmers are like kids with legos, they just merge crap together that was written by someone else and as long as it runs decent, they call it good...[/citation]
Um, I get the distinct impression it needs very little optimisation. I think programmers will find it far easier to optimise for this than for GPGPU, which requires a complete re-write.
[citation][nom]PreferLinux[/nom]On a completely separate note, I'm wondering what the price of these is. nVidia's latest K20 costs about $3200 and is rated at 1.17 TFLOPS peak and 225 W. This is rated at 1.01 TFLOPS peak, with the same power rating. It wouldn't be hard for Intel to beat nVidia on price...[/citation]
As a follow-up to this, (one of) the SemiAccurate articles gave the price: mid-$2000s. That's much better than nVidia's K20 for similar performance, and most likely a much better architecture to program for (or make use of).