Trouble is it wouldn't help much in the first instance. The vast
majority of apps are not written to utilise multiple cores. Even
games are barely out of the starting blocks, with only a few
usefully benefiting from more than 2 cores, which is why one often
gets better results from a highly oc'd E8400 than from various
quad-cores. In some cases the very way the gfx system works holds
back potential speed improvements (I mean the sw API side of
things).
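To make the point concrete, here's a minimal sketch (my own toy
code, not taken from any real app): the serial loop below only ever
uses one core no matter how many the chip has, and it takes an
explicitly threaded rewrite (pthreads here, purely for illustration)
before the extra cores can contribute anything at all.

/* Toy example: core count alone doesn't help; the work has to be
 * explicitly split across threads before extra cores matter. */
#include <pthread.h>
#include <stdio.h>

#define N 1000000
#define THREADS 4               /* assumed core count for the sketch */

static double data[N];

/* Serial sum: runs on a single core regardless of the CPU. */
static double sum_serial(void)
{
    double s = 0.0;
    for (long i = 0; i < N; i++)
        s += data[i];
    return s;
}

/* Threaded version: each thread sums its own slice of the array. */
struct slice { long lo, hi; double partial; };

static void *sum_slice(void *arg)
{
    struct slice *sl = arg;
    sl->partial = 0.0;
    for (long i = sl->lo; i < sl->hi; i++)
        sl->partial += data[i];
    return NULL;
}

int main(void)
{
    for (long i = 0; i < N; i++)
        data[i] = 1.0;

    printf("serial:   %.0f\n", sum_serial());

    pthread_t tid[THREADS];
    struct slice sl[THREADS];
    for (int t = 0; t < THREADS; t++) {
        sl[t].lo = (long)t * (N / THREADS);
        sl[t].hi = sl[t].lo + N / THREADS;
        pthread_create(&tid[t], NULL, sum_slice, &sl[t]);
    }
    double total = 0.0;
    for (int t = 0; t < THREADS; t++) {
        pthread_join(tid[t], NULL);
        total += sl[t].partial;
    }
    printf("threaded: %.0f\n", total);
    return 0;
}

Most shipping apps never get that second version written, which is
the whole problem.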
New Scientist reported some time ago that the industry has a more
general problem atm: a serious lack of talent in parallel
programming. A market that grew up marketing its products on a
single simple number (MHz) never needed parallel coding skills, and
the legacy is an industry base of programmers with little
experience in such things.
So yes, right now I'm sure Intel and even AMD could churn out,
say, a 32-core chip if they wanted to, but what would be the point?
The speedups for most applications would be pretty woeful (think
of how many audio apps only use 1 or 2 cores at present, and even
some of the video encoding apps are not written to properly exploit
multiple cores atm).
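To put rough numbers on that, Amdahl's law gives
speedup = 1 / ((1 - p) + p/n) for a parallel fraction p on n cores;
the fractions below are made-up figures purely for illustration.

/* Rough Amdahl's-law arithmetic: why 32 cores gains little unless
 * nearly all of an app's work can run in parallel. */
#include <stdio.h>

static double amdahl(double p, int n)
{
    return 1.0 / ((1.0 - p) + p / n);
}

int main(void)
{
    int cores = 32;
    double fractions[] = { 0.25, 0.50, 0.90 };  /* hypothetical apps */

    for (int i = 0; i < 3; i++)
        printf("p = %.2f -> %.2fx on %d cores\n",
               fractions[i], amdahl(fractions[i], cores), cores);
    /* Prints roughly 1.3x, 1.9x and 7.8x respectively - nowhere
     * near 32x unless almost everything can be parallelised. */
    return 0;
}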
But even if the applications side were magically fixed, feeding
a large number of cores with data is another problem. Conceptually
speaking, there's not that much difference between a multi-core
CPU of this kind and the large single-image systems SGI makes;
SGI learned long ago that unless the bandwidth scales as the
number of CPUs grows, performance will not increase in a useful
manner (see my
Alias benchmark results for good examples of this,
e.g. compare the 18-CPU bus-based Onyx to the 2-CPU NUMA-based
Origin300; the Onyx is starved for bandwidth because of its
shared-bus design).
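As a sketch of what being starved for bandwidth looks like in code
(again my own toy example, using an OpenMP pragma just to do the
work-splitting), a loop like this does almost no arithmetic per
byte moved, so on a shared-bus machine adding CPUs stops helping
the moment the bus saturates, whereas a NUMA design adds bandwidth
with every node it adds.

/* STREAM-style triad: one multiply-add per 24 bytes of memory
 * traffic, so it is limited by bandwidth, not by core count. */
#include <stdlib.h>
#include <stdio.h>

#define N (1 << 24)             /* far larger than any cache */

int main(void)
{
    double *a = malloc(N * sizeof *a);
    double *b = malloc(N * sizeof *b);
    double *c = malloc(N * sizeof *c);
    if (!a || !b || !c) return 1;

    for (long i = 0; i < N; i++) { b[i] = 1.0; c[i] = 2.0; }

    /* Split across cores; once memory can't feed them any faster,
     * extra cores sit idle waiting on the bus. */
    #pragma omp parallel for
    for (long i = 0; i < N; i++)
        a[i] = b[i] + 3.0 * c[i];

    printf("a[0] = %.1f\n", a[0]);
    free(a); free(b); free(c);
    return 0;
}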
Thus, right now the real bottlenecks to the useful exploitation of
multi-core CPUs are how to feed data to so many cores, and the
lack of relevant expertise in the sw industry.
What is certainly obvious, though, is that multiple cores aside,
Intel or AMD could easily release a much higher-clocked CPU if they
so wished, given how well Core2, i7 and Phenom2 all oc even on air.
But that's not how business works - they are, after all, supposed
to be profit-making enterprises with shareholders to satisfy.
They wouldn't release a chip that's suddenly 100% faster all in
one go when much more money can be made by releasing a larger
number of CPUs in stages, each 20% faster than the last, every few
months (four such 20% steps compound to roughly the same doubling
anyway, but with several product cycles of revenue along the way).
On the other hand, if someone did try something like that, these
days they'd probably just be hammered by competition laws. The
Erebus2K chip had the potential to be a major performance step-up,
but alas the company foundered and the technology was bought by
SUN, so grud knows where all that expertise went to (SUN's
T2 CPU
perhaps, who knows).
And btw, don't moan if you choose not to take the parallel
programming course at University. ;D I remember very few did so
when I was at Uni (1990), because they couldn't see its relevance
outside research applications. I took it because it interested
me (I could see its usefulness in modelling neural networks; AI
was another option I went for), but it was certainly a hard
topic to learn, especially when paired with related sw areas
such as functional programming languages. If you think writing in
C is easy, try writing in Prolog. :}
We'll have chips with lots of cores eventually, but they'll be
much more useful outside the consumer desktop arena, and we'll see
them there long before they appear in home PCs. I'm sure Intel
figures HPC is a better target market to begin with for such
things. SUN is already using an 8-core CPU, but it's not exactly a
consumer product...
Ian.