mapesdhs :
InvalidError writes:
> Practically no mainstream software exists that can make reasonable use
> of more than two cores, ...
Can't imagine where you get that idea from. As others have said, the OS
itself needs resources, so 2 definitely isn't enough for smooth running,
and benchmark after benchmark shows that plenty of modern games
gain from at least 4 cores.
And with both major consoles being 8-core systems, you can bet that game developers are optimizing their engines to scale well beyond 4 cores.
You could pretty much end the argument here.
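As a rough illustration of why gains from extra cores depend on how parallel the engine is, Amdahl's law gives the theoretical speedup; the 80% parallel fraction below is purely hypothetical, not measured game data:

```python
# Amdahl's law: theoretical speedup for a workload with a given
# parallel fraction (numbers are illustrative, not benchmarks).
def speedup(parallel_fraction, cores):
    """Speedup on `cores` cores for a workload whose parallelizable
    share of runtime is `parallel_fraction` (0..1)."""
    return 1.0 / ((1 - parallel_fraction) + parallel_fraction / cores)

for cores in (2, 4, 8):
    print(f"{cores} cores: {speedup(0.8, cores):.2f}x")
# 2 cores: 1.67x, 4 cores: 2.50x, 8 cores: 3.33x
```

Even a modestly parallel engine keeps gaining past 2 cores, which is consistent with what the benchmarks show.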
mapesdhs :
> ... no hurry for Intel to bother producing mainstream chips with more than four
They're not bothering because there's no competition.
I basically agree. For the
average user, even 4 cores is arguably more than necessary. But enthusiasts and power users could certainly benefit from more. So, due to lack of competition, Intel is pushing those folks into a higher price bracket. If it faced stronger competition, it might instead decide to offer 6 cores in the mainstream bracket.
But there's a real barrier if you go much beyond 4, in a dual-channel memory config. They want to keep costs down for mainstream users, so it doesn't make sense to add more memory channels in that socket. And once you create a higher-end socket to serve a fundamentally smaller market, prices will be higher due to lower volume.
So, is Intel extorting these higher-end users? Certainly. Prices on their quad-channel CPUs could be lower (and probably would be, if they faced any real competition in this segment). But I don't fault them for not trying to squeeze 8 cores in a dual-channel socket. That just wouldn't make much sense, from a performance standpoint.
Even with quad-channel, look at how clock speeds drop off once they get much above 8 cores. There are two reasons for that: one is power efficiency, and the other is that it's difficult to feed more cores running at full speed. I mean, they certainly
could have made the chips burn more power - IBM has mainframe CPUs that burn something like 2 kW. So, if it were a cost-effective way to add performance, they could have gone above 145 W.
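The memory-channel argument is easy to see with some back-of-envelope numbers. Assuming DDR3-1600 at roughly 12.8 GB/s per channel (an illustrative figure, not tied to any specific CPU), peak bandwidth per core works out as:

```python
# Back-of-envelope: peak DRAM bandwidth available per core.
# 12.8 GB/s per channel is an assumed DDR3-1600 figure.
PER_CHANNEL_GBPS = 12.8

def bw_per_core(channels, cores):
    """Peak memory bandwidth (GB/s) per core, evenly divided."""
    return channels * PER_CHANNEL_GBPS / cores

for channels, cores in [(2, 4), (2, 8), (4, 8), (4, 12)]:
    print(f"{channels}-channel, {cores} cores: "
          f"{bw_per_core(channels, cores):.1f} GB/s per core")
```

An 8-core chip in a dual-channel socket gets half the per-core bandwidth of a quad-core, while the same 8 cores on quad-channel are fed as well as a dual-channel quad - which is the point about not squeezing 8 cores into the mainstream socket.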
mapesdhs :
AMD is behind because they
quit the market; the company stated this quite clearly some time ago. They don't
care about desktop CPU performance anymore. If AMD did care so much, they
could have sidestepped the multi-year delays
Um, I don't think the issue is one of caring. In case you haven't noticed, Intel gained an unassailable lead in manufacturing tech, and AMD has had some architectural and managerial missteps. The combined effect has hammered their revenue, and they've been trying to use what limited resources they have just to stay in business (e.g. by going after consoles and ARM-based servers).
I have great hopes for their K12. I was really disappointed not to see Carrizo at 20 nm and in a desktop variant. I've been waiting for their APUs to become a bit more competitive, before buying one to dabble with HSA.
mapesdhs :
at the very least released a die-shrunk Ph2 with a couple more cores and some IPC tweaks,
I've also wondered whether something like this would've been competitive, but I think they just didn't have the resources to hedge their bets like this. They already had two new architectures in flight and were probably devoting everything they had to making those as good as possible.
Remember, die shrinks sound simple enough, but there's a ton of work that goes into layout and routing of a high performance CPU. It's not as automated as you'd think, if you want the best performance. And the amount of testing required is typically comparable to the engineering work.
Anyway, there seem to be some new MIPS SoCs on the market. Perhaps they'll even run Irix!
Hands-On With The Creator Ci20
; )