Well, there are the real problems.
Intel's touting its "just wait, it'll get even better" line when everything points towards poor scaling.
If they're at 3GHz and already bumping up against the TDP requirements, there it is again, the CPU brick wall, and again it's why I'm saying we need better CPUs.
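As a rough sketch of that wall: dynamic power scales roughly as P = C*V^2*f, and higher clocks usually need higher voltage too, so power climbs faster than the clock does. The numbers below are made-up illustrations, not Larrabee or Intel figures:

# Why clock scaling slams into a TDP wall. Dynamic power goes as
# P = C * V^2 * f, and pushing f usually means raising V as well.
# All figures here are assumptions for illustration only.
def scaled_power(base_w, f_ratio, v_ratio):
    # Scale a baseline power figure by frequency and voltage ratios.
    return base_w * f_ratio * v_ratio ** 2

base = 150.0  # hypothetical baseline TDP in watts at 2GHz
# 2GHz -> 3GHz (1.5x frequency) with, say, a 15% voltage bump:
print(scaled_power(base, 1.5, 1.15))  # ~297W, roughly double

So a 50% clock bump can cost you double the power, which is exactly the kind of math that keeps CPU clocks pinned.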
So core count becomes one limitation, core speed becomes the other, and still there's the emulation problem, and no one knows how well that's going to play out in games.
Understand, for GPGPU uses I think LRB will be fine, and I'm extremely interested in this aspect of it.
But for gaming, LRB has to go the extra mile, as gaming per se isn't really a direct x86 type of usage in this instance. So it'll have to emulate, in software, what's already been established and has legs/experience.
So, again, if it fails at gaming, it fails all around.
GPUs are reaching a brick wall as well, though this may change too. Bandwidth has always been the problem; GPUs don't need solutions to make multicore work, they've been massively parallel since day 1, so it comes down to whether they can get communication and throughput moving quickly enough.
nVidia is stuck doing CPU/GPU communication over the PCIe connection, limiting it to whatever PCIe bandwidth currently exists.
Intel and AMD have no such limitation, as they have other options for CPU/GPU communication, which we won't truly see until, for instance, AMD's Bobcat shows up.
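To show why that link matters, here's a back-of-envelope sketch. The figures are period-typical assumptions (PCIe 2.0 x16 at roughly 8GB/s each way, a high-end card's GDDR at roughly 120GB/s), not measurements from any specific card:

# Time to move a chunk of data over PCIe vs reading it from
# on-board memory. Both bandwidth figures are assumptions.
PCIE_GBS = 8.0    # assumed PCIe 2.0 x16, one direction
GDDR_GBS = 120.0  # assumed on-board memory bandwidth
data_gb = 0.5     # hypothetical half-gigabyte of scene data

print(f"over PCIe: {data_gb / PCIE_GBS * 1000:.1f} ms")  # ~62.5 ms
print(f"from GDDR: {data_gb / GDDR_GBS * 1000:.1f} ms")  # ~4.2 ms

That order-of-magnitude gap is the whole reason on-die or socket-level CPU/GPU links are interesting.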
Another GPU limitation is GDDR speed, but this applies across the board to an extent, and David Kanter has had a few good things to say about it, though there are other approaches/HW possibly available.
The newer GDDR5 that's coming, which is denser than anything we've seen and runs at an effective 6GHz (quad pumped), may move a few things in gaming in a different direction, meaning much higher-resolution textures than we've become accustomed to.
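To put a number on that, here's the standard bandwidth arithmetic; the 256-bit bus below is an assumed example, not any particular card:

# Peak bandwidth = effective data rate per pin * bus width / 8 bits.
gbps_per_pin = 6.0  # the 6GHz-effective GDDR5 figure from above
bus_bits = 256      # assumed bus width for illustration
print(gbps_per_pin * bus_bits / 8, "GB/s peak")  # 192.0 GB/s peak

That kind of headroom is what makes much bigger textures plausible.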
What does this mean for GPUs and LRB?
LRB is limited by its internal programming, and seeing as Intel is trying to go as green as possible, this scenario just doesn't play well for doing gfx, which is exactly why we see such high power usage in GPUs.
I find it ironic Intel props up LRB this way, knowing that the more intensive a game becomes, the more likely LRB will be completely strapped, with no power headroom left for it.
This may or may not point towards LRB being made for mediocrity in gaming, or towards Intel not being fully honest.
If gaming isn't a win for LRB, then it'll struggle for existence, as the GPGPU market simply won't be enough for its expected ROI, especially if we see these walls closing in on it soon.
PS: If the upcoming changeover to HKMG for GPUs pans out, and they find ways to hit higher clocks, I'd just like to point out: imagine a 5970 running at even 1.6GHz on its core.