Sony has made a quad-core ARM Cortex-A9 chip that runs at 2.3 GHz. At that frequency it will draw a tenth of the power of a quad-core PD chip. But would you use that?
If it were sufficient for my usage then perhaps, but I am not sure what the point is here.
I wouldn't recommend an 8-core PD desktop CPU to anyone. But give me all the improvement you described above, and my money goes to AMD.
While the FX-8350 and FX-8320 are marked improvements over their predecessors, I am not in favor of them myself. They are flexible as proficient workbench chips and community-grid parts, but they are excessive and underutilized for a day-to-day user or gamer. The FX-6300 and FX-4300 are more than ample gaming parts which deliver fantastic performance depending on the hardware paired with them. The FX-8350 is still a good processor, though.
As above, while I find a lot of practicality in an FX eight-core part as a file server or professional workbench, a lot comes down to power usage. The FX-8350 uses the same power as a hex-core Intel Extreme. While its performance is not in question, unlike the FX-8000 parts, the Intel Extreme line to me has very little practicality, though it is effective as a workbench chip.
AMD's outline for SR focuses on cache and memory; one touted figure was branch misses reduced by 30-35%, which is a very big number. There were also leaks of 25% general computing improvements across the board, from top to bottom. With that said, if it were correct then AMD would be right up with Intel's mainstream line in terms of IPC. I am sceptical of the number, but even if it were around 20%, that is a major leap in IPC terms for AMD in what will be a new arch.
Intel's fab advantage also allows them to fab chips with much less wastage. So much so that the bastards have to artificially maim their i5s and i3s to create market segments. GloFo is always struggling to better its technique. Llano was pretty much a failure because there just weren't enough chips for people to buy.
Llano being late was a major reason; it was late like Zambezi, but all of that remains among the last vestiges of Hector Ruiz and his gang of misfits.
Zambezi to Vishera, Llano to Trinity, Bobcat to Kabini. Using the same process, AMD achieved a) lower power usage and b) more performance.
At least for games, I would say that a good game engine is one which offloads as much parallel work to the GPU as possible. I would hate to be CPU-bottlenecked in games. In one year, GPU performance is going to increase by about 50%, but I would be lucky to get a 10% CPU increase. So whatever game is bottlenecked by a CPU today stays bottlenecked.
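To illustrate the divergence: a toy sketch, where the 50% and 10% yearly growth figures are just the rough assumptions from the paragraph above, not measured data:

```cpp
#include <cstdio>

// Toy illustration: if GPU performance grows ~50% per year while CPU
// performance grows ~10% per year (both assumed figures from the post,
// not benchmarks), the gap between them only widens over time.
int main() {
    const double gpu_growth = 1.50; // assumed yearly GPU perf multiplier
    const double cpu_growth = 1.10; // assumed yearly CPU perf multiplier
    double gpu = 1.0, cpu = 1.0;    // normalised starting performance

    for (int year = 0; year <= 4; ++year) {
        std::printf("year %d: GPU %.2fx, CPU %.2fx, GPU/CPU ratio %.2f\n",
                    year, gpu, cpu, gpu / cpu);
        gpu *= gpu_growth;
        cpu *= cpu_growth;
    }
    return 0;
}
```

After four years the ratio is roughly 3.8x, which is why a CPU-bound game stays CPU-bound.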
There is an interesting test: you can use the Windows Advanced Rasterization Platform (WARP), which allows all DirectX calls to be executed on the CPU. Can you do a simple check and see which processor does well?
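For anyone who wants to try it, here is a minimal C++ sketch that creates a Direct3D 11 device on the WARP driver, so every Direct3D call runs on the CPU. It only shows the device-creation step; timing an actual rendering workload on top of it is left to you.

```cpp
#include <d3d11.h>
#include <cstdio>
#pragma comment(lib, "d3d11.lib")

// Create a Direct3D 11 device on WARP, the CPU software rasterizer,
// instead of on a GPU adapter.
int main() {
    ID3D11Device* device = nullptr;
    ID3D11DeviceContext* context = nullptr;
    D3D_FEATURE_LEVEL level{};

    HRESULT hr = D3D11CreateDevice(
        nullptr,              // default adapter
        D3D_DRIVER_TYPE_WARP, // software rasterizer instead of a GPU
        nullptr, 0,           // no software DLL, no creation flags
        nullptr, 0,           // default feature-level list
        D3D11_SDK_VERSION,
        &device, &level, &context);

    if (SUCCEEDED(hr)) {
        std::printf("WARP device created, feature level 0x%x\n",
                    (unsigned)level);
        context->Release();
        device->Release();
    }
    return SUCCEEDED(hr) ? 0 : 1;
}
```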
Skyrim, Civ 5, SC2: three games that have been a struggle for AMD, yet in all of them the PD-based arches closed the gap to Intel. All of those games seem to suggest that even Intel chips are bottlenecked, so if AMD came up with a god chip then yes, it would be bottlenecked as well.
That Rasterization Platform is interesting.
"general usage" is different for different people. And playing Crysis3 is not "general usage". General usage is browsing, office apps, music, movies, simple games. I am surprised you feel that much difference here. I myself cant find any "subjective" difference.
Except when using single-threaded browsers like Firefox, where raw CPU power is noticeable. But then again, you could say FF is a "legacy" application.
General usage to me is defined by the end user. A gamer's general usage is gaming; a benchmarker's is synthetic workloads; an office user's is browsing, office-based applications, accounting software and the like; for a professional it would be image and content creation, rendering, 3D modelling, CAD, Photoshop, etc.
The GPU is Intel's Achilles heel. If they can't deliver excellent performance within a reasonable TDP, they are out of the smartphone/tablet market (which they mostly are), and it has been their focus for some time now. Hence most of the chip's TDP is spent on the iGPU. I think the Haswell GT monster is still going to be slower than a Trinity, but its power usage will be lower, so OEMs are more liable to use a Haswell.
AMD has a chance there, but they can't do wonders with a 32/28nm node, because adding a GPU means lots of transistors, and without a node advantage you can't do that.
Richland is more efficient than Trinity on the same node, just a more mature process, yet delivers more performance from an iGPU with the same specs as Trinity's Devastator.
We have seen that TSTC is now adopted by AMD and RCM is yet to be unveiled; I think AMD are wily enough to have figured out backdoors to control thermals while maximising performance. AMD are not circumventing the laws of physics here, they are just manipulating them in some way that nobody knows how. One factor is that AMD's iGPU is far more sophisticated than Intel's, who need to drop the die size, freeing up node space to throw in more power-consuming and, more importantly, space-consuming graphics parts. Yet from what I see, the top-end Haswell GT part on desktop will be roughly around the A8-3850 and 3870K level, which was about 33% slower on average than the HD 7660D, which in turn will be about 20% slower than the HD 8670D; then there is Kaveri speculation of around a further 55% (nobody knows what GDDR5 will do, so it could be more). Broadwell had better pack an HD 7970, or this baby, as they say, is back, back, back, GONE!
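To make that chain of percentages concrete, a quick compounding sketch; every figure in it is the speculative number from the paragraph above (with the top Haswell GT part as an assumed 1.0x baseline), not benchmark data:

```cpp
#include <cstdio>

// Compound the speculative relative-performance chain from the post.
// "X is 33% slower than Y" means X = 0.67 * Y, i.e. Y = X / 0.67.
int main() {
    double haswell_gt = 1.0;                       // assumed baseline: ~A8-3850/3870K level
    double hd7660d = haswell_gt / (1.0 - 0.33);    // A8 ~33% slower than HD 7660D
    double hd8670d = hd7660d / (1.0 - 0.20);       // HD 7660D ~20% slower than HD 8670D
    double kaveri  = hd8670d * 1.55;               // Kaveri speculated at a further ~55%

    std::printf("HD 7660D ~ %.2fx Haswell GT\n", hd7660d); // ~1.49x
    std::printf("HD 8670D ~ %.2fx Haswell GT\n", hd8670d); // ~1.87x
    std::printf("Kaveri   ~ %.2fx Haswell GT\n", kaveri);  // ~2.89x
    return 0;
}
```

So taken at face value, the rumour chain puts Kaveri's iGPU at nearly three times the top Haswell GT part.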
Here is the converse: AMD will keep improving its x86 performance until it starts to pull right up to Intel's exhaust pipe. Every release is seeing big x86 gains, all the while GPU technology is ramping down the node sizes and delivering more graphics potency at lower power thresholds. If AMD continues its progression with graphics along with the incremental IPC gains, that should bear fantastic fruit. For Intel to achieve an AMD-level iGPU, they need the more expensive route of microarchitectural engineering. First, Intel have a somewhat strange idea of what a GPU really is; what is seen on Haswell looks like reinventing the wheel, which increases its power draw and delivers what is largely unsatisfactory performance for its cost. The other element is that dropping die sizes is expensive, and theoretically Intel will reach the physical limit of how much die space is needed to have a working processor; if their iGPU is still powder-puff at that point, what then? You are at your limits, and then you may have the brain fart that maybe perfecting the process was the better idea. For AMD the position is different: first, AMD have shown they can squeeze performance out; second, and most pertinent, AMD's integrated graphics follows its Radeon core technology, that is, smaller dies, less power, more performance.
If you take Haswell's GT compared to Trinity's, I bet Intel's iGPU component is much bigger than Trinity's and delivers paltry performance in comparison, certainly not Skyrim fully maxed out. Going on what it costs, it is not a very pretty picture. There are rumours of Intel buying Nvidia in the next 3 years, but TBH Nvidia would want nothing to do with Intel knowing how they got stiffed the last time.