The problem is that the facts get in the way of your interpretation. Haswell only shows a 5%, in some instances 6%, improvement over IB.
That was expected given the focus on the GPU, which is showing a solid 25-40% improvement.
You are so fixated on single-thread performance you will never learn.
The majority of software tasks do not scale well beyond a handful of processors. I simply cannot stress this point enough. And the stuff that does scale is going to be offloaded to the GPU via some API (OpenCL, etc.), leaving the CPU with the "single threaded" software for the most part.
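To put rough numbers on why most tasks don't scale past a handful of cores, here's a quick Amdahl's-law sketch. The 75% parallel fraction is just an assumed figure for illustration, not a measurement of any real workload:

```python
# Amdahl's law: overall speedup is capped by the serial fraction of a task.
def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    """Ideal speedup when the parallel portion scales perfectly across cores."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

# A task that is 75% parallel can never exceed 4x, no matter the core count:
for cores in (2, 4, 8, 64):
    print(cores, round(amdahl_speedup(0.75, cores), 2))
# -> 2 1.6 / 4 2.29 / 8 2.91 / 64 3.82
```

Past 4 cores the curve is already flattening hard, which is why throwing more cores at typical desktop software buys so little.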
In most top-notch new apps, whether they be games or productivity apps, multi-threading is becoming more prevalent.
For the most part though (Crysis 3 aside, and I've made my points on that approach very clear), notice how AMD doesn't gain a performance lead. Why? Because the CPU isn't driving performance, and it hasn't been for some time now. I'd imagine a C2Q/PII could still pump out some surprising numbers if anyone was willing to test them...
There are very few games or productivity apps I use that do not have multi-core support.
Just about every Windows application ever created has multi-core support. The scheduler, not the program, assigns threads to cores.
Only the mentally challenged use WinRAR over its superior competitor, WinZip.
Or those with significant license or security restrictions, which means 99% of corporations out there.
The most popular games are going for multi-core support because that is the future.
We'll see, though I doubt it. More likely, we'll gradually see more OpenCL/CUDA(PhysX)/DirectCompute offloading to the GPU, but I really don't see the CPU doing much more than it's doing now. Still expecting to see 2-3 threads doing 99% of the work.
Most games are ported from console to desktop, not the other way around. Now that AMD has multi-core processors in all the new console designs, the PC ports will support multi-core.
So did the last generation, remember? Both the 360 and PS3 supported six simultaneous threads. In 2006.
...oh wait, we're still using 2 on the PC. That's right... Coding to the metal allows significant performance optimizations you simply can NOT make on a general-purpose PC.
The greater use of GDDR5 memory in graphics cards will be utilized not only for graphics but for many applications, including Premiere and Photoshop. This will negate the advantage that Intel has with its larger CPU caches.
Not always. Remember that cache is only a hack to work around (relatively) poor memory throughput. If the data you need is not resident in RAM, then you still need to go to the HDD, so that GDDR5 isn't going to help performance any. Native 64-bit apps would offer better performance improvements than GDDR5 would. Never mind the latency concerns you would encounter with the higher latency of GDDR5 compared to DDR3... (I can't stress this enough: Windows was NOT designed to handle high-latency memory access, and I can't help but suspect it will not play nice...)
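To illustrate the latency point, here's a rough average-memory-access-time (AMAT) calculation. All the latency and hit-rate figures below are assumed round numbers for the sake of argument, not measured values for any particular part:

```python
# AMAT = hit_rate * cache_latency + miss_rate * main_memory_latency
def amat(hit_rate: float, cache_ns: float, mem_ns: float) -> float:
    """Average memory access time in nanoseconds for a single cache level."""
    return hit_rate * cache_ns + (1.0 - hit_rate) * mem_ns

# Assumed figures: ~4 ns cache hit, ~60 ns DDR3-class miss,
# ~90 ns for a higher-latency GDDR5-style miss.
ddr3_amat = amat(0.95, 4.0, 60.0)    # ~6.8 ns
gddr5_amat = amat(0.95, 4.0, 90.0)   # ~8.3 ns
print(round(ddr3_amat, 2), round(gddr5_amat, 2))
```

Even with a 95% hit rate shielding most accesses, the higher miss penalty still drags average latency up noticeably, which is the concern with putting latency-sensitive CPU traffic on GDDR5.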
Adobe is committed to programming for the new HSA architecture as well as many other developers. This is the type of innovation that AMD excels at and that Intel, the monopolist, shies away from and fears.
"Coding for HSA", which basically means "OpenCL". HSA is invisible to the developer; its all handled on the OS side of the house.