Cazalan:
So in other words you don't like new ways of writing code.
No, I'm against ways of coding that artificially slow down processing, because that's exactly what the Crysis devs did.
The rendering code in question that was offloaded to the CPU will execute faster on a GPU. GPUs are gaining performance faster than CPUs. So how does offloading that code to the CPU make ANY sense whatsoever?
Fight the future, make any excuse possible to avoid making the CPU work harder, essentially making dual-core CPUs obsolete; instead let's rely on going SLI/Crossfire so the GPU can handle the extra workload ... oh, that's right, you don't like those either ...
Because of the latency problems, which sites are now starting to reveal. How many threads over the years have been along the lines of "I'm getting 80 FPS on my SLI/CF setup, so why does the game feel choppy?" As long as SLI/CF is implemented by copying one card's VRAM to another, introducing unacceptable latency into the processing, I don't view it as a "good" implementation.
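To put a number on the "high FPS but feels choppy" complaint, here's a minimal sketch with made-up frame times (purely illustrative): the average FPS looks great, but the occasional long frame, the kind AFR-style frame copying tends to produce, is what you actually feel.

```cpp
#include <algorithm>
#include <cstdio>
#include <vector>

int main() {
    // Hypothetical frame times in milliseconds: most frames are quick,
    // but every fourth frame spikes (classic micro-stutter pattern).
    std::vector<double> frame_ms = {7, 7, 7, 29, 7, 7, 7, 29, 7, 7, 7, 29};

    double total_ms = 0;
    for (double t : frame_ms) total_ms += t;
    double avg_fps = 1000.0 * frame_ms.size() / total_ms;   // 12 frames / 150 ms = 80 FPS

    double worst_ms = *std::max_element(frame_ms.begin(), frame_ms.end());

    std::printf("average: %.0f FPS -- looks fine on a benchmark graph\n", avg_fps);
    std::printf("worst frame: %.0f ms (~%.0f FPS) -- that's the choppiness you feel\n",
                worst_ms, 1000.0 / worst_ms);
    return 0;
}
```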
Secondly, I'm all for using the power CPUs have, provided it DOESN'T SLOW DOWN THE APPLICATION, OPERATING SYSTEM, OR OTHER APPLICATIONS THAT MAY OR MAY NOT BE RUNNING.
Let's look at your theory in another way. Say you get game X in the future with a GeForce 890. At medium settings you're at 100% GPU usage, pushing 120 FPS, with the CPU utilizing 2 cores at 75%. Let's crank it to ultra and make the GPU do the calculating. GPU stays at 100%, FPS drops to 30, CPU still just idling 6 cores (4 being HT) with 2 working ...
Except the GPU wouldn't be that loaded to begin with, especially since my 570 isn't hitting 100% load at medium (V-sync enabled). So those performance numbers you pulled out of your rear aren't even remotely valid.
Secondly, follow this REALLY simple logic:
1: Year-over-year performance increases in GPUs are significantly greater than year-over-year increases in CPUs.
2: GPUs excel at executing parallel code.
3: Rendering code is very parallel (see the sketch after this list).
Therefore, which component should handle the processing for rendering?
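To illustrate point 3 with a toy example (nothing to do with any actual engine's code): every pixel in a frame can be shaded without looking at any other pixel, so the work maps straight onto the thousands of threads a GPU has sitting around.

```cpp
#include <cstdint>
#include <vector>

int main() {
    const int width = 1920, height = 1080;
    std::vector<std::uint32_t> framebuffer(width * height);

    // No iteration reads another pixel's result, so each of these ~2 million
    // iterations could run on its own GPU thread with zero coordination.
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            std::uint8_t r = static_cast<std::uint8_t>(x * 255 / width);
            std::uint8_t g = static_cast<std::uint8_t>(y * 255 / height);
            framebuffer[y * width + x] = (r << 16) | (g << 8) | 0xFF;   // simple gradient "shader"
        }
    }
    return 0;
}
```

A CPU core has to chew through those iterations a handful at a time; a GPU dispatches them all at once. That's the whole reason the rendering stack lives on the GPU.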
So look at it this way instead: two years from now, CPUs have increased in power by about 20-25% (not unreasonable given current trends). Meanwhile, GPU performance has doubled (not unreasonable either; look at the performance increases over the past two GPU generations). So now when you look at that render code that was offloaded to the CPU, guess what? You have a CPU bottleneck for every non-8-core processor (and even those are tasked to capacity), while the GPU sits at ~50% load, not rendering anything because it's waiting on the CPU to finish its share of the rendering.
Congrats. You just slowed down your application for no reason whatsoever for 90% of all CPUs on the market, with no performance increase for the other 10%.
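Rough numbers behind that claim (all frame costs hypothetical, assuming the CPU's share and the GPU's share of the frame run in parallel and the slower side sets the frame rate):

```cpp
#include <algorithm>
#include <cstdio>

int main() {
    // Pretend the work offloaded to the CPU costs ~10 ms per frame today,
    // and the GPU's own rendering also costs ~10 ms -- nicely balanced.
    auto report = [](const char* when, double cpu_ms, double gpu_ms) {
        double frame_ms = std::max(cpu_ms, gpu_ms);           // slower side sets the pace
        std::printf("%-14s %.0f FPS, GPU busy %.0f%%\n",
                    when, 1000.0 / frame_ms, 100.0 * gpu_ms / frame_ms);
    };

    report("today:", 10.0, 10.0);
    // Two years later: CPU ~25% faster, GPU ~2x faster.
    report("in two years:", 10.0 / 1.25, 10.0 / 2.0);
    return 0;
}
```

Both parts got faster, yet the CPU is now the wall and the GPU idles for over a third of every frame; skew the starting balance a bit further toward the CPU side and you land at the ~50% figure above.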
So, if I understand you right, coding in a way that REDUCES FPS is a GOOD thing.
We can't alienate our dual-core CPUs. <--- This way of thinking needs to die in order for programming to move forward. Yes, I know that means your product will not sell to those with i3 CPUs; after all, it's all about making money and not looking to the future.
Great. How about this, then: we remove all GPUs and move back to executing all code on the CPU, in order to force CPUs to move to 64-core processors? There's a reason the entire rendering stack was moved to GPUs in the first place: at rendering, GPUs are faster than CPUs.
Now, if some dev can make a game engine that is more parallel in a way that makes programming sense, I'm all for it. But moving code back to the CPU from the GPU, for tasks that execute faster on the GPU in the first place, and causing a CPU bottleneck instead of a GPU bottleneck, is the WRONG approach.