AMD Richland Desktop APUs Pictured, Coming This June
http://www.techpowerup.com/183464/AMD-Richland-Desktop-APUs-Pictured-Coming-This-June.html
Since tessellation is already implemented, as well as physics, these new consoles could very well put those resources to use. And if the big studios choose not to do so for profit's sake, the little guys will, along with some larger groups.
You're oversimplifying the issue, in my opinion. I mean, is there a massive difference in this particular workload between CPU and GPU?
Also, on the other hand, in situations where the GPU is the bottleneck, overall throughput is increased this way. If a game is performing at ~30fps at 1080p on a 7970/680, I'd wager that most people are going to be GPU bottlenecked.
Also, anyone with $400+ graphics is likely going to have a CPU strong enough to handle the load. So I think it's a safe statistical bet that the overwhelming majority of gamers are going to benefit from their design choice.
Right now software writers are still pushing single-thread optimisations, leaving the world of multiprocessing and heterogeneous system processing, and the enormous potential it offers, untouched while x86 gets driven into bedrock.
Meanwhile, thread control structures (especially mutexes and semaphores) break down when exposed to a multi-core environment, so I have to carry around a performance hit every time I use them to keep them from bringing down the OS.
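To put a rough shape on that overhead, here's a minimal sketch (plain C++11, nothing platform-specific; the thread and iteration counts are just illustrative) of a hot loop where every update takes a mutex. Spread it across several cores and the lock, not the increment, is what you end up paying for:

    // Minimal sketch: the cost of serialising a hot loop with a mutex.
    // Thread/iteration counts are illustrative only.
    #include <chrono>
    #include <iostream>
    #include <mutex>
    #include <thread>
    #include <vector>

    int main() {
        const int kThreads = 4;
        const long kItersPerThread = 1000000;

        long counter = 0;
        std::mutex counter_mutex;

        auto start = std::chrono::steady_clock::now();

        std::vector<std::thread> workers;
        for (int t = 0; t < kThreads; ++t) {
            workers.emplace_back([&] {
                for (long i = 0; i < kItersPerThread; ++i) {
                    // Every iteration pays for a lock/unlock pair; with multiple
                    // cores the contended cache line ping-pongs between them --
                    // that's the performance hit carried around to keep the
                    // update safe.
                    std::lock_guard<std::mutex> guard(counter_mutex);
                    ++counter;
                }
            });
        }
        for (auto& w : workers) w.join();

        auto elapsed = std::chrono::duration_cast<std::chrono::milliseconds>(
            std::chrono::steady_clock::now() - start);

        std::cout << "counter = " << counter
                  << " in " << elapsed.count() << " ms\n";
        return 0;
    }

The usual escape hatches are shrinking the critical section or switching the counter to std::atomic, but the point stands: the synchronisation itself is the tax.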
Given how the workload involved the rendering of grass, and given how much grass exists, yes.
And anyone not using an 8-core processor will be CPU bottlenecked. The difference is that GPUs get more powerful faster, so if you are going to overload something, overload the component that is actually improving at a faster rate.
Provably false:
With a 680 GTX, which isn't exactly a slouch. Notice the CPU bottleneck as you walk down the CPU ladder?
By contrast:
With a 3960X, you see clear evidence of a GPU bottleneck, but given the results above, that would be expected. So we can conclude a 3960X is NOT a CPU bottleneck, even with CF/SLI/Titan. But with a single 680 GTX, it's clear that you fall off very quickly as you drop down the CPU ladder. Other sites show a similar trend: a single 680 GTX is clearly bottlenecked by the CPU. [Now, I would LOVE to see a slightly weaker CPU as the baseline for the GPU tests, say a 3770K rather than an SB-E. That would prove beyond a doubt what is happening.]
We have engineered intelligence into our 4 series graphics driver such that when a workload saturates the graphics engine with pixel and vertex processing, the CPU can assist with DX10 geometry processing to enhance overall performance. 3DMarkVantage is one of those workloads, as are Call of Juarez, Crysis, Lost Planet: Extreme Conditions, and Company of Heroes. We have used similar techniques with DX9 in previous products and drivers. The benefit to users is optimized performance based on best use of the hardware available in the system. Our driver is currently in the certification process with Futuremark and we fully expect it will pass their certification as did our previous DX9 drivers.
Intel's software-based vertex processing scheme improves in-game frame rates by nearly 50% when Crysis.exe is detected, at least in the first level of the game we used for testing. However, even 15 FPS is a long way from what we'd consider a playable frame rate. The game doesn't exactly look like Crysis Warhead when running at such low detail levels, either.
Our Warhead results do prove that Intel's optimization can improve performance in actual games, though—if only in this game and perhaps the handful of others identified in the driver INF file.
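Purely as an illustration of how that kind of per-application switch tends to be wired up (this is not Intel's driver code, and every name in the list except Crysis.exe is a made-up placeholder; the real list lives in the driver INF file), a whitelist check is about this simple:

    // Rough illustration of a per-executable whitelist. Only Crysis.exe comes
    // from the article; the other names are hypothetical placeholders.
    #include <algorithm>
    #include <array>
    #include <cctype>
    #include <iostream>
    #include <string>

    static std::string ToLower(std::string s) {
        std::transform(s.begin(), s.end(), s.begin(),
                       [](unsigned char c) { return static_cast<char>(std::tolower(c)); });
        return s;
    }

    // If the running game matches a known title, enable the CPU-assisted
    // vertex-processing path; otherwise leave the default GPU path alone.
    bool UseCpuVertexProcessing(const std::string& exe_name) {
        const std::array<const char*, 3> whitelist = {
            "Crysis.exe", "ExampleCoJ.exe", "ExampleLostPlanet.exe"};
        const std::string needle = ToLower(exe_name);
        for (const char* entry : whitelist) {
            if (ToLower(entry) == needle) return true;
        }
        return false;
    }

    int main() {
        std::cout << std::boolalpha
                  << UseCpuVertexProcessing("crysis.exe") << '\n'          // true
                  << UseCpuVertexProcessing("SomeOtherGame.exe") << '\n';  // false
    }

Which is also why this sort of optimization only ever shows up in the handful of titles the vendor bothered to list.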