jimmysmitty :
Hard to say. The whole purpose behind DX12 is to lessen the impact the CPU has on a GPU's performance, reducing the work the CPU needs to do before the GPU starts pushing frames.
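As a toy illustration of that point (all numbers here are made up, not benchmark data), the CPU-side cost of a frame is often dominated by per-draw-call driver overhead, which is exactly what a thin API like DX12 shrinks:

```python
# Toy model (hypothetical numbers): CPU time to prepare one frame is
# fixed game logic plus per-draw-call API/driver submission overhead.
def cpu_frame_time_ms(draw_calls, overhead_us_per_call, game_logic_ms=4.0):
    """CPU milliseconds per frame: game logic + draw submission cost."""
    return game_logic_ms + draw_calls * overhead_us_per_call / 1000.0

draws = 10_000
dx11_like = cpu_frame_time_ms(draws, overhead_us_per_call=1.0)  # heavy driver path
dx12_like = cpu_frame_time_ms(draws, overhead_us_per_call=0.2)  # thin driver path
print(f"DX11-like CPU frame time: {dx11_like:.1f} ms")  # 14.0 ms
print(f"DX12-like CPU frame time: {dx12_like:.1f} ms")  # 6.0 ms
```

With the same draw-call count, shrinking the per-call overhead cuts the CPU's frame cost, which is why a weaker CPU should matter less under DX12.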
If anything it should be more like the GTX 980 Ti and have very even performance across most CPUs, unless the game utilizes more cores for other functions, which would give more cores an advantage.
Otherwise it could mean the drivers are still immature, or that somehow Fiji is breaking the laws of physics.
gamerk316 :
The 4K results show the true difference in performance, which is about 2 FPS. Note though, that Fury does have more responsive frame times, likely due to its faster VRAM, so it would likely feel smoother despite a lower absolute FPS. At lower resolutions, the trend remains, with the 980 Ti slightly faster.
What I find more impressive is that at 4K, the i3 apparently isn't a CPU bottleneck, since FPS basically doesn't budge from the i7 numbers. That means both NVIDIA and AMD are purely GPU bottlenecked at 4K, regardless of the CPU used.
As for the 1080p numbers, the i3 appears bottlenecked, given that AMD and NVIDIA perform the same. This is further evidenced by NVIDIA gaining some FPS as you move to an i5/i7. Not sure why AMD is gaining FPS there though; odd driver bug, or noise in the dataset? Bears watching, but isn't conclusive by itself.
That's the thing. I suspect that it is indeed an issue with the drivers for Fiji. It's the uncertain nature of Fiji that makes me not want to draw any conclusion on it, but if it's really superior to the older GCN architectures, there's no reason why this should be happening other than drivers. That doesn't justify buying the Fury X right now, though; it could also be a serious bug or design flaw that never gets solved. In that price range, I'd say the 980 Ti is the safer buy, but I digress.
The same thing was seen in the Ashes of the Singularity benchmark. The R9 390X got a huge boost, surpassing even the GTX 980, but the Fiji GPUs were again very disappointing, even though they actually pack a lot of punch if you look at the specs. So Fiji is not really the best example right now. If we look at the cards with stable drivers, AMD is actually in a very good position.
The point is actually the following: we as consumers want something that lasts as long as possible for as little money as possible. If people have been paying attention, AMD's track record speaks for itself: they have the GPU architectures with the longest life. Leaving the reasons aside, they should be reinforcing this in their marketing. Someone who bought an HD 7970 or similar GPU still doesn't really have to upgrade, since it has essentially become an R9 280X. I personally regret getting an HD 6000 series card instead of waiting for GCN and the HD 7000 series, but only because of DX12; other than that, the card still works great apart from its 1GB memory limit. This is why I'm waiting for the next-gen GPU architecture. I won't make the same mistake of buying into an architecture just as it's about to be replaced by something better, and this time the new generation will include a nice die shrink too.
But in any case, AMD should be pushing the idea that their cards' architecture is future-proof. They might get a temporary boost in sales this way, but they still need to account for the fact that it's temporary: the longer people keep their cards, the fewer cards AMD sells. Or they have to go the Nvidia route and encourage more regular upgrades in various ways.
One can argue about optimization and bias all day, but GCN is still holding its own very well for such an old architecture. Even the newest consoles use GCN rather than something else, because the console makers saw the flexibility and the performance that can be squeezed out of it, particularly with async compute. AMD is the one that has to capitalize on this; most people are oblivious to the strength of the GCN architecture.
The GPUs alone won't save AMD though, so Zen had better pack some punch. I don't think that should be a problem, but AMD has had blunders before, so...
As for an i3 not being a bottleneck at 4K, that's not really surprising. CPUs don't really care about resolution, while the GPU gets taxed more as the resolution goes up.
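A minimal sketch of that reasoning (all timings hypothetical, just for illustration): since the CPU and GPU work in a pipeline, delivered FPS is limited by whichever side takes longer per frame, and raising resolution only inflates the GPU side.

```python
# Toy model: whichever pipeline stage is slower sets the frame rate.
# CPU work per frame is roughly resolution-independent; GPU work is not.
def fps(cpu_ms, gpu_ms):
    """Frames per second when the slowest stage wins."""
    return 1000.0 / max(cpu_ms, gpu_ms)

cpu_i3, cpu_i7 = 12.0, 7.0      # hypothetical CPU frame-prep times (ms)
gpu_1080p, gpu_4k = 10.0, 25.0  # hypothetical GPU render times (ms)

# At 1080p the i3 is the limiter (~83 vs 100 FPS)...
print(fps(cpu_i3, gpu_1080p), fps(cpu_i7, gpu_1080p))
# ...but at 4K the GPU dominates and both CPUs land on the same 40 FPS.
print(fps(cpu_i3, gpu_4k), fps(cpu_i7, gpu_4k))
```

This is exactly the pattern in the benchmarks above: the i3 and i7 separate at 1080p but converge at 4K, because at 4K the GPU term in the max() swamps both CPUs.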