jimmysmitty :
Math Geek :
some RX 470 benchmarks including CrossFire: http://wccftech.com/radeon-rx-470-crossfire-3dmark-11-benchmarks/ it meets minimum VR scores and lands around R9 290 level. not bad for what will be roughly a $150 card
The funny thing is that people are so used to meh returns, since GPUs have been stuck on 28nm, that this kind of performance seems amazing. If 20nm hadn't failed we would have had a much better jump for Hawaii XT (and better thermals) and Maxwell, both of which were supposed to be 20nm. Then 14nm wouldn't have been as big of a jump.
But since 20nm did fail, we have a much larger gap to jump, which allows for much better performance at a much lower TDP. I imagine it would have been the same if Intel had skipped 22nm and gone straight to 14nm.
Honestly, I was expecting this kind of performance. I guess after being involved with computers for so long, it takes a lot to really wow you. I mean a Netburst-to-Conroe kind of wow.
The GTX 900 series was a way more significant jump than the GTX 1000 series. Let's look at it this way: the raw performance jump was essentially the same, with the GTX 970 slightly beating out the 780 Ti just as the GTX 1070 has beaten out the 980 Ti. The change in performance from the 700 series to the 900 series is about identical to the change from the 900 series to the 1000 series.
Now, though, let's take a look at power consumption. The GTX 900 series sharply decreased power requirements over the 700 series, and that was all still on 28nm. The GTX 1000 series, which drops down to 16nm, has higher power requirements than the GTX 900 series. So the node shrink was actually a huge disappointment in my mind. The perf/watt jump to Maxwell was much larger than the perf/watt jump from Maxwell to Pascal, because the jump to Maxwell both increased perf and decreased watts, while the jump to Pascal increased perf but also increased watts, if only by a small amount.
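To put rough numbers on that perf/watt argument, here's a quick back-of-the-envelope sketch in Python. The TDPs are NVIDIA's spec-sheet numbers (GTX 770: 230W, GTX 970: 145W, GTX 1070: 150W); the relative performance values are illustrative placeholders I picked to mirror the claims above, not measured benchmarks.

```python
# Back-of-the-envelope perf/watt comparison across three generations.
# TDPs are NVIDIA's spec-sheet numbers; the "perf" values are rough
# relative-performance placeholders (GTX 770 = 1.0), NOT benchmark data.
cards = {
    "GTX 770 (Kepler, 28nm)":  {"perf": 1.0, "tdp": 230},
    "GTX 970 (Maxwell, 28nm)": {"perf": 1.4, "tdp": 145},  # roughly 780 Ti level
    "GTX 1070 (Pascal, 16nm)": {"perf": 2.2, "tdp": 150},  # roughly 980 Ti level
}

prev_ppw = None
for name, c in cards.items():
    ppw = c["perf"] / c["tdp"]  # performance per watt for this card
    note = f"  ({ppw / prev_ppw:.2f}x perf/watt vs previous gen)" if prev_ppw else ""
    print(f"{name}: perf/watt = {ppw:.4f}{note}")
    prev_ppw = ppw
```

With those placeholder numbers, Kepler to Maxwell works out to roughly a 2.2x perf/watt gain, while Maxwell to Pascal comes out closer to 1.5x, which is the shape of the argument: most of Pascal's gain is raw performance, not efficiency.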