AMD Radeon HD 7970 GHz Edition Review: Give Me Back That Crown!

[citation][nom]recon-uk[/nom]The only time a 384-bit bus comes into play is at silly high resolutions with lots of AA applied, but then the GPU runs out of puff before even utilising it fully. AMD basically took a normal 7970, strapped on some memory with go-faster stripes, then locked higher clocks into the BIOS. Hardly a move forward... just AMD trying to keep up. Whatever happened to that glorious low-power crown they had? AMD are doing what Nvidia have done before: higher clocks with more consumption. So basically, both companies are as bad as each other, but Nvidia got their design better than AMD this time round.[/citation]

Take Metro 2033 at 1080p, with settings and AA high enough to knock the 7970 down to about 60 FPS. It will beat the GTX 680 even on Catalyst 12.6 and older, purely because of its memory bandwidth advantage. There are several games where this happens even at a resolution as modest as 1080p.
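For context, here's my own back-of-the-envelope math, assuming the reference memory specs (6 Gbps GDDR5 on a 384-bit bus for the 7970 GHz Edition, 6 Gbps on a 256-bit bus for the GTX 680):

[code]
# Peak GDDR5 bandwidth: (bus width in bits / 8 bytes) * effective data rate
def bandwidth_gb_s(bus_width_bits, effective_gbps):
    """Peak memory bandwidth in GB/s."""
    return bus_width_bits / 8 * effective_gbps

print(bandwidth_gb_s(384, 6))  # 288.0 GB/s -- HD 7970 GHz Edition
print(bandwidth_gb_s(256, 6))  # 192.0 GB/s -- GTX 680
[/code]

That 50% bandwidth edge is exactly what shows up once heavy AA starts hammering the memory bus.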
 

Sonny73N

Distinguished
Nov 30, 2011
In a year or so, all of the current-gen cards will be old and the urge to upgrade will come back. So when I shop, performance per price always comes into consideration.

Radeon 7970 for $500
GTX 670 for $400
Take your pick. I know mine.
 

Reference my A$$, it's AMD's version of 'OMG nVidia is kicking my rear-end' -- AMD can call it whatever they want -- it's an OC'ed GPU with a 'GHz' label slapped on.
 
[citation][nom]monsta[/nom]Would be a decent review if you guys chose an aftermarket overclocked 670 and 680 to do your comparison, as the GHz 7970 is an out-of-the-box overclocked card and should be compared to the equivalent cards from Nvidia.[/citation]

The GHz Edition is a reference card, not a factory-overclocked card. Are you going to complain about the 7870 and the 7770 being factory-overclocked just because they are GHz Edition cards? That's not fair at all. This is a reference card and should be compared against other reference cards. It's just like how the 900MHz 7750 is still a reference card despite being a speed bump over the previous 800MHz 7750. It's not using a new GPU, but it's still a reference card. We don't call CPUs that differ only in multiplier (from better binning) and price mere overclocked versions of the previous CPUs; they are different models. Why should we treat this any differently?
 
[citation][nom]Jaquith[/nom]Reference my A$$, it's AMD's version of 'OMG nVidia is kicking my rear-end' -- AMD can call it whatever they want -- it's an OC'ed GPU with a 'GHz' label slapped on.[/citation]

The thing is, it is not the 75MHz bump making the difference; it is the drivers. As for kicking rear ends, to be honest I don't think anyone is kicking anyone else's rear end.
 

shin0bi272

Distinguished
Nov 20, 2007
[citation][nom]redemptionse[/nom]How can you say that the 680 wins in Crysis 2 when the new GHz edition has higher frames in every chart? Who proofreads this stuff?[/citation]

I was going through the comments to see if anyone else noticed this, and in a way you did. If you look at the DX11 scores, the 680 is ahead in every one. But in the DX9 benchmarks the 7970 ekes out a win in each.
The combined graph is misleading because A) no one with a DX11 card is going to play a DX11 game in DX9 unless it's a low-end DX11 card, and B) by putting the two charts together, you get the impression that the 7970 won, because its DX9 performance puts it at the top every time.

They really need to use two charts so that people don't get the wrong idea.
 

redemptionse

Honorable
May 2, 2012
[citation][nom]Jaquith[/nom]Reference my A$$, it's AMD's version of 'OMG nVidia is kicking my rear-end' -- AMD can call it whatever they want -- it's an OC'ed GPU with a 'GHz' label slapped on.[/citation]
Compared to nVidia saying "omg AMD is kicking our butts, so let's automatically OC our next GPUs close to their max headroom"?
 

verbalizer

Distinguished

nice 'uber'...
good call.
 

shin0bi272

Distinguished
Nov 20, 2007


This is why:
http://www.tomshardware.com/reviews/core-i7-3930k-3820-test-benchmark,3090-10.html
 

verbalizer

Distinguished

nice find as well...

[image: Crysis 2 benchmark chart, 1920x1080]
 
It's been that way for a good while now; AMD just lost their low-power crown.

Nvidia have seriously led in performance since the 8800GTX...

It is a bit obvious that Nvidia are the better company, and they even have more support and bonus features.

AMD are being drowned by Intel in the CPU arena; what will happen with GPUs?

Radeon 6000 versus GTX 500: AMD won out in performance for the money and performance per watt for quite a while, and this was reflected in the monthly best gaming graphics cards articles. The two companies have traded places as the value leader several times. Who has the highest-end card at any given time means nothing more than bragging rights.

Also, AMD isn't getting drowned in the CPU industry. They are improving and are very good if you know what you're doing. Take the eight-core CPUs and disable one core per module, and you'll have quite the CPU for gaming: each module's resources are now dedicated to one core instead of two, which speeds up the remaining core by improving its performance per Hz while greatly improving gaming performance per watt. Sure, it won't beat the more expensive K edition CPUs, but it will do well.
 

I'm real about it -- it's the same GPU with a better AMD bin; Intel and others do the same all of the time. As far as drivers go, 'today' vs. 'next week', both nVidia and AMD re-write and 'optimize' their drivers every month, with beta drivers every few weeks. nVidia just improved their drivers.

nVidia's X79 'official' PCIe 3.0 patch - http://nvidia.custhelp.com/app/answers/detail/a_id/3135/session/L3RpbWUvMTM0MDIyMzU2OC9zaWQvaDEzbE45X2s=

nVidia's GeForce 304.48 Driver (beta) 10%~60% improvement - http://www.geforce.com/drivers/results/45970

Updating a driver is common to all GPUs, AMD or nVidia, so 'what day' and 'what version' you use will affect the FPS. Meaning next week, the tests will be totally skewed.

I 'get' that most folks don't run SLI, CF, 120Hz (3D), or multiple monitors -- I do, and I see the differences, and I also see the long-term failure rates.

I used to be a huge ATI/AMD advocate, but after proportionally higher failure rates with AMD, nVidia is my choice, and it's the choice of the vast majority of system builders and extreme gamers. Further, I have my favorites: EVGA or ASUS for nVidia, and XFX or ASUS for AMD -- though MSI has had some interesting non-reference GPUs -- though most of the builds I do are water-blocked anyway.
 

The door swings both ways, and the answer is non-reference GPUs.
 
5000 vs. 400 series: it happened there too, but once longevity came into play, the 480 is still kicking serious arse to this day, while the 5870 has been left behind.

The 480 can dance with a 5970 in certain games and tests (overclocked); drivers really made this card brilliant.

The 580 will last longer than a 6970. AMD has shortcomings in minimum FPS, and has for a good while; the reasons are unknown, but that is how it has been.

Nvidia did that purely because of how much power those cards consume, and thus how much heat they generate. The 580 also has less VRAM and inferior scaling, so for multi-GPU setups, the 6970 wins substantially. Both companies have advantages and disadvantages. I'm not going to pull the fanboy card on you based on our previous conversations, but I can see how some people might think that of you.
 

verbalizer

Distinguished
Take the eight core CPUs and disable one core per module and you'll have quite the CPU for gaming because each module's resources are now dedicated to one core instead of two cores and that speeds up the one core without increasing frequency.
Turning off half of each module defeats the purpose of that chip...
Just get an FX-4170.

But then lose to the i3-2120... :p
 

vertigo_2000

Distinguished
Feb 26, 2007
[citation][nom]ubercake[/nom]Here's what I don't get about any of these GPU reviews... In the 'Best Gaming CPUs...' articles each month, Tom's tells everyone all you need is an i5-2500K, yet all the GPU reviews are using a 3960X now. Why is this necessary? Tom's should put their money where their mouth is and use a 2500K for all the reviews, or tell us why it is necessary to use a 3960X in order to run appropriate GPU benchmarks.[/citation]
Using a better CPU ensures the GPUs won't be bottlenecked by the CPU, so the benchmarks should be more accurate.
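To illustrate with a toy model (made-up numbers, nothing from the review): a frame can't finish faster than the slower of the CPU and GPU work per frame, so a slow CPU makes fast GPUs look identical.

[code]
# Toy frame-time model (illustrative numbers only): each frame is gated
# by whichever of the CPU or GPU takes longer, so a slow CPU hides
# differences between fast GPUs.
def fps(cpu_ms, gpu_ms):
    """Approximate FPS when CPU and GPU work overlap per frame."""
    return 1000 / max(cpu_ms, gpu_ms)

print(fps(cpu_ms=12, gpu_ms=9))   # ~83 FPS: CPU-bound, faster GPU "ties"
print(fps(cpu_ms=12, gpu_ms=11))  # ~83 FPS: CPU-bound, slower GPU "ties" too
print(fps(cpu_ms=6, gpu_ms=9))    # ~111 FPS: faster CPU lets the GPUs separate
print(fps(cpu_ms=6, gpu_ms=11))   # ~91 FPS
[/code]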
 


The 4170 is slower than this modded eight-core despite having a higher frequency, so that's not a good idea. The 4170 has much lower performance per Hz than the modded eight-core because it has two modules, not four modules with one active core per module. Take the 8120 and overclock it to the 8150's frequencies, and at load it will hit 3.6GHz-3.9GHz Turbo and beat the 4.2GHz-4.3GHz Turbo 4170 (or an overclocked 4100; they are the same) considerably. With overclocking, this modded eight-core can trade blows with the more expensive non-K edition i5s because it can overclock better than they can (those i5s can usually only hit a 20%-30% overclock at most). It would use more power, but not unbearably more like it does with all eight cores enabled. It is better binned than the 4100 and 4170, so it uses less power at the same frequency despite outperforming them significantly.
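For anyone wanting to try this without a BIOS core-disable option, here's a rough sketch of how it could be done on Linux via sysfs (my own example, not from this thread; it assumes the kernel exposes the two cores of a module as thread siblings, which recent kernels do for FX chips):

[code]
# Offline the second core of each Bulldozer module via Linux sysfs.
# Run as root. ASSUMPTION: the two cores of a module show up together
# in thread_siblings_list (e.g. "0-1"); check your kernel's topology
# files before trusting this. Disabling cores in the BIOS, where the
# board supports it, achieves the same thing persistently.
import glob
import re

def cpu_num(path):
    return int(re.search(r"cpu(\d+)/", path).group(1))

paths = sorted(
    glob.glob("/sys/devices/system/cpu/cpu[0-9]*/topology/thread_siblings_list"),
    key=cpu_num,
)

seen_modules = set()
for path in paths:
    with open(path) as f:
        siblings = f.read().strip()  # e.g. "0-1" -- both cores of one module
    if siblings in seen_modules:
        # Second core of a module we already kept a core from: offline it.
        cpu_dir = path.rsplit("/topology", 1)[0]
        with open(cpu_dir + "/online", "w") as f:
            f.write("0")
    else:
        seen_modules.add(siblings)
[/code]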
 

verbalizer

Distinguished

I'm actually gonna try that on my friend's 8120 unit. I have heard you say this repeatedly and, WTH, I'm gonna try it and see...

Might not mean much to you, but that's actually a compliment. But if it's a fail, then I'm gonna light you up...
:D :lol: :heink:
 
If you're paying $450 (or more) for a graphics card, you're going to get a powerful card, period. Whether the red one is faster than the green one is hardly going to matter. This is AMD saying "ok ok, you found out the HD 7970 can run a lot faster, so we'll set it that way for you" and then charging more for it. I'm not sure it's merely a faster HD 7970, though, not unless and until we find out why this card's BIOS couldn't be applied to a vanilla HD 7970.
So, if performance doesn't really matter much at this level, what does? Power consumption? Not hardly, even for someone like me who tries to minimize power use; the difference isn't big enough to matter. Turn off a light, or set your thermostat a fraction of a degree higher in the summer.
It's the heat. Nothing good comes of dumping another 9°C (or more) of radiant (Radeont?) heat into your case. And the noise, noise, Noise, NOISE (cue the Grinch); yeah, I see the performance improvements, but overall I don't see the relevance or added value in this card.
I DO want to see what the driver improvements bring to other members of the Radeon family though.
 

ForTehNguyen

Honorable
Jun 6, 2012
The idle/load temperatures and noise are not impressive, especially when you can get something like the Asus GTX 670 DirectCU II TOP. 37 dB idle and 48 dB load? No thanks.
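For perspective, by the common rule of thumb (my math, not a measurement from this review), every 10 dB reads as roughly twice as loud to the ear, so the load number is worse than it looks:

[code]
# Rule of thumb: +10 dB is perceived as roughly twice as loud.
def loudness_ratio(db_a, db_b):
    """Approximate perceived loudness of source A relative to source B."""
    return 2 ** ((db_a - db_b) / 10)

print(round(loudness_ratio(48, 37), 2))  # ~2.14x: load vs. idle on this card
[/code]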
 