AMD Radeon HD 7970 GHz Edition Review: Give Me Back That Crown!



Are you really complaining about a reference card's temp/noise not beating a card with a non-reference cooler and lower power consumption? Besides, AMD doesn't seem to like having great reference coolers anyway. We should all know to avoid AMD's reference coolers right now.
 

verbalizer

Distinguished

Oh, so you haven't done this yourself and are just going off an article?
you need to be on someone's payroll...

JUST KIDDING.

These results couldn't be much more definitive. In every case but one, distributing the threads one per module, and thus avoiding sharing, produces roughly 10-20% higher performance than packing the threads together on two modules. (And that one case, the FDom function in picCOLOR, shows little difference between the three affinity options.) At least for this handful of workloads, the benefits of avoiding resource sharing between two cores on a module are pretty tangible. Even though the packed config enables a higher Turbo Core frequency of 4.2GHz, the one-per-module config is faster.

Our test apps, obviously, are not your typical desktop applications, and they may not be a perfect indicator of what to expect elsewhere. However, since many games and other apps are lightly threaded, with three or four threads handling the bulk of the work, we wouldn't be surprised if one-per-module thread affinities were generally a win on Bulldozer-based processors.

Naturally, some folks who have been disappointed with Bulldozer performance to date may find solace in this outcome. With proper scheduling, as may come in Windows 8, future AMD processors derived from this architecture may be able to perform more competitively. Unfortunately, Windows 8 probably won't ship during the model run of the current FX processors.

At the same time, these results take some of the air out of AMD's rhetoric about the pitfalls of Intel's Hyper-Threading scheme. The truth is that both major x86 CPU makers now offer flagship desktop CPU architectures with a measure of resource sharing between threads, and proper scheduling is needed in order to extract the best performance from them both. (This situation mirrors what's happened in 2P servers in recent years, where applications must be NUMA-aware on current x86 systems in order to achieve optimal throughput.) A gain of up to 20% on a CPU this quick is certainly worthy of note.

Trouble is, right now, Intel has much better OS and application support for Hyper-Threading than AMD does for Bulldozer. In fact, we're a little surprised AMD hasn't attempted to piggyback on Intel's Hyper-Threading infrastructure by making Bulldozer processors present themselves to the OS as four physical cores with eight logical threads. One would think that might be a nice BIOS menu option, at least. (Hmm. Mobo makers, are you listening?)

At any rate, application developers who want to make the most of Bulldozer are free to affinitize threads in upcoming revisions of their software packages anytime. If AMD can persuade some key developers to help out, it's possible the next round of desktop applications could benefit very soon.
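For anyone wondering what "affinitizing threads" actually looks like, here's a minimal sketch, not anything from the article itself. It assumes Windows and the SetThreadAffinityMask API, and it assumes an FX-8xxx where logical CPUs 0/2/4/6 land on different modules (worth verifying on your own system before relying on it):

[code]
/* Minimal sketch: pin each worker thread to one core per Bulldozer module.
   Assumes Windows and that logical CPUs 0, 2, 4, 6 sit on different modules. */
#include <windows.h>
#include <process.h>
#include <stdio.h>
#include <stdint.h>

static unsigned __stdcall worker(void *arg)
{
    int module = (int)(intptr_t)arg;
    /* One core per module: use every second logical CPU (0, 2, 4, 6). */
    DWORD_PTR mask = (DWORD_PTR)1 << (module * 2);
    SetThreadAffinityMask(GetCurrentThread(), mask);
    printf("worker %d pinned to logical CPU %d\n", module, module * 2);
    /* ... the real workload would go here ... */
    return 0;
}

int main(void)
{
    HANDLE threads[4];
    for (int i = 0; i < 4; i++)
        threads[i] = (HANDLE)_beginthreadex(NULL, 0, worker,
                                            (void *)(intptr_t)i, 0, NULL);
    WaitForMultipleObjects(4, threads, TRUE, INFINITE);
    for (int i = 0; i < 4; i++)
        CloseHandle(threads[i]);
    return 0;
}
[/code]

On Linux the equivalent would be pthread_setaffinity_np with a cpu_set_t, but the idea is the same: keep one busy thread per module.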
 


Your FX-8120 core/module question is totally worthy of a deeper look, but I'm not sure who among us has the time to pull it off any time soon. I'm committed to other projects outside the SBM, and not sure I could pull off a worthy FX-8120 build at $500. Also a problem: in an SBM, all overclocked tests must run the same settings. If we disable cores for games, I'd need to leave them disabled for Apps too. From what I have read (not tested), disabling 4 cores … leaving 4 cores on 4 modules, does outperform the FX-4100 (4 cores on 2 modules). I understand the frustration of games/OS not taking advantage of the architecture's threading abilities, but otherwise feel overclocked FX can game right now as is. I'd lean towards single-GPU configs of any size, and while low-resolution bottlenecks will exist, it's more about the experience at the intended resolution/details. But I personally own and have tested so many Phenom IIs that I also have to admit FX didn't impress me enough to warrant further attention. I'd want to tinker on ALL hardware (and share data) given the time/opportunity, but that's far from reality.

Quote from pauldh in the current $500 SBM article; he seems to agree with me.
 

verbalizer

Distinguished

but you haven't actually done or seen this yourself....
that's my point.
 
Hmmm, was this an AMD review?
What has BD got to do with this?

[citation][nom]recon-uk[/nom]As it has been for a good while now, just AMD lost their low power crown. Nvidia have led in performance seriously since the 8800GTX.... It is a bit obvious that Nvidia are the better company, and they even have more support and bonus features. AMD are drowned by Intel in the CPU arena, what will happen with GPU's?[/citation]

Three guesses.
 
Ummm, Intel have all areas covered with CPUs and they deliver better perf/watt.
How is that wrong?

True, but with the modding in mind, AMD has the better performance for the money (for the cheapest model at each default core count, once overclocking and the mod are factored in) and isn't far enough behind in power efficiency for that to make much difference in the power bill unless you're doing huge overclocks that aren't very sensible. What Intel is truly winning is default value, and although that is a more important win overall, it has no value for people such as us. Granted, I wouldn't build an incredibly high-end system based on AMD right now, but most builds can work with AMD just as well as Intel (better in many situations) if you have someone who knows the mod and overclocking building them. Both have their places.
 
Modding? WTF?

You mean everyone will go out and dump LN2 over their CPUs?

You really are silly Blaz...

Intel win in overclocks too, IB can oc really far on LN2...

I consider disabling one core per module to be a mod because it changes the CPU from its default, yet is neither overclocking nor underclocking on its own. What does this have to do with liquid nitrogen?
 

verbalizer

Distinguished
I will say this (this is a 7970 article LoL)

if running AMD and the FX chip, make sure it's the FX-81xx or nothing at all from AMD.
APU not included because I kinda like that concept.

for other AMD chips, keep your Deneb C3 or Thuban as long as you can and check out Piledriver; just skip Bulldozer altogether.
(except FX-81xx)

now back to this [strike]GTX[/strike]... HD 7970 thread.
:whistle:
 

A Bad Day

Distinguished
[citation][nom]urban legend[/nom]Looks like the GTX 670 is still the best buy currently available. This mentality of higher clocks from AMD / HD Radeon is the wrong path for true success. Now if someone will give me one to play with I'd appreciate it... I can't afford the GTX 690, putting that in there was a tease.[/citation]

Well, they can't simply release a Radeon 7980 with a completely new die. Although dropping the price helps, I think AMD wants the crown of performance.
 

verbalizer

Distinguished

7980..... hmm, interesting thought.
a combatant for the GTX 685, perhaps.

the conspiracies commence. :heink:
 


Why can't they do that? If AMD really wanted to, I'm sure that they could make a new die. If they want to avoid the binning problems associated with huge dies, then at least theoretically, they could use two dies on a single chip and connect them through a high-bandwidth link (maybe an implementation of HyperTransport).

All cores are switchable with Intel... you have no point.

It doesn't benefit Intel at all... It benefits AMD's Bulldozer greatly due to how its modular architecture was designed. I have a point, yet you pretend that it's not there and reply back anyway.
 
Hmmm, moooar cores LOL

Cheers for the thumbs down, this topic turned into a CPU debate....

Mooaarr cores, why turn them off?

More than four CPU cores don't help much in gaming performance, so why leave the second core in each module active in a gaming machine, especially when it leeches performance from the other core in the module?
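If you'd rather not disable cores in the BIOS, you can approximate the same thing per game by restricting the game's process to one logical CPU per module. Below is a hypothetical launcher sketch, assuming Windows, assuming the FX-8xxx's cores are enumerated pairwise per module (so mask 0x55 picks CPUs 0, 2, 4, 6), and with "game.exe" as a placeholder path:

[code]
/* Hypothetical launcher: start a game with an affinity mask of 0x55
   (logical CPUs 0, 2, 4, 6 -- one core per module on an FX-8xxx,
   assuming Windows numbers the cores pairwise per module). */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    STARTUPINFOA si = { sizeof(si) };
    PROCESS_INFORMATION pi;
    char cmd[] = "game.exe";   /* placeholder -- point this at the real game */

    /* Start suspended so the mask is in place before the game runs. */
    if (!CreateProcessA(NULL, cmd, NULL, NULL, FALSE,
                        CREATE_SUSPENDED, NULL, NULL, &si, &pi)) {
        fprintf(stderr, "CreateProcess failed: %lu\n", GetLastError());
        return 1;
    }
    SetProcessAffinityMask(pi.hProcess, 0x55);   /* binary 01010101 */
    ResumeThread(pi.hThread);
    CloseHandle(pi.hThread);
    CloseHandle(pi.hProcess);
    return 0;
}
[/code]

The built-in "start /affinity 55 game.exe" command does roughly the same thing without writing any code.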
 

cangelini

Contributing Editor
Editor

These are all reference cards, David. Why would it make sense to include NON-reference Nvidia cards, but reference AMD? If you read the Test Setup page, then you know that both AMD and Nvidia released new drivers claiming significant performance improvements. So, I spent two and a half straight days re-testing five different cards. *That's* why old data isn't in the story. There's no agenda; just a colossal effort to make sure Tom's readers have the most up-to-date data using the latest software from both companies.
 
So yes, AMD suck yes?

No, not at all, Bulldozer sucks.

End of. Phenom II is still rocking in my Dragon rig and I love it; AMD was my old favourite until we hit the floor hard with BD.

No point saying modding is going to help, when all it does is make it a lesser CPU.

The modded 8120 is significantly faster than any Phenom II. That's why it matters. Heck, doing the same to the FX-6100 would also give much better performance than a Phenom II. With these mods, the 8120's and 6100's gaming performance (especially when overclocked) is better than Phenom II's and is only beaten by Intel's K-edition SB/IB CPUs, Intel's EE SB/IB CPUs (maybe the LGA 1366 EEs can win too), and the i7-3820. All of these are much more expensive. The non-K i5s and i7s won't beat the modded/overclocked FX-6100 and FX-8120 in gaming performance.
 

verbalizer

Distinguished

:non:
 
Hmmm, seems the AMD fan has gone nutty about all things AMD; this was a GPU review?

Yes, the AMD fan who previously had an Nvidia GTX 560 Ti and usually builds gaming and other high-end computers with Intel CPUs... Nice try there. It would take something such as the 2500K to beat the 6100 and 8120 when they have the mentioned mod and a good overclock, because the non-K i5s and other similarly performing CPUs can't overclock far enough to beat them due to their mostly locked multipliers and hardly overclockable BCLK.
 


We all already know that most games can't effectively use even four cores... Having three that can overclock a little further would not hurt much. What I'd be more worried about with AMD CPUs is how they will compete with Haswell.
 

verbalizer

Distinguished

you need a disclaimer to let people know these are your personal opinions.

and I'm speaking about the non-K series Intel chips not beating AMD and FX.
have you lost your mind, or maybe you need to add a disclaimer and then re-word what you say?
 