Do AMD's Radeon HD 7000s Trade Image Quality For Performance?



That "are" is also supposed to be an "is", not an "are". It's referring to intelligence as a noun and a singular noun at that, not the people. Even if it was modifying people and not intelligence in that sentence, people is considered a singular noun.
 


Being easily noticeable is not a prerequisite for something to be a problem. I don't really care whether or not Tom's recommended P4s over Athlon 64s unless you have a point to make. Ever since then, Intel has had faster CPUs. The only time AMD came close again was with Phenom II versus Core 2. Since that time, Intel has only improved and AMD has, overall, declined. I don't care much about the brand name of the CPU or really any other part in my system. I would buy Intel for a gaming machine right now because Intel's i5 CPUs are the best fit for that: going above an i5 runs into huge diminishing returns for gaming, since everything beyond the i5s just adds cores, threads, and a little cache.

AMD is not for gaming anymore, or at least not high end gaming. Some of AMD's CPUs are okay up to 1080p gaming. All of AMD's *high end* CPUs only have high end performance in the most highly threaded work, which uses either 6 (Phenom II x6 and FX-6xxx) or 8 (FX-8xxx) threads and utilizes them well. AMD doesn't compete with Intel in performance per core/thread anymore. If that changes, then I'll gladly use an AMD processor in my system. However, it probably won't, because AMD would need to not only make up a huge amount of lost ground, but then beat Intel significantly on top of that. That would require something like doubling FX's IPC while decreasing power usage. Is that even possible right now?

Well, only if AMD can do another die shrink to 22nm and modify the design and architecture greatly. I can see Piledriver cores matching Nehalem for IPC and almost matching Sandy Bridge for power efficiency; I've done the math and it's possible. Whether AMD actually managed to do what I thought they should do, or did even better, is up for discussion, but that discussion won't get anywhere, and claiming to know how much faster Piledriver will be than anything else is just wrong unless you're inside AMD or something like that. No amount of leaks will tell for certain what something will do. If I remember correctly, Bulldozer was supposed to be as fast as or faster than Sandy Bridge according to many of its *leaks*, but it doesn't even come close.

AMD isn't being shown in an unfair light here. Sandy Bridge is about 50% faster than Bulldozer at the same clock speed. Phenom II isn't much better than FX per clock and doesn't match FX's clock frequencies, so the two are roughly equal, and both are roughly on par with Core 2.

As for an Intel CPU beating an AMD CPU and then losing later on? Are you referring to something like a Core 2 Duo versus AMD quad-core comparison? If so, then the more recent test was probably more multi-threaded than the earlier one. If the earlier test used only one or two threads, then it makes sense for the Intel chip to win (as long as its clock speed isn't significantly lower than the AMD's) and then lose once the test becomes quad-threaded. A lot of games are either quad-threaded or getting there, so at the low/mid end, AMD and Intel are still competing because Intel's lower core counts at the low end more or less negate the per-core performance difference.

For example, an FX-4100 and an i3-2120 trade places as the winner in different games: the FX wins somewhat in some quad-threaded games, and the i3 wins somewhat in lightly threaded games and in quad-threaded games that don't spread the load evenly (one or two heavy threads with the rest light). That pattern hurts AMD but helps Intel's i3s, because the i3s have Hyper-Threading, so they effectively offer two strong threads and two weak ones: the strong ones are the physical cores and the weak ones are the logical threads.

When you actually look for reasons for Intel to win and for reasons for AMD to win, it's a lot easier to find the truth of the matter, because you're not being biased by wanting a specific brand to win. AMD's best gaming processor among the FXs is the FX-4100, and their best Phenom II is the 960T (if overclocked, because it should overclock better than the Deneb quad cores do). Given a choice between the two, the 960T is the obvious winner. Even if it can't be unlocked to six cores, it still has more IPC and should reach about the same clock frequencies as the FX processors. Whether it will be more or less power efficient, I'm not certain. However, it would still be slower than Intel's Nehalem quad-core CPUs, which are significantly slower than Sandy Bridge CPUs, which in turn are somewhat slower than the upcoming Ivy Bridge CPUs. We already pretty much know how Ivy Bridge performs at stock (not enough info on low/mid-end overclocking just yet), but we don't really know how well Piledriver cores will stack up.

As of right now, AMD is faster (for the price) only in some apps. Intel rules most programs as long as they aren't more than quad-threaded. Most non-workstation and non-server programs are single-, dual-, or quad-threaded, with single and dual still the most common, though quad is catching up little by little. This is why AMD is losing... AMD's many-slow-cores approach, driven by their apparent inability to make something significantly faster than Phenom II per core, doesn't go well with software that uses 70% or less of the cores in an AMD processor. A six- or eight-core FX is roughly equal to a quad-core FX at the same frequency in lightly and quad-threaded software, and so it's just wasting money and electricity.
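To put rough numbers on that point, here's a minimal sketch in Python; the core counts and relative per-core speeds are made up for illustration, not benchmark results:

```python
# Rough illustration with made-up numbers: effective throughput when
# software only spawns a limited number of threads. Idle cores add nothing.

def effective_throughput(cores, per_core_speed, threads_used):
    """Work per unit time; only the cores that actually get a thread count."""
    return min(cores, threads_used) * per_core_speed

many_slow = (8, 1.0)   # FX-style: eight slower cores (relative speed 1.0)
few_fast  = (4, 1.5)   # i5-style: four faster cores (relative speed 1.5)

for threads in (1, 2, 4, 8):
    a = effective_throughput(*many_slow, threads)
    b = effective_throughput(*few_fast, threads)
    print(f"{threads} threads: 8 slow cores = {a:.1f}, 4 fast cores = {b:.1f}")
```

With one to four threads, the fewer-but-faster chip wins every time; only a fully loaded eight-thread workload lets the eight-core part pull ahead, which is exactly why the six- and eight-core FXs are wasted on lightly and quad-threaded software.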

i3s and FXs give nearly identical gaming performance from a visual standpoint (10-15% faster either way isn't a very noticeable difference unless something else is seriously wrong with the machine), but the FXs do it while drawing about 75-150% more power than the i3s. Even when the FXs are cheaper, it only takes two to three years to recoup the difference if your computer is on for about four to six hours a day, and even less time if it's on more than that. You don't even need to be using it, because unless you outright shut it down, the FX draws a lot more power than the i3s (as shown by every site that tested power usage; they can't all be so biased that they lie about power draw). If the machine is on more like 18 hours a day to 24/7, then you'll recoup the slightly higher upfront cost several times over during the lifetime of the machine, unless it manages to die within a few months. I got my numbers from an AMD fan too, so no, I'm not using bad numbers.
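If you want to sanity-check that payback math yourself, here's a back-of-the-envelope sketch; the price premium, wattage gap, and electricity rate are assumptions you should swap for your own numbers:

```python
# Back-of-the-envelope payback estimate (all inputs are assumptions):
# how long power savings take to cover a higher upfront CPU price.

def payback_years(price_premium_usd, extra_watts, hours_per_day, usd_per_kwh=0.12):
    kwh_per_year = extra_watts / 1000 * hours_per_day * 365
    savings_per_year = kwh_per_year * usd_per_kwh
    return price_premium_usd / savings_per_year

# Example: the pricier chip costs $30 more, but the cheaper one draws ~60 W extra.
for hours in (4, 6, 12, 24):
    print(f"{hours} h/day: pays for itself in {payback_years(30, 60, hours):.1f} years")
```

With those (assumed) figures, four to six hours a day lands in the two-to-three-year range mentioned above, and 18-24 hours a day drops the payback time to well under a year.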

Face it, AMD is no good for high-end gaming and, over a few years, is more expensive than Intel for low/mid-end gaming. AMD's low/mid-end CPUs use more power than Intel's high-end i5s and quad-core i7s! On the desktop, AMD is really only useful for low/mid-end, highly threaded environments or very low-end PCs right now (in the mobile markets, AMD is really killing Intel, because Intel has neglected them compared to the desktop market). Hopefully, Piledriver/Trinity will change that, but it might not.

My Phenom II x6 1090T was a far better purchase for my work machine than any similarly priced Intel processor because of its highly threaded performance. However, I realize that for gaming and such, it would be slower than an i3 in lightly threaded work and not too much faster than an i3 in quad-threaded work. With my 25% overclock to just over 4GHz, it would beat the i3s in quad-threaded performance significantly, but not in power efficiency, since it already uses about twice as much power at stock as the i3s do. For highly threaded work, though, it is pretty fast: it hangs with the FX-8100, and both sit right behind the i7s.
 
[citation][nom]wolfram23[/nom]What? Performance wise, the 7870 is faster than a 6970. Not to mention they clearly included the 7970 in all the screenshots.Image quality does not ever have to be sacrificed on lower end cards. Setting for setting, they should produce identical high quality images with the only difference being framerate. Whether you look at a 6570 or a 7970. The changes that must be made should be done by the user, by setting game settings to lower quality or reducing driver settings ("Performance" vs "HQ").On topic: Good article, glad to see it's a simple fix. Sucky for AMD that they will likely get some bad PR for this, seems like a pretty honest mistake given that the totally fixed driver has identical performance - there's no reason for them to have done it on purpose. Not surprising to see the rage in the comments though. Nvidia fanboys calling out fails left and right...[/citation]You should look at the specs of each card. The 6970 is clearly better than the 7870.

Regardless, the 7870 is a replacement for the 6870, which it clearly beats, and the 7970 is a replacement for the 6970, which it clearly beats. My point was that if you are going to compare across series, you should stay within the same performance line - 6870 vs 7870, 6970 vs 7970.
 
LaHawzel: AMD fanboy much? Well, unlike you, I do not live in a basement, and I have owned BOTH an HD 7970 and a GTX 680... as well as a good 2 ms monitor, and the image quality difference is obvious if you are not deaf, blind, and stupid. If you wish, I can show you the return receipt for the HD 7970; the card was complete crap, with driver issues beyond belief, and the image quality seemed far worse than the GTX 680 on the SAME image quality settings. The difference would be obvious to anyone who has a half-decent 2 ms 1080p monitor... please update your 8 ms 15-inch Dell or do some research before making claims you cannot back up.
 
I love how they delete all the negative comments. Don must be thrilled that this has also turned into an AMD vs Nvidia flame war and taken the heat off the fact that his article, methods, and conclusions are all without merit.
 


Yeah, no kidding, right?

We totally could have published this weeks earlier, before there was a fix.

Instead, we gave AMD the heads up and worked with them to remedy a problem that they appreciated being told about.

Apparently, giving a shizz about image quality and looking into issues isn't OK with some commenters if AMD is involved because they'd rather we bury it.

If it was Nvidia who had a problem, and we found it, none of those people would have complained.

Pathetic. They know who they are, and they know it's true, too.
 
The difference is so minimal it's not funny. Who cares? It's being/been fixed, and if you do notice it, you need to get a life and stop being so nitpicky about the smallest things that you won't even notice while playing the game.
 


Future games are going to rely on compute performance for improvements in quality and performance. Only an idiot would say compute doesn't matter as more and more games that rely on compute power come out over the next few years. What Nvidia did by having crap compute was genius... It forces Nvidia buyers who get the Kepler cards to upgrade as soon as compute becomes the focus for games (this is happening right now, so it probably isn't more than a few years off). It also means that, as of right now, Nvidia can use much smaller dies than they used to, because their gaming performance in non-compute-heavy games is much higher per square millimeter and per watt than it would have been otherwise. This basically means more 680 GPUs pass binning, and they are cheaper to make than they would be with a 500+ mm² die. More of them pass binning because there is a much lower chance of a defect landing on a 300 mm² die than on a 500 mm² die (less area that can be defective).
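That binning point follows from a standard yield argument. Here's a tiny sketch using a simple Poisson defect model; the defect density is an assumed value for illustration, not a foundry figure:

```python
import math

# Simple Poisson yield model: yield = exp(-defect_density * die_area).
# A smaller die exposes less area for a killer defect to land on.

def defect_free_fraction(die_area_mm2, defects_per_mm2=0.002):
    """Fraction of dies expected to come out with zero defects."""
    return math.exp(-defects_per_mm2 * die_area_mm2)

for area in (300, 500):
    print(f"{area} mm^2 die: ~{defect_free_fraction(area):.0%} defect-free")
```

With the same (made-up) defect density, the ~300 mm² die yields noticeably better than a ~500 mm² one, which is the whole "more GPUs pass binning" argument.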

All of this came at a perfect time, because compute-heavy games aren't mainstream yet, so Nvidia gets to milk this while it lasts and AMD gets screwed in the short term because they took the opposite route. Hopefully for AMD, compute-heavy games will arrive within two years, but that's unlikely, so Nvidia will probably stay the course for a while yet. Unless AMD manages a compromise between compute and non-compute gaming in the short term, or manages to unify performance for both, they're in trouble. Anyway, I'd still take a 7950 over a GTX 680 because it's almost identical to a 7970 at the same clock frequencies, and so far it seems that without Catalyst holding the 7900 cards back, they have more overclocking headroom. I would also use the compute performance for non-gaming work fairly often (maybe some bit-mining and such could help with the power-cost issue caused by AMD now being less power efficient due to its compute-focused GPUs).

All in all, a very smart move by Nvidia. It may hurt CUDA/PhysX customers, but their numbers seem to be decreasing, and Nvidia probably knew this and acted accordingly.
 


The 7870 beats the 6970 by a fairly wide margin. The 7870 is almost identical to the 7950 and GTX 580 in stock performance. The 7850 competes with the 6950/6970.

The 7870 clearly has better specs if you understand how graphics technology scales in performance. Increasing core count on a GPU increases performance much less than increasing clock frequency and per-core IPC does; i.e., a 20% increase in core count is usually less important than a 20% increase in clock frequency, all else being equal. That the 7870 has fewer cores means little to nothing, because GCN has a little more IPC than VLIW4 and the 7870 has a much higher clock frequency than the 6970.

This is why a 6950 overclocked to a 6970's clock frequencies is nearly identical to a 6970 in performance even without being unlocked, and the same is true for most cards, even the Radeon 7000 series.
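As an illustration of that scaling claim, here's a toy model; the 0.8 core-scaling exponent and the IPC factor are assumptions made up to show the shape of the argument, not measured data:

```python
# Toy GPU scaling model: clock frequency scales throughput roughly linearly,
# while added shader cores scale sub-linearly (exponent < 1) because the
# front end and memory subsystem don't grow along with them.

def relative_performance(cores, clock_mhz, ipc=1.0, core_exponent=0.8):
    return ipc * (cores ** core_exponent) * clock_mhz

older = relative_performance(cores=1536, clock_mhz=880)            # 6970-like: more cores, lower clock
newer = relative_performance(cores=1280, clock_mhz=1000, ipc=1.1)  # 7870-like: fewer cores, higher clock, a bit more IPC

print(f"fewer-core, higher-clock part vs the older one: {newer / older:.2f}x")
```

Under these (made-up) assumptions, the higher-clocked part with fewer cores still comes out ahead, which is why raw shader counts alone don't settle the 7870-vs-6970 comparison.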
 
[citation][nom]LaHawzel[/nom]These differences are things that no one would ever notice if tech review sites didn't point them out. Well, not that I mind knowing that it can be fixed with a driver update, but I find it unnecessary for the average gamer to worry about these minor differences with image quality (knowing it's "fixed" is more of a placebo than an actual improvement of gaming experience). Not to mention that the typical gamer plays on 6-bit TN-panel monitors because "HURR 1ms RESPONSE TIME HOLY SHIT BEST SCREEN EVER" and they in turn elect to give up the superior color gamut and viewing angles conferred by IPS panels. They ought to the last ones who deserve to complain about image quality, at any rate.[/citation]
So if you buy a cheap airline ticket, you deserve to only get 90% of the way to your destination? If you buy an inexpensive fast food meal, you shouldn't complain if your fries container is only half full?

[citation][nom]therabiddeer[/nom]Is it just me or is toms heavily biased towards nvidia? We see tons of articles for the Nvidia 6xx but very few for the 7xxx. Nothing negative for nvidia, but an article like this for AMD's, which is already being fixed even though it is undetectable... and the fix doesnt even yield a real change in framerates.[/citation]
Yeah, it's you.
 
[citation][nom]blazorthon[/nom]The 7870 beats the 6970 by a fairly wide margin. The 7870 is almost identical to the 7950 and GTX 580 in stock performance. The 7850 competes with the 6950/6970.The 7870 clearly has better specs if you understand how graphics technology scales in performance. Increasing core count on a GPU increases performance much less than increasing clock frequency and IPC of the cores does. IE, a 20% increase in core count is usually less important than a 20% increase in clock frequency if all is is the same. That the 7970 has fewer cores means little to nothing because GCN has a little more IPC than VLIW4 and the 7870 has a much higher clock frequency than the 6970.This is why the 6950 is nearly identical to a 6970 in performance if it is overclocked to a 6970's clock frequencies without being unlocked and the same is true for most cards, even Radeon 7000.[/citation]

Maybe I am thinking of the 7850. Regardless, we are not talking about benchmarks, we are talking about image quality. Everyone here seems to think image quality should be the same irrespective of hardware. But I say, if you are going to compare the 7800 series, compare it against the series it replaces, the 6800 series.
 
Oh, and the 7870 vs 6970 benchmarks I have seen don't exactly scream "wide margin". At higher resolutions, the 6970 wasn't beaten by a 580 by a wide margin, and the 580 still beats a 7870, regardless of the fact that people compare the two.
 
[citation][nom]buzznut[/nom]Huh, don't know about all of that but thx for the article. I do think its important to bring such things to the vendor's attention and follow up to see if they respond appropriately. Good job![/citation]

I don't know about you, but I think the jacked-up 7800s still looked better than that 680, lol.
 


The 6800 and 6900 have identical image quality. Almost all other card families have identical image quality at the same settings. The 7800 cards had their image quality hampered by a driver mistake that has since been fixed, so they now produce a full-quality picture, and it didn't cost any additional performance to do it, which practically proves it was a mistake.
 


The benchmarks were done with the old beta drivers. The released drivers have since smoothed out the 7870's performance, and it now sits much more consistently right next to the GTX 580 and 7950.
 
Doesn't really matter to me, as I have two 6970s in crossfire. But if I do decide to "upgrade" to a single card solution, it would be to the 7970 as a single 7870 would be a downgrade for me.
 


Dual 6970s beat the 7970 significantly. The 7970 is a little ahead of dual 6870s, maybe equal to dual 6930s. Dual 6950s would beat it slightly, and the GTX 680 is roughly equal to dual 6970 performance.
 
[citation][nom]blazorthon[/nom]Dual 6970 beats the 7970 significantly. The 7970 is a little ahead of dual 6870, maybe equal to dual 6930. Dual 6950s would beat it slightly and the GTX 680 is roughly equal to dual 6970 performance.[/citation]
Doesn't really matter as my heart is set on an upgrade - but I plan to add a second card later.
 
Actually, I have my heart set on a 7990. I am thinking I may have to go with one 7970 and get a second later, which would give me roughly a 7990. That's what I did last year with the 6970, adding another later for 6990 performance. The problem is I don't have a g-note to spend all at one time. So while a 7970 may be a slight downgrade for me, it would lay the groundwork for a very large upgrade in the future.

Do you have any data about how the 7970 scales in dual card solutions compared to the Kepler? I haven't been able to find squat online.
 
[citation][nom]MaxxOmega[/nom]Agreed...Or they say "You got ripped off d00d. My HDTV is way bigger and half the price. " Idiots...I only EVER buy S-IPS Panel Monitors. Avoid that cheap TN crap like the plague...For my 55th birthday in June getting myself 3 x 30 Inch Monitors, woot...Maxx[/citation]
I LOL at all the people who think 60Hz IPS/PVA panels are better than 120Hz TN panels FOR GAMING. Perhaps you don't play games, or your 55yo eyes can't tell the difference? Or maybe you want to justify that $600 was money well spent. Any time there is movement on a 120Hz screen, everything stays so much sharper, and not just in twitch games. I have a Dell U2711 sitting next to a Samsung S27A950, and even with the drop in res, the Samsung kills it for games. Also, why are viewing angles important for monitors when you sit DIRECTLY in front of them? TVs, on the other hand…
 
[citation][nom]Maximus_Delta[/nom]Glad its fixed, I want the best possible IQ so it was important this defect in the drivers was identified, escalated and driven to resolution. Let's hope 12.4 absolutely nails it for the 7000 series (I had to roll back to 12.2 on my CrossFire 7970s but won't go into why here). Cheers[/citation]

IMO, the goal of a GPU is not to render the best individual images (taken separately, one by one, I mean), but to render the best sequence of images.
The filtering algorithm that was modified is there to keep the sequence of images from flickering.
If, with this correction, we've traded flicker reduction for better screenshot quality (was the screenshot taken from a static or a moving scene?), then it is maybe not better than before...
 




Instead of addressing the issue I brought up, you two decided to go after two typos in my post.

Spelling Police are about as pathetic as trolls.
 
[citation][nom]rubberjohnson[/nom]I LOL at all the people who think 60hz IPS/PVA panels are better than 120hz TN panels FOR GAMING, perhaps you don’t play games or your 55yo eyes can’t tell the difference? Or maybe you want to justify that $600 was money well spent. Anytime there is movement on a 120hz screen everything stays so much sharper and not just in twitch games. I have a Dell U2711 sitting next to a Samsung S27A950, even with the drop in res the Samsung kills for games. Also why are viewing angles important for monitors when you sit DIRECTLY in front of them? TVs on the other hand…[/citation]

Maybe you're the one wanting to justify the retarded purchase of a 120Hz monitor.

My monitor is 75Hz, and anything under 30fps I consider lag. I can't believe console gamers put up with it, and some people settle for it.

40fps, not ideal. I can still tell a difference.

50fps, it is starting to get good. Very smooth.

60fps, perfect. Even in twitch shooters like Quake Live and UT2004, it is as smooth as butter.

75fps, I can't tell a difference. Not one single iota. If you were to randomly show them to me, I wouldn't know which is which. 75fps or 60fps. I notice a significant improvement up to 50, a minor improvement up to 60, and after that, NOTHING. Just smoothness.

And you expect me to believe that at DOUBLE the frame rate where I stop noticing improvement, YOU can tell?

You are either a pathetic wannabe elitist liar, or a fool.
 


I think both the 7970 and the 680 scale exceptionally well, at almost 100%. I do not know how good they are with micro-stutter, variable frame rates, etc., but I think they are both more or less free of it, because the faster the two cards in a dual-GPU config are, the less micro-stutter and such there is (Tom's showed this when 580 SLI had less stutter than 570 SLI, which in turn was better than 560 Ti SLI and 560 SLI. They also checked the 6800 and 6900 cards, and those showed improvement too, though a little less. It seems it's not only frame rates that improve with faster graphics cards).

Due to its extra frame buffer memory, the 7970 scales better than the 680 at higher resolutions and such. The 7970 also has the remarkable ability to add a third and even a fourth GPU and still scale excellently, whereas with almost every other generation of cards, adding a third and fourth makes hardly any difference in FPS compared to adding the second. I'm not sure whether the 680 is also good about this, but I do know it would take a 4GB version to test it properly, because 2GB of frame buffer VRAM is not enough for the high resolutions needed to avoid CPU bottlenecks in such a test.
 