Challenging FPS: Testing SLI And CrossFire Using Video Capture

Actually it was Ryan Shrout and Scott Wasson who reported this first (Shrout also uses a capture card; Wasson first reported the frame-time stuff). The moral of the story is that AMD's cards are not running as fast as claimed once runts are omitted. What cracks me up is that it took both Tom's and AnandTech a year and a half to even comment on this. Worse, AnandTech knew this for a year and a half and kept recommending AMD hardware anyway... LOL. Whatever...
 
I'd really like to see AMD improve their drivers. I have a 6870 and I'm very pleased; it runs what I like. Flawlessly? Not really. I can tell the drivers are not as smooth, though the hardware has the power. Reinvent the driver, AMD, like no one ever has. Nvidia, nice work. I will always enjoy my AMD, though. Thanks.
 
I'm just curious if AMD could design an FPS analysis toolkit that could produce the exact opposite results. I would like to see a test performed using an unrelated third party toolkit.
 


Since AMD has confirmed the problem, and has tools similar to Nvidia's, I doubt they'll be motivated to release a version of their own. At the moment, a lot of customers seem desperate to find a reason this is a conspiracy; if AMD released a set of tools that showed the same thing, they'd take away that doubt.
 
Just some perspective: There is no reason to automatically stop recommending AMD cards for single card setups. It is only Crossfire which apparently isn't all it's cracked up to be. Particularly if the user's applications include bitmining, AMD remains the way to go. If we're only talking about games, then check the benchmarks. While these FCAT results make me glad I've never wasted money on Crossfire, it says nothing against (nor for) single AMD cards.
If FCAT were not vendor-neutral, we'd see big dropoffs for AMD single cards vs. FRAPS, but not for nVidia vs. FRAPS, which is not the case.
 
Is it just me or can anyone else see that runt/dropped frames and screen tearing are correlated (and perhaps causal)?
We've tried triple-buffered vsync and frame-rate capping, but what if we simply need full frames to be available and delivered whenever the monitor samples the frame buffer (even if the buffer is refreshing at 120+ times/sec)?
Will we eventually realize that GDDR5 is a bottleneck that can't actually handle an unlimited number of fps, and that's why frames are being "dropped" from the buffer? Neither of the big players really wants to tell us that, because there's just way too much money to be made in selling us a number on our screen.

The author of the article keeps emphasizing that gamers won't visibly notice a difference, all while saying there's a smoothness factor to be experienced when frame latency is minimized.

Maybe Nvidia knows AMD has a huge hardware-architecture problem on its hands and is handing out microscopes to everyone who wants to take a closer look. There might be a simple software fix, but applying it could drop frame rates to ~90 fps and reduce their most "powerful" card's performance to that of a 9800 GTX equivalent.

Kinda gutsy for AMD to release their 7990s while being fully aware of this issue.
 


I wondered, since it looks like Vsync mostly fixes the issue.

I use Afterburner's limiter in every game I play to maintain smoothness.
 
[citation][nom]Cpu NumberOne[/nom]Again only two cards, but combining the results does paint a better picture.Also, it seem with v-sync enabled Crossfire is THE way to go.[/citation]

With v-sync enabled, they are comparable, at least that is what I saw, and if you use v-sync, Nvidia does offer a few different options that AMD doesn't.
 


That's right, because they are using FCAT, an Nvidia-made benchmarking tool... given the full list of notorious acts Nvidia has committed in the past, cheating would be a piece of cake for them.
 
[citation][nom]Onus[/nom]Just some perspective: There is no reason to automatically stop recommending AMD cards for single card setups. It is only Crossfire which apparently isn't all it's cracked up to be. Particularly if the user's applications include bitmining, AMD remains the way to go. If we're only talking about games, then check the benchmarks. While these FCAT results make me glad I've never wasted money on Crossfire, it says nothing against (nor for) single AMD cards.If FCAT were not vendor-neutral, we'd see big dropoffs for AMD single cards vs. FRAPS, but not for nVidia vs. FRAPS, which is not the case.[/citation]

Well said. For the sake of thoroughness, here's a quote summarizing the comparison between high-end single-card configurations, from PC Perspective's recent article, "Frame Rating Dissected":

"The overall picture comparing the two cards indicates that the AMD Radeon HD 7970 GHz Edition is a faster card for gaming at 1920x1080, 2560x1440 and 5760x1080 triple-monitor resolutions. In Battlefield 3 the performance gap between the HD 7970 and GTX 680 was small at 19x10 and 25x14 but expanded to a larger margin at 57x10 (19%). AMD’s HD 7970 also shows less frame to frame variance in the BF3 than the GTX 680. This same pattern is seen in Crysis 3 as well, though at 5760x1080 we are only getting frame rates of 13 and 16 on average, getting the HD 7970 a 23% advantage.

DiRT 3 performed very well on both cards even at the 5760x1080 resolution though AMD’s HD 7970 maintained a small advantage. Far Cry 3 was much more varied with the GTX 680 taking the lead at 1920x1080 (20%) but at 2560x1440 and 5760x1080 the cards change places giving the HD 7970 the lead. Skyrim was another game that saw small performance leads for AMD at higher resolutions though I did find there to be less frame time variance on the GTX 680 system which provided a better overall experience for game that can run on most discrete GPUs on the market today.

Finally, one of the newest games to our test suite, Sleeping Dogs, the AMD Radeon HD 7970 holds a sizeable advantage across the board of the three tested resolutions. The margins are 34% at 1920x1080, 37% at 2560x1440 and 23% when using triple displays.

While some people might have assumed that this new testing methodology would paint a prettier picture of NVIDIA’s current GPU lineup across the board (due to its involvement in some tools), with single card configurations nothing much is changing in how we view these comparisons. The Radeon HD 7970 GHz Edition and its 3GB frame buffer is still a faster graphics card than a stock GeForce GTX 680 2GB GPU.
In my testing there was only a couple of instances in which the experience on the GTX 680 was faster or smoother than the HD 7970 at 1920x1080, 2560x1440 or even 5760x1080."


I've bolded the most important part, IMO, for those people who would use this frame-latency issue to flog AMD's products, and for those people who want to impugn the results because NVIDIA developed some of the testing software.

Given that even DIY desktop-computer builders don't typically bother with multi-GPU configurations, we should all keep in mind that, although articles like this one are interesting, their conclusions as of this moment aren't all that important in the grand scheme of things. The testing methodology is important; that's about all we can say for sure.
 
Given that even DIY desktop-computer builders don't typically bother with multi-GPU configurations, we should all keep in mind that, although articles like this one are interesting, their conclusions as of this moment aren't all that important in the grand scheme of things.

I have to disagree there. Back "in the day" I would have agreed, but not in this era, especially starting with the Nvidia 5xx / AMD 6xxx series. Every single build I have done for a user in the last 2+ years has contained an SLI/CF-capable motherboard. I can only remember one in the last two years that didn't contain a PSU sized for that second card. Over a third of these builds started with two cards in the original build from the get-go.

Case in point, one very close to home:

Son No. 2's box has a GTX 580
Son No. 3's box has twin GTX 560 Ti's

Son No. 3's box averages 40% faster with 80% of the graphics-card budget. One SLI issue to date, which required disabling SLI for about two weeks; it was later solved with a game fix.

With the cost premium between the top card and the ones one or two notches down growing and growing, and the difference in performance shrinking (e.g. 670 vs. 680), two mid-range cards have become more and more appealing.

 


Absolutely. I am chomping at the bit to upgrade to a GTX 670, but I can't bring myself to pay almost 100% more than a GTX 660 for 30% more performance... So I'm waiting for the GTX 760, which really should hit at less than $250. But SLIing two 2GB GTX 650 Ti Boosts is tempting for the price.
 
[citation][nom]bystander[/nom]Great article.But as great as the review is, I feel one thing that review sites have dropped the ball on is the lack of v-sync comparisons. A lot of people play with v-sync, and while a 60hz monitor is going to limit what you can test, you could get a 120hz or 144hz monitor and see how they behave with v-sync on. And the toughest thing of all, is how can microstutter be more accurately quantified. Not counting the runt frames gives a more accurate representation of FPS, but does not quantify microstutter that may be happening as a result.It seems the more info we get, the more questions I have.[/citation]

AnandTech has a much more in-depth series on this topic that also addresses your concerns, which leads me to think Tom's left out a little too much crucial info in this first article.
 


I've been checking their site, and I have not yet seen one with FCAT and v-sync. They talked about the new tool in their Part 1, but I have yet to see results. Do you mind posting the link you are talking about?
 
[citation][nom]bystander[/nom]Great article.But as great as the review is, I feel one thing that review sites have dropped the ball on is the lack of v-sync comparisons. A lot of people play with v-sync, and while a 60hz monitor is going to limit what you can test, you could get a 120hz or 144hz monitor and see how they behave with v-sync on. And the toughest thing of all, is how can microstutter be more accurately quantified. Not counting the runt frames gives a more accurate representation of FPS, but does not quantify microstutter that may be happening as a result.It seems the more info we get, the more questions I have.[/citation]
No real gamer plays with v-sync; v-sync makes your movement lag, and it's super noticeable no matter what card you have. This is one thing AMD also said. For now I'm looking at Nvidia cards.
 
[citation][nom]JackNaylorPE[/nom]I have to disagree there. Back "in the day" I would have agreed but, at least in this era.....especially starting with the nVidia 5xx / AMD 6xxx series. I can say that every single build I have done for a user in the last 2 + years has contained an SLI / CF capable MoBo. I can only remember one in the last two years that didn't contain a PSU sized for that 2nd card. Over a third of these builds started with 2 cards in the original build from the get-go.Case in Point .... one very close to homeSon No. 2's box has an GTX 580Son No. 3's box has twin GTX 560 Ti'sSon No.3 's box averages 40% faster with 80% of the GFX Card budget.....one SLI issue to date which required disabling SLI for about 2 weeks ..... later solved with a game fix.With the cost premium between the top card an the ones that one or two notches down growing and growing, and the difference in performance shrinking (i.e. 670 vs 680), the appeal of two mid range cards has become more attractive.[/citation]
All of that is great for you, and I agree with you in principle: SLI is more attractive now than it ever has been.

But these tests aren't even relevant to the vast bulk of the GPU market. Anyone using these tests to proclaim that AMD makes an inferior GPU isn't paying enough attention. All we can surmise from frame-latency testing, so far, is that AMD makes an inferior dual-graphics solution as of this moment. We don't even know that Crossfire has always been inferior, based on what I've seen.
 

There are millions that use v-sync, but I'd agree that most professional online gamers do not.
 
I am very suspicious about the "runt" frame cutoff of 21 lines. Why this arbitrary value? I suspect it's because it highlighted an AMD shortcoming.

What about less-than-completely-rendered frames? Any frame that is not rendered completely ends up being visually merged with the frame before it and is difficult to see, but from a performance monitoring viewpoint, it is the same as a full frame.

In the series of tests that were run, Nvidia could have a greater number of 22-line frames, just over the cutoff, and not be penalized for them, while AMD has 21-line runts and is penalized.

I like the idea of the FCAT toolkit, but I suggest we need to look at 100% complete frame rendering, along with the syncing issues discussed in the other comments.
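
Just to make that concern concrete, here's a rough sketch, not FCAT's actual code, of how a hard scanline cutoff sorts frames into "runts" and counted frames, and how the tally shifts as the threshold moves. The frame heights are made-up numbers; the real tool derives each frame's on-screen height from the colored overlay bars it extracts from the captured video.

[code]
# Hypothetical per-frame on-screen heights, in scanlines, for one capture run
heights = [540, 19, 510, 22, 488, 3, 530, 21]

def count_runts(frame_heights, cutoff=21):
    """Treat any frame of `cutoff` scanlines or fewer as a runt."""
    return sum(1 for h in frame_heights if h <= cutoff)

for cutoff in (21, 25, 40):
    runts = count_runts(heights, cutoff)
    kept = len(heights) - runts
    print(f"cutoff={cutoff:>2} lines: {runts} runts dropped, {kept} frames still counted")
[/code]

With these numbers, a 22-line sliver survives the default cutoff while a 21-line one doesn't, which is exactly the kind of edge case worth checking across different threshold heights.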
 


The goal is to have evenly sized frames. If your FPS goes past your refresh rate, you may never get a full-screen image, but each partial image represents a different point in time. However, a sliver 20 or 21 pixels high is only about 2% of the screen, so small it's hard to even know it's there. They did mention they'll be trying different heights, but I'd say 2% is being generous, personally.
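
For what it's worth, the arithmetic behind that roughly-2% figure, assuming vertical resolutions of 1080 and 1440 lines:

[code]
# Share of the screen a 21-scanline sliver occupies
print(21 / 1080)  # ~0.019, roughly 2% of a 1920x1080 screen
print(21 / 1440)  # ~0.015, about 1.5% at 2560x1440
[/code]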
 