Challenging FPS: Testing SLI And CrossFire Using Video Capture



Max framerate 180 - Min framerate 80 = Difference 100
Min framerate 80 < Difference 100 = game unplayable?
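
If I'm reading the rule right, it reduces to a one-line check. Here's a minimal sketch of my reading of it (the function name and the comparison are mine, not anything official from the article):

```cpp
#include <iostream>

// Hypothetical helper: flags a result as suspect when the spread between
// max and min FPS exceeds the minimum itself (my reading of the question above).
bool spreadExceedsMinimum(double minFps, double maxFps) {
    double difference = maxFps - minFps;   // 180 - 80 = 100 in the example above
    return difference > minFps;            // 100 > 80, so the spread dominates
}

int main() {
    std::cout << std::boolalpha
              << spreadExceedsMinimum(80.0, 180.0) << '\n';  // prints "true"
}
```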

Just checking.



 

This. Read the articles; considering how and where this tool works, it is vendor-neutral. Right now, it makes clear that two-card setups should NOT be AMD, so yes, nVidia's marketing guys are drooling and AMD's are soiling themselves, but the end result should be that engineers from both companies will be able to improve their products.
 

tomfreak

Does this tool work on a single card? Perhaps we should bench the smoothness of two single-GPU cards from different vendors and see the difference.

I am not planning to get a multi-GPU setup anyway.
 
[citation][nom]ubercake[/nom]I can see it in the case where framerates exceed the refresh rate of the monitor, because there is something new to draw to the buffer before the refresh interval arrives. But in the case where framerates are lower than the monitor's refresh, if there's nothing new to draw during a particular refresh interval, wouldn't the monitor just keep the current frame on the screen and wait to draw the next frame when the buffer makes it available (or "flips") and the monitor's refresh interval occurs? This seems consistent with why tearing is uncommonly seen at framerates lower than monitor refresh rates.[/citation]
Because there isn't something new to draw every refresh, you can't get tearing as often. If the video card flips the image midway through a refresh you see a tear, but the next refresh will show a full image, until the video card flips the image again.

By the way, don't think the term "flip" means it is instantaneous either; it is a copy operation, but a very fast one.

So yes, when your FPS is lower you see less tearing, but there is still tearing.
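
To picture why a mid-refresh flip shows up as a tear, here's a toy model of a single refresh (entirely my own simplification; the scanline count and the flip point are made-up numbers, not measurements): the display reads scanlines top to bottom, and if the source image changes partway through, the top of the screen comes from the old frame and the bottom from the new one.

```cpp
#include <cstdio>

int main() {
    const int scanlines = 1080;        // lines the display reads per refresh
    const int flipAtLine = 400;        // hypothetical point where the GPU flips mid-refresh
    int frameShown[2] = {0, 0};        // how many lines of the old vs new frame end up on screen

    for (int line = 0; line < scanlines; ++line) {
        // Before the flip the display is still reading the old frame;
        // after it, the remaining lines come from the new frame.
        frameShown[line < flipAtLine ? 0 : 1]++;
    }

    // Both counters are non-zero, i.e. the refresh mixes two frames: a tear.
    std::printf("old frame: %d lines, new frame: %d lines -> tear at line %d\n",
                frameShown[0], frameShown[1], flipAtLine);
}
```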
 




I thought the flip was the point when buffer 0 "becomes" buffer 1 and buffer 1 "becomes" buffer 0 (given most modern cards use double-buffering by default)? Back buffer is renamed front and front renamed back?

With framerates lower than the refresh rate, it would seem data are already fully written to the back buffer before it "becomes" the front buffer?

I found an article that explains this pretty well:
http://www.anandtech.com/show/2794/2
 
[citation][nom]ubercake[/nom]I thought the flip was the point when buffer 0 "becomes" buffer 1 and buffer 1 "becomes" buffer 0 (given most modern cards use double-buffering by default)?[/citation]
That depends on what you are labeling buffer 0. The display buffer, at least a long time ago, did not change, and I'm pretty sure I've read recently that it still doesn't. However, the buffers used for rendering can. In a double-buffering system, where one buffer is for the display and the other is used for rendering, a flip is actually a copy. But in triple buffering, where two buffers are used by the GPU and one by the display, a flip from one of the GPU rendering buffers to the next is simply a matter of pointing to the new buffer.
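
In rough C++-ish terms, the two mechanisms I'm describing look something like this. It's only a sketch of the idea; the types and function names are made up, and real drivers do this in hardware:

```cpp
#include <cstring>
#include <vector>

using Buffer = std::vector<unsigned char>;  // stand-in for a framebuffer

// Double buffering, copy-style "flip": the display buffer never changes;
// the finished back buffer is blitted into it (buffers assumed the same size).
void flipByCopy(Buffer& displayBuffer, const Buffer& backBuffer) {
    std::memcpy(displayBuffer.data(), backBuffer.data(), backBuffer.size());
}

// Triple-buffering-style flip: nothing is copied, we just change which
// buffer the scanout pointer refers to next.
void flipByPointer(Buffer*& scanoutPtr, Buffer& finishedRenderBuffer) {
    scanoutPtr = &finishedRenderBuffer;
}

int main() {
    Buffer display(4), back(4, 255), render(4, 128);
    Buffer* scanout = &display;
    flipByCopy(display, back);      // pixels move, the scanout target stays the same
    flipByPointer(scanout, render); // no pixels move, the scanout target changes
}
```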

I hope that was clear. Of course, it is possible I'm working off ancient knowledge, as I haven't done this type of coding in many years, but there would be a few obstacles to overcome and some question marks. (If you flipped to a new buffer by changing the pointer, why wouldn't the display just continue updating from its original buffer, preventing tearing? And how do they stop the refresh of the screen and reset the pointer to the correct offset in the new buffer?)

EDIT: I have read three different articles, with two different answers. One says it just swaps which buffer is considered the front buffer; two say it copies the back buffer to the front buffer. Based on my ancient experience, it is the latter, but I'm not certain.
 

ibjeepr



Very interesting read. While it doesn't change my mind about buying my HD 7950, I also wasn't planning on buying a second one for Xfire. I will say that with the changes I expect from both AMD and Nvidia to fix or simply refine this issue, I may already be looking at a new card come the next generation.
 


Thanks again for the info. I updated my post with a link that does a good job of explaining things within the context of triple buffering. It's really good if you haven't seen it yet.
 

That is the one that says it swaps, while I read two others that say it copies. From my experience, it is the latter, but it's been long enough that things could have changed.

I could easily see Anandtech getting it wrong, since regardless of whether it is a copy or a swap, we call it a swap.

Now I've found a couple more, one for each. I wonder if there are multiple ways of doing this, though I'd have to assume the swap method would be preferred if it works.
 

The over-reporting makes sense, as Fraps gets its data at the start of the pipeline. It does not know that some frames only partially make it to the screen, or don't make it at all.
 


The swap seems to make sense since no data has to move.
 

ojas

[citation][nom]nukemaster[/nom]The over-reporting makes sense, as Fraps gets its data at the start of the pipeline. It does not know that some frames only partially make it to the screen, or don't make it at all.[/citation]
True, but go to the second or third page and read Don's comment in reply to mine. He says:
1. We don't know if it does for Nvidia.
2. It shouldn't, because FCAT doesn't show variance.

Look at the TR chart again. The difference between the single GPUs is the same whether you use FCAT or FRAPS, i.e. 1.8 fps; both cards gain 2 fps under FRAPS.

It's almost the same with SLI/CF: the difference is 3.9 vs. 3.5 fps, which doesn't change the order.

What I'm saying is, unless there's evidence to the contrary, the difference between FCAT's "hardware" numbers and FRAPS might amount to nothing really when you're looking at average frame rates.

It's only when FCAT filters the raw data that it actually makes a difference, so I think reviewers should continue to provide FRAPS data, as it'll serve as a point of comparison with FCAT's hardware results for everyone without FCAT (which is most of us, really).
 
I think FRAPS will remain a usable but extremely coarse measurement tool. Its value for comparisons will really only be between single cards, possibly on the same drivers (i.e. one AMD card vs. another, or one nVidia card vs. another). Since it does not reflect what takes place further down the pipeline, it will be essentially useless for AMD vs. nVidia comparisons (except in extreme cases), and useless for multi-GPU setups.
 
[citation][nom]ubercake[/nom]The swap seems to make sense since no data has to move.[/citation]
I won't argue that it would be the fastest, but it doesn't necessarily make the most sense, because you are now moving the monitor's read location in the middle of updating the screen. That said, I suppose Windows could do that, but I don't know how disruptive it would be to the monitor.
 


It seems like the swap would just be between references to the frame data. For example, buffer 0 (back) is now called buffer 1 (front), the monitor picks up buffer 1's data on refresh, and the old buffer 1 is renamed buffer 0 and is now being written to as the back buffer. I'm sure this is somewhat simplified, since buffer 0 and buffer 1 can't swap reference names simultaneously; I'm thinking the back buffer may change names while the front buffer keeps its name.

This way, the monitor always goes after the front buffer's data, so it wouldn't have to reference anything but the front buffer.

It would be interesting if Tom's could do an article on this.
 
@ubercake

You should read this: http://en.wikipedia.org/wiki/Multiple_buffering

This one explains two different methods: one that changes the pointer, and one that copies. One thing that caught my attention is that the flip method can only be done during the vertical retrace, meaning it is only an option if you use v-sync. The copy method has the advantage of being able to happen during the refresh.

This would work around the issue I was concerned with about flipping during a refresh, since the flip method (as explained there, at least) only works during the vertical retrace, which is what v-sync requires.

Look under the "Double buffering in computers" and "Page flipping" headings; they explain things pretty well.
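
Here's a little sketch of those two methods as I understand them from that page (the names and structure are mine, and it's a toy model rather than how any real driver is written): the flip just repoints the scanout and is only allowed during the vertical retrace, while the copy moves every pixel and can happen at any time.

```cpp
#include <algorithm>
#include <vector>

using Buffer = std::vector<unsigned char>;  // stand-in for a framebuffer

struct Display {
    Buffer* scanout;          // where the monitor reads from
    bool inVerticalRetrace;   // true only in the gap between refreshes
};

// Page-flip method: only allowed while the display is in the vertical
// retrace, which is why it goes hand in hand with v-sync.
bool tryPageFlip(Display& d, Buffer& backBuffer) {
    if (!d.inVerticalRetrace)
        return false;            // not in vblank yet, keep waiting
    d.scanout = &backBuffer;     // just repoint the scanout; no pixels move
    return true;
}

// Copy (blit) method: can run at any point in the refresh, at the cost of
// moving every pixel, and of a possible tear if it lands mid-scanout.
void copyToFront(Display& d, const Buffer& backBuffer) {
    std::copy(backBuffer.begin(), backBuffer.end(), d.scanout->begin());
}

int main() {
    Buffer front(4), back(4, 200);
    Display d{&front, false};
    tryPageFlip(d, back);       // fails: we're mid-refresh
    copyToFront(d, back);       // always allowed; this is the blit path
    d.inVerticalRetrace = true;
    tryPageFlip(d, back);       // succeeds during the vertical retrace
}
```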
 

Cpu NumberOne

[citation][nom]bystander[/nom]I'm not really sure it matters. The focus of the article is how Crossfire and SLI are performing, and it does a good job of showing that. pcper.com has a more in-depth picture, but THG does plan to give us more soon. It sounds like they had a system setup problem that caused them to lose a lot of time and data.[/citation]
And who is to say that more powerful Radeons will NOT do better IN THIS EXACT test?

Comparing JUST two Radeon cards with JUST two NVidia cards does NOT prove anything when it comes to an overall comparison between CrossFire and SLI. People just like to get all excited and argue about something they really just want to argue about.
 
The Anandtech article indicates that even the AMD engineers have realized there is a problem (they'd never looked for it before), and they are taking steps to fix it. This strongly suggests to me that, at least for now, it is indeed a universal problem with Crossfire.
 
This was one of the most interesting parts of the PC Perspective article. Essentially, when they cleaned up the runt frames, the resulting Observed FPS showed NO benefit from adding a second 7970 GHz in Crossfire in this example.
AMD CrossFire configurations have a tendency to produce a lot of runt frames, in many cases in a nearly perfect alternating pattern. Not only does this mean that frame time variance will be high, but it also tells me that the performance gained by adding a second GPU is completely useless in this case. Obviously the story then becomes, "In Battlefield 3, does it even make sense to use a CrossFire configuration?" My answer, based on the graph below, would be no.
http://www.pcper.com/reviews/Graphics-Cards/Frame-Rating-Dissected-Full-Details-Capture-based-Graphics-Performance-Test-3
[Chart: BF3_2560x1440_FRAPSFPS.png]

[Chart: BF3_2560x1440_OFPS.png]
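
For anyone curious what "cleaning up the runt frames" amounts to, here's a rough sketch of the idea as I read the PC Perspective write-up (the scanline threshold and the function are my own guess at it, not their exact method): frames that cover only a sliver of the screen simply aren't counted toward the observed FPS.

```cpp
#include <iostream>
#include <vector>

// A captured frame's on-screen height in scanlines, as an FCAT-style analysis sees it.
struct CapturedFrame { int scanlines; };

// Hypothetical sketch: drop "runt" frames (ones covering only a sliver of the
// screen) before counting an observed FPS over a one-second capture window.
double observedFps(const std::vector<CapturedFrame>& framesInOneSecond,
                   int runtThresholdScanlines = 21) {  // threshold is my guess, not PCPer's exact value
    int counted = 0;
    for (const auto& f : framesInOneSecond)
        if (f.scanlines >= runtThresholdScanlines)
            ++counted;                 // runts are simply not counted as delivered frames
    return static_cast<double>(counted);
}

int main() {
    // Alternating full/runt pattern like the CrossFire example: 120 "frames" in
    // a second, but every other one is a runt, so the observed rate is roughly half.
    std::vector<CapturedFrame> second;
    for (int i = 0; i < 120; ++i)
        second.push_back({i % 2 == 0 ? 500 : 5});
    std::cout << "FRAPS-style count: " << second.size()
              << ", observed FPS: " << observedFps(second) << '\n';  // 120 vs 60
}
```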
 


Which makes me wonder if the Battlefield 4 demo that AMD won't stop tooting their HD 7990-shaped horn about would run at the same framerate with a single HD 7970...

Or maybe just slightly better, because they definitely have Vsync on in the video. Something like this:

[Chart: BF3_2560x1440_OFPS_1.png]
 
[citation][nom]Cpu NumberOne[/nom]And who is to say that more powerful Radeons will NOT do better IN THIS EXACT test?

Comparing JUST two Radeon cards with JUST two NVidia cards does NOT prove anything when it comes to an overall comparison between CrossFire and SLI. People just like to get all excited and argue about something they really just want to argue about.[/citation]

TechReport: http://www.pcper.com/reviews/Graphics-Cards/Frame-Rating-Dissected-Full-Details-Capture-based-Graphics-Performance-Testin?page=2#comments

Pcper: http://www.pcper.com/reviews/Graphics-Cards/Frame-Rating-Dissected-Full-Details-Capture-based-Graphics-Performance-Testin?page=2#comments
 