AMD Dual Graphics Analysis: Better Benchmarks; Same Experience?

Page 4
Status
Not open for further replies.

cRACKmONKEY421

Distinguished
Dec 27, 2010
78
0
18,630
Awesome article. I loved the videos that really show the issue. Maybe Nvidia can come out with an FPS-measuring device that sits on the DVI port instead of software like FCAT. It really feels like AMD is trying to pull the wool over everyone's eyes with this. I guess FPS is no longer enough to convey playability on its own. I'd like to see an industry standard for quantifying the stuttering that takes place within a second. I'm not sure what that would look like, but ideally it would make the numbers match the experience, at least until someone figures out how to fudge it a bit. Measuring frame variance is one way, but it's difficult to say that x FPS with y frame variance feels like z FPS with near-zero frame variance. Figuring out that relationship is really the key to determining which option feels best during gameplay. Until we have a better way to quantify the experience, I think it's safe to judge by those videos. It looks like AMD Dual Graphics sometimes adds nearly double the frames, but it adds them in a way that does not help the user experience at all.

Faster RAM would help the APU graphics, but not nearly enough to disqualify this article. Bottom line, AMD Dual Graphics doesn't seem to benefit the gaming experience nearly as much (if at all) as AMD makes it seem.
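The "frame variance" idea above is easy to make concrete. Here is a minimal, illustrative sketch (not any industry standard) of computing average FPS, frame-time variance, and a worst-case percentile from a list of per-frame render times, as tools like FRAPS record them; the sample values are made up to show two runs with the same average FPS but very different smoothness:

```python
# Illustrative sketch (not an industry standard): quantify intra-second
# stutter from per-frame render times in milliseconds, as tools like
# FRAPS record them. Shows why average FPS alone can hide a bad experience.

def stutter_metrics(frame_times_ms):
    n = len(frame_times_ms)
    avg_fps = n / (sum(frame_times_ms) / 1000.0)     # frames per second
    mean = sum(frame_times_ms) / n
    variance = sum((t - mean) ** 2 for t in frame_times_ms) / n
    p99 = sorted(frame_times_ms)[int(n * 0.99) - 1]  # ~99th-percentile frame time
    return avg_fps, variance, p99

smooth = [16.7] * 60            # every frame ~16.7 ms (about 60 FPS)
stutter = [8.35, 25.05] * 30    # same average, alternating fast/slow frames

fps_a, var_a, p99_a = stutter_metrics(smooth)
fps_b, var_b, p99_b = stutter_metrics(stutter)
print(fps_a, fps_b)   # nearly identical average FPS...
print(var_a, var_b)   # ...but near-zero variance vs. a large one
```

The two runs report the same FPS, yet the second alternates fast and slow frames, which is exactly the kind of stutter the videos make visible.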
 

Sam Bittermann

Honorable
Aug 15, 2013
37
0
10,530
"AMD Gamer Series Memory, 2 x 4 GB, 1866 MT/s, CL 13-13-13-34"

No offense, but that has to be the slowest 1866 MHz memory I have ever seen. My 1866 MHz G.Skill runs stock at 9-10-9-28. I don't know whether that could affect the benchmarks at all, but yikes!
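The gap is easy to put in nanoseconds with a back-of-the-envelope sketch (illustrative only, ignoring every timing except CAS): first-word latency is roughly the CL cycle count divided by the memory clock, which is half the MT/s transfer rate.

```python
# Back-of-the-envelope only: first-word CAS latency in nanoseconds is
# roughly CL cycles divided by the memory clock (half the MT/s rate).
# It ignores all other timings, so treat it as a rough comparison.

def cas_latency_ns(mt_per_s, cl):
    clock_mhz = mt_per_s / 2       # DDR transfers twice per clock
    return cl / clock_mhz * 1000   # cycles / MHz -> nanoseconds

print(round(cas_latency_ns(1866, 13), 2))  # review kit (CL 13) -> 13.93
print(round(cas_latency_ns(1866, 9), 2))   # tighter CL 9 kit  -> 9.65
```

So the CL 13 kit needs roughly 4 ns longer per first access, a real but modest difference next to the bandwidth question the article focuses on.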
 

drbaltazar

Distinguished
May 21, 2010
53
0
18,630
@tomshardware: First, the issue you refer to is caused at the Microsoft OS end (if it's the same issue I had). Why? Basically, Windows cannot keep up with the hardware. (I'm not sure about PCIe 3.0, because a new mechanism appeared with PCIe 2.2.) Anyway, go into bcdedit and have AMD (or Intel, if that's what you use) tell you how to adjust everything for the hardware; where I ran into the problem was a hybrid setup, an Intel CPU with an AMD GPU. Basically, Microsoft tries to make users believe the OS can keep up. Before PCIe 2.2, don't sweat it, it couldn't, and with the new mode available after 2.2 it is highly likely worse. Use the hardware instead of the OS for most bcdedit settings (you might need to adjust the BIOS) and this should fix your issue. The keyword is timing!
 

loops

Distinguished
Jan 6, 2012
801
0
19,010
That is just not right?

That is funny, because I felt the same way when I bought my second GTX 560 for Battlefield 3. SLI was a feature that was promoted, and I bought it hook, line, and sinker. No one from the green team, or Tom's at the time, said anything about the lack of VRAM on the 560s making the pair unplayable at maxed settings. On paper it looked like I had a GTX 580, but yeah, it sucked. I talked about it, but it was not until a lot later that others started to highlight the issue like I did.

I sold the two cards and went with a 7870 XT, and all is fine now. I'd SLI again, but this time I'd go in with my eyes a bit more wide open. Point is, neither team sells by pointing out the weak points in its products.
 

Untruest

Distinguished
Jul 15, 2004
123
0
18,680
InvalidError, it seems to be unanimous that the problem discussed in this article can be resolved in software. 13.8 Beta 1 has helped with CrossFire discrete cards and dual-GPU cards. Here's an article for you to read: http://www.anandtech.com/show/7195/amd-frame-pacing-explorer-cat138

I agree with you, however, that further hardware changes could potentially improve performance in the future. After all, it is possible to create a hardware component that schedules frames much the same way the software would, but with less latency.

 

curtisgolen

Distinguished
Sep 30, 2010
6
0
18,510
I am surprised that they did not look into the choppiness. Just today, 8/15/2013, I finished setting up a 6400K and a DDR3 6570. When testing the computer I noticed a lot of chop in anything graphical: the FPS was bouncing around from 8 to 60 in 3DMark. I looked into the issue, overclocked the CPU to 4.9 GHz, the APU's GPU to 1.1 GHz, and the 6570 to 900/1000, and it was better but still choppy.

I recorded the 6570's GPU clock while testing and saw that it was dropping to 2D speeds. I modified the profile to keep the clocks from dropping, and the choppiness is gone.

With the overclocked system I now get almost 7770-level performance from a low-cost solution.
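Spotting those 2D-speed dips in a recorded clock log is simple to automate. A hypothetical sketch (the 600 MHz threshold and sample values are made up; a real log would come from a monitoring tool, not this script):

```python
# Hypothetical sketch: flag samples where a recorded GPU clock log fell
# to 2D speeds. The 600 MHz threshold and the sample values are made up;
# a real log would come from a monitoring tool, not from this script.

IDLE_3D_THRESHOLD_MHZ = 600  # assumption: below this = 2D/idle clock

def find_clock_drops(clock_log_mhz):
    """Return (sample_index, clock) pairs where the GPU hit 2D speeds."""
    return [(i, mhz) for i, mhz in enumerate(clock_log_mhz)
            if mhz < IDLE_3D_THRESHOLD_MHZ]

log = [900, 900, 300, 900, 300, 900]   # hypothetical 1 s samples mid-benchmark
print(find_clock_drops(log))           # -> [(2, 300), (4, 300)]
```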
 

yannigr

Distinguished
Oct 3, 2008
140
0
18,680
rmpumper

I took that RETARDED combination from your post.
Do you call yourself retarded?
Well I can't argue with that. You do know yourself better than me.
 
You would think that after having the dropped-and-runt-frame issue with standard CrossFire (two GPUs connected), they would check whether the issue flowed over to their newer designs. Another slap on the wrist for AMD, which just got paddled for its CrossFire issue.
 

none12345

Distinguished
Apr 27, 2013
431
2
18,785
This is depressing; you would think that after all this time, multiple GPUs would work together. Even if you don't get a speed-up, you should never get a slowdown at this point.
 

axefire0

Distinguished
Feb 1, 2011
21
0
18,510
Having been in experimental design for decades, I know it is easy to reach the wrong conclusion by choosing the wrong design parameters. Conclusions should therefore be confined to the parameters set by the test design, e.g., 1080p, the Catalyst 13.6 driver, and the specific games tested.
 

joezkg

Honorable
Oct 8, 2012
50
0
10,630
It has changed a little from the previous generations of APUs. When Llano first came out, I read a lot about its negative points, but now that the line is in its third generation, congrats! AMD made a great effort in just a couple of years, making this third generation of APUs better than before.
 

ANTI GOOGLI

Honorable
Jul 5, 2013
4
0
10,510
When it comes to AMD, a lot of websites like this one like to bash AMD in favor of Intel, for example. If I were interested in this kind of hardware, I would look somewhere else, like forums, etc.

After the FX-8350 scandal, with big differences in benchmarks from website to website, I do not believe in this anymore.
 

HKILLER

Honorable
Jan 8, 2013
85
0
10,640
One thing that bothers me is that AMD motherboards lack PCIe 3.0, so I would buy an APU and then pair it in Dual Graphics with a bottlenecked HD 7750. It's a waste, even in terms of performance, when I can't use 100% of my graphics card!

I'm not trying to bash AMD, but the fact that I had to wait two months until I could get an AMD motherboard with PCIe 3.0 (the Asus Sabertooth 990FX Gen3, and good luck buying one!) has really bothered me. When Asus released that motherboard, I thought other companies would release their own boards with that feature within a month, but that didn't happen. Why is that? I have no idea!
 

ojas

Distinguished
Feb 25, 2011
2,924
0
20,810
@ddpruitt
I think you're missing the point. There are many benchmarks indicating that the APUs are very sensitive to memory bandwidth, with double the framerate in some cases. The reason a frame is dropped or becomes a runt is that the second GPU, in this case a more powerful discrete GPU, has a frame ready before the first one is done. If the APU is better able to keep up, it may push out more complete frames. Since it wasn't tested, we don't know whether this is the case; however, we can't dismiss it out of hand. Higher-end graphics also suffer from runt frames, but they are pushed much harder than this configuration. More importantly, 7900s run equal GPUs and equal memory. They aren't as unbalanced as APU + GPU (or DDR3 + GDDR5); we just want to try to even out the playing field a little.

Here's a case where we would like additional testing to disprove said hypothesis; like you said, science, not magic or guessing. Science and engineering involve testing, refining, and careful thought; clearly you don't understand the basic premise of either.
Hello random person on the internet! Thanks for the mild personal attack.

Anyway. So let's start with the careful thought.

See, I did think of the scenario in which more bandwidth = more frames = the APU being able to keep up. However, I'll point to these four pages from three different articles (testing):
http://www.tomshardware.com/reviews/a10-6700-a10-6800k-richland-review,3528-5.html
http://www.tomshardware.com/reviews/memory-bandwidth-scaling-trinity,3419-4.html
http://www.tomshardware.com/reviews/memory-bandwidth-scaling-trinity,3419-5.html
http://www.tomshardware.com/reviews/memory-bandwidth-scaling-trinity,3419-7.html

And specifically these three charts from page 5 of the memory bandwidth scaling article:
[Charts: BF3 frame times at 720p, 900p, and 1080p]


So what conclusions can we draw from this?

1) Memory bandwidth can unlock a lot more performance (22%) and push frames faster, and from the Richland review it would appear that the 6800K was achieving lower latency on its own and in dual graphics mode, especially with respect to this article.

2) However, as we note here, there's a difference between anti-aliasing modes that should put some additional pressure on VRAM and the GPU, perhaps causing the Dual Graphics config to fall apart. Here, simply "MOAR SPEEDY RAM" wouldn't have helped much. I'll come back to this in a bit.

3) Also note the results in this article, where Skyrim appeared to show low latency as calculated from FRAPS-recorded frame times, yet on screen there was a lot of stutter.

4) Look at the BF3 charts. Frame times remain similar and equally stuttery throughout, with some apparent improvement going above DDR3-1600. However, as you turn up the resolution, the frequency and magnitude of the stuttering become more alike between different memory configs. Also, the performance gain is largest at 720p and smallest at 1080p, suggesting that memory bandwidth can't alleviate issues caused by an anemic GPU. Similar arguments have been made against the Xbone.

5) You talk of careful thought, so tell me this: if a balanced GPU setup with almost identical memory bandwidth (as identical as physically and statistically possible) still needs software-side frame pacing/metering/buffering to make things smooth, do you really expect me to believe that an increase in memory transfer rate of maybe 20% at most can alleviate problems that are partially caused by a far greater difference? In architecture, too.

I can sort of agree with InvalidError's remarks about a software fix for a similar problem helping out somewhere else (though in this case it would have been extremely improbable), but definitely not with this "MOAR RAMS FASTER" train of logic.

6) So here's a more refined version of your train of thought, the one I began with before making that post. You can look at the other charts, draw further conclusions, and refine them. I may be very wrong about it, and heck, I'll learn something if I am, but I don't see it happening.

Hope this helped.
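The frame pacing mentioned in point 5 is worth illustrating. A toy model (my own sketch, not AMD's actual algorithm): instead of presenting each frame the moment a GPU finishes it, the driver holds frames so they reach the screen at evenly spaced intervals.

```python
# Toy model of frame pacing, not AMD's actual algorithm: instead of
# presenting each frame the moment a GPU finishes it, hold frames so
# they reach the screen at evenly spaced intervals.

def pace_frames(completion_times_ms, target_interval_ms):
    """Return presentation times: each frame waits for its paced slot,
    unless the GPU delivered it late, in which case it shows at once."""
    presented = []
    next_slot = completion_times_ms[0]
    for done in completion_times_ms:
        show = max(done, next_slot)   # never show before the frame exists
        presented.append(show)
        next_slot = show + target_interval_ms
    return presented

raw = [0, 3, 33, 36, 66, 69]          # bursty AFR pairs ~33 ms apart
print(pace_frames(raw, 16.5))         # -> [0, 16.5, 33, 49.5, 66, 82.5]
```

The bursty pairs (two frames 3 ms apart, then a 30 ms gap) come out evenly spaced, which is why a paced setup can feel smoother at the same measured FPS.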
 

belardo

Splendid
Nov 23, 2008
3,540
2
22,795
I am thinking that this problem may not be fixable. It seems to be a sync/latency issue. A proper SLI or CF setup requires two cards of the same type (2 x 7850, etc.). With Dual Graphics (DG), there isn't a good way to properly sync frames from the APU (internal to the CPU) that have to go out over the PCIe bus to the GPU. If the GPU is waiting... the frame is lost.

Perhaps AMD needs to make its APUs work like Intel's, where the integrated graphics part just does math rather than graphics.

Things like this are why AMD is a mess.
 

MrCommunistGen

Distinguished
Jun 7, 2005
1,042
0
19,310
I read the article right away but haven't had time to post until now. On page 1 it is stated that the GPU in Richland is VLIW5: "Is it even possible to mix a VLIW5-based APU and a GCN-based add-in card using this technology?" It is again implied on page 3: "The boost we measure from the GDDR5-equipped cards is less impressive. But despite mixing the VLIW5 and GCN architectures, there's still a clear improvement."

Trinity and Richland improved upon Llano's graphics by moving to a VLIW4 architecture. The HD 6670s are VLIW5 and, as we all know, the 7750 is GCN.

These details do not change the outcome of the article. That aside, this needs to be corrected ASAP so that there isn't any blatantly incorrect information in the article.
 

Zuesacoatl

Distinguished
Dec 1, 2008
17
0
18,510
With no comparison to an Intel system with either discrete or Intel graphics, it just appears that you are doing nothing but putting down AMD to further promote your love-child platform of Intel. Not to mention, I understand that runt or partially drawn frames are a cheap way to boost numbers, but at 2x speed you cannot tell; you can see the difference only at half speed or slower. Show how Intel stacks up with its current in-house offering. You don't, because the difference is rather big, and it does not show well for Intel either; but this is not a bash-Intel report, it is an editorial meant to push people away from a cheaper but better overall alternative to your preferred platform...

I used to love reading TH's articles; they used to be unbiased and fair, but now the site just seems to be a promotion ground for Intel and Nvidia.
 

PedanticNo1

Honorable
May 3, 2013
26
0
10,530
I'm only commenting on the Skyrim results: the dual-graphics setup may be suffering from stuttering, which Skyrim is infamous for. It would be interesting to see whether applying the Skyrim 64 Hz Stutter Fix http://skyrim.nexusmods.com/mods/2581/? would fix it for that particular config.

My comp runs Skyrim anywhere between 40-60 FPS, and I get MASSIVE stuttering if I don't cap it at 40-43 (any higher and the stuttering becomes perceptible again at various times). Of course, then you have to fiddle with triple buffering, FPrendertargets, the render-ahead limit, and Skyrim's built-in vsync.
 
Hopefully there'll be a similar investigative article on APU-powered laptops with discrete graphics. AFAIK, dual graphics is more prevalent on laptops. A bit of Enduro and battery-life analysis on the side would also be much appreciated. :)
Do software tools like RadeonPro, LucidLogix MVP/Virtu, and HiAlgoBoost affect FPS in dual-graphics configurations?
 