AMD Radeon R7 260 Review: The Bonaire GPU Rides Again


vertexx

Honorable
I think it's safe to say that 1080p is the mainstream choice, at least in the USA. Just some stats off Newegg:

Of all monitors available, here are the top 5 resolution types:

Resolution      Count   % of Total
1920 x 1080       517      43%
1366 x 768        129      11%
2560 x 1440       119      10%
1280 x 1024       100       8%
1600 x 900         84       7%

Also, looking at all resolution types available, monitors of 1080p resolution or higher account for 63% of all monitor models available.
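
If anyone wants to sanity-check those percentages, the arithmetic is trivial. A quick Python sketch (only the top 5 resolutions are included, and since the catalogue total isn't listed above, it is backed out of the 1920 x 1080 row):

# Quick sanity check on the percentages above. The catalogue total is not
# listed, so it is estimated from the 1920 x 1080 row (517 models ~ 43%).
top5 = {
    "1920 x 1080": 517,
    "1366 x 768":  129,
    "2560 x 1440": 119,
    "1280 x 1024": 100,
    "1600 x 900":   84,
}
total_models = round(517 / 0.43)   # roughly 1,200 monitor models overall
for res, count in top5.items():
    print(f"{res:12s} {count:4d}  {100 * count / total_models:4.1f}%")

The printed shares land on the same 43/11/10/8/7% figures, give or take rounding.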

I think it's safe to say that this reflects what people are buying. Sure, there is some lag between that and what people actually have, but I think anything lower than 1080p is now considered sub-standard.
 

hannibal

Distinguished
Someone above was interested in seeing how the 750 will do against these... The 750 is about as fast as a 650 Ti, so it is slower than these. But the 750 uses 75W and the 260 uses 90W, so they are in quite different categories. We need a 760 or an 860 to compete with these.
 

InvalidError

Titan
Moderator

I do not see why people would not consider it on a technical or performance basis. The main problem with it is that manufacturers need to figure out how much it is worth on the market, and it always takes a couple of months for prices to sort themselves out when overlapping products get introduced.

As for 2GB not making sense at 1080p, that depends on how the games and drivers decide to use the GPU's spare RAM. Having 2GB of RAM opens the door to many performance optimization tricks, like pre-calculating finer mipmaps for all textures instead of relying entirely on multi-sampling brute force. The drivers can also duplicate textures across both memory channels to improve concurrency and load balancing - that's likely the single biggest contributor to higher memory usage on higher-end GPUs with 3-8 memory channels: copy a frequently used texture to four channels and you get roughly four times faster access to it.
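
To put some rough numbers on that duplication trick (back-of-the-envelope only; the per-channel bandwidth and texture size below are made up purely for illustration):

# Back-of-the-envelope sketch of the texture-duplication trade-off described above.
# Assumed figures: 4 memory channels at ~24 GB/s each, one frequently-used 64 MB texture.
channels = 4
per_channel_gbps = 24.0            # assumed bandwidth of one memory channel, GB/s
texture_mb = 64                    # assumed size of the "hot" texture

single_copy_bw = per_channel_gbps              # one copy: reads bottleneck on a single channel
duplicated_bw = per_channel_gbps * channels    # one copy per channel: reads spread across all of them
extra_vram_mb = texture_mb * (channels - 1)    # the price: extra VRAM eaten by the duplicates

print(f"single copy : ~{single_copy_bw:.0f} GB/s peak for this texture")
print(f"duplicated  : ~{duplicated_bw:.0f} GB/s peak, using {extra_vram_mb} MB of extra VRAM")

Same texture, roughly four times the peak read bandwidth, at the cost of memory a 1GB card may not have to spare.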

There do not seem to be many recent numbers on 1GB vs 2GB, and what little benchmarking I have seen seems to indicate Nvidia cards gain 10-20% from it in most games (and 60-90% in some exceptions like Skyrim), while AMD's GPUs mostly seem to fare somewhat worse. The benchmarks I have seen are 1-2 years old (GTX 460, HD 6950 and other oldies), so now might be a good time for a mainstream 1GB vs 2GB shoot-out across 10 or so typical modern titles.
 

derpderpderp

Honorable
I got a Gigabyte 7790 OC with 2GB for around $110-$120 after rebate, lol. The thing eats any game, and that's with an AMD dual-core @ 3.0GHz plus 4GB of overclocked DDR2.
 
Skyrim with mods will definitely benefit from the extra VRAM, and so will newer games with so many textures to load.

I am not saying it would be a lot better, but yes, it would have some advantage in newer games that use just a little more than 1GB of VRAM. In that case a 1GB card would have no headroom left, data would have to be swapped out to system memory, and you would see some dropped frames.

With a 2GB card, on the other hand, if VRAM utilization goes above 1GB there is still plenty of room left for textures and the rest.

That is the case where the extra VRAM really gives an advantage.
 

choochu

Distinguished


Yep, I'm glad I got a 4GB GTX 770, because Skyrim easily uses over 3GB with texture mods and so on.
 

alextheblue

Distinguished
It depends which compact OEM boxes you are writing about. It looks like the NUC ball is starting to roll with those J1800-based boards and if that catches on, low-end systems will reach a whole new level of "hopeless" enthusiast-wise, with systems lacking desktop-style PCIe slots. I would not be surprised if NUC-style systems, albeit with somewhat more powerful SoCs than the J1800, became frighteningly popular in the future.
Agreed. :( The only reason Intel designed such a thing was to create a "new" market and pad their profit margins. "Oh well sure it's more expensive than mini ITX but LOOK! It's SMALLER!"

For those systems, the best upgrade is to put a "NOT FOR SERIOUS GAMING" sticker on them. Then if the owner wants more performance, sell them something with an accessible x16 PCIe slot - even if it can only fit a relatively tame card. Other than that, I suppose even the NUC has its place. I just don't care for it much, myself.
 

InvalidError

Titan
Moderator

At $60-70, the current crop of NUC-style boards is a fair bit cheaper than buying a Celeron CPU and a B85 motherboard separately, so I would expect "padding margins" with those to be a fair bit more difficult than with conventional desktop components.

It looks more like a move to entice people who have large conventional systems they already grossly under-use into upgrading to something in a more convenient form factor, even if they do not need to, just because it is cheap and much smaller.
 

cleeve

Illustrious


You're welcome to disagree. I am curious if you've tried two different cards with the same GPU and different amounts of VRAM, because I think you're making an assumption that the slowdowns you've seen can be attributed to RAM.

You see, I have actually personally tested Starcraft 2 with a large number of graphics cards with varying amounts of RAM, and recorded my results which do not agree with your assumptions... specifically, that 512MB is enough for that game to employ the ultra texture set:

http://www.tomshardware.com/reviews/blizzard-entertainment-starcraft-ii-benchmark,2611-8.html

So based on empirical factual evidence, I will respectfully disagree as well. :)

 

InvalidError

Titan
Moderator

Hybrid Crossfire with a GPU that is 2-3X as fast will produce tons of runt/dropped frames and usually worse overall performance: the IGP (or slow GPU) receives frame "A", the GPU receives frame "B", and the GPU finishes rendering "B" before the IGP is done with "A". Now you have to either wait for "A" or skip ahead to "B", so you end up with either a delay or a skipped frame, both of which cause potentially irritating animation jerkiness.
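
A crude way to picture it (plain Python; the 30 ms / 10 ms frame times are made up to represent roughly that 3X mismatch):

# Crude alternate-frame-rendering timing sketch for a badly mismatched pair.
# Assumed render times: IGP ~30 ms per frame, discrete GPU ~10 ms per frame.
IGP_MS, GPU_MS = 30.0, 10.0
frames = 8

finish = []
igp_busy_until = gpu_busy_until = 0.0
for n in range(frames):
    if n % 2 == 0:                      # even frames go to the IGP
        igp_busy_until += IGP_MS
        finish.append(("IGP", n, igp_busy_until))
    else:                               # odd frames go to the discrete GPU
        gpu_busy_until += GPU_MS
        finish.append(("GPU", n, gpu_busy_until))

# Frames must still be displayed in order, so the fast GPU's frames either
# wait on the slow one (runts shown back-to-back) or get dropped.
shown_at = 0.0
previous = None
for dev, n, ready in finish:
    shown_at = max(shown_at, ready)
    gap = 0.0 if previous is None else shown_at - previous
    print(f"frame {n} ({dev}): ready {ready:5.1f} ms, shown {shown_at:5.1f} ms, gap {gap:4.1f} ms")
    previous = shown_at

The gaps alternate between ~30 ms and ~0 ms - exactly the runt-frame pattern frame-time measurements keep showing on lopsided pairs.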

For best results, you want to crossfire two devices with similar processing power and memory bandwidth so their average frame rendering times line up reasonably well. For the A8/A10-7xxx, that would be something like a DDR3 R7-250. With a GDDR5 version, the GPU would have 3-4X as much dedicated memory bandwidth as the IGP and might already be better off on its own.

Crossfiring Kaveri with an R7 260 might be interesting for curiosity's sake, but past experience with hybrid Crossfire setups has shown that once you take a step beyond IGP-class discrete GPUs, the discrete GPU is usually better off alone.
 

Trachu

Honorable
InvalidError: I understand your logic; however, as you said yourself, you are guessing based on PAST experience. We all know that in the past, for both Trinity and Richland, Crossfire only worked well in benchmarks. However, AMD has recently introduced frame pacing in their newest Catalyst drivers. It is supposed to work on Trinity and Richland too, and the first benchmarks prove it behaves differently (better) than the previous solutions. That frame pacing path is dedicated to older graphics cards without XDMA, though. The Radeon R7 260 and Kaveri both have XDMA, and I am very curious how they work together. I am guessing those new technologies work a bit differently from simply having the faster chip wait for the slower one to finish its work; otherwise there would be no obvious difference from what they deliver.
XDMA and frame pacing are two technologies I can find very little information about, other than marketing bullshit.
For example, I cannot find out whether the R7 250 (probably the best companion for Kaveri) supports XDMA or not.
 

InvalidError

Titan
Moderator

All new AMD GPUs support XDMA, which is nothing more than bridge-less Crossfire over the PCIe bus. It is unlikely to make any performance difference - except maybe a slightly negative one from eating up ~1/16th of the PCIe 3.0 bandwidth. Since AMD could already do hybrid Crossfire before the R7/R9 series, previous AMD GPUs and APUs must have had something equivalent, so there is nothing fundamentally new there.

As for frame pacing, it has been around for several months already and applies to GPUs many generations old, so I would not count it as a "new" technology either... and frame pacing is primarily intended to sort out performance issues between matched GPUs, since that is where AMD's reputation is taking the heaviest blows from jerky frame rates on $1000+ multi-GPU setups.
 

alextheblue

Distinguished


Frame pacing isn't a one-time driver release. ;) They've been continually improving their frame pacing efforts, so saying "it's been around for months" doesn't mean much with regard to the most current releases (and future ones). They just released the new beta (14.1) drivers, the "Mantle drivers". Well, Mantle overshadowed their other work - they also brought their latest frame pacing algorithms to a wider range of configurations, including dual graphics. So while I don't think they're quite there yet, the indications are that there are improvements, probably worth testing with Kaveri + 250. You can disable frame pacing in the drivers too, so it is easy to see what those efforts are actually worth.

Also, XDMA is a bit more than just "we use the PCIe bus now". They could already do that, and did when the CFBI hit its bandwidth limits. The results were not pretty. :( XDMA is not a frame pacing hardware solution either - though it doesn't impair frame pacing, which lets them apply the same frame pacing code to XDMA and non-XDMA setups. But as the name implies, it's a very fast on-chip hardware DMA engine. It lets them bypass the CPU entirely (as well as part of the GPU), meaning a major performance boost over the previous use of the PCIe bus for Crossfire (again, when the CFBI was not enough for 4K and multi-monitor setups). It was a very necessary change for those high resolutions. There are various articles about it.
 

InvalidError

Titan
Moderator

A higher-speed version of the Crossfire bridge would certainly be within AMD's capabilities if they wanted one - they already have the high-speed building blocks in their PCIe, SATA, HDMI, DP, DVI, HT and other interfaces. A pair of 8Gbps lanes based on their PCIe 3.0 PHY would have enough bandwidth to handle 10-bit 4K @ 60Hz... and such a bridge can bypass the GPU, CPU, PCIe controller, PCIe latency, PCIe framing, etc., so it seems unlikely that going over the PCIe bus makes things more efficient.
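
For the curious, the arithmetic behind that bandwidth claim goes roughly like this (assuming 10-bit RGB at 3840x2160, 60 Hz, 128b/130b line encoding, and ignoring blanking and packet overhead):

# Rough check that a pair of 8 Gbps lanes could carry a 10-bit 4K stream at 60 Hz.
# Assumptions: 3840x2160, 30 bits per pixel (10-bit RGB), 128b/130b encoding,
# no allowance made for blanking or packet overhead.
width, height = 3840, 2160
bits_per_pixel = 30
fps = 60

payload_gbps = width * height * bits_per_pixel * fps / 1e9     # ~14.9 Gbps of pixel data

lanes = 2
raw_gbps_per_lane = 8.0
usable_gbps = lanes * raw_gbps_per_lane * 128 / 130            # ~15.8 Gbps after encoding

print(f"10-bit 4K @ 60 Hz payload: ~{payload_gbps:.1f} Gbps")
print(f"2 x 8 Gbps lanes         : ~{usable_gbps:.1f} Gbps usable")

Tight, but it fits - the point being that the raw link speed for a faster bridge is already in AMD's toolbox.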

What it does do is eliminate the expense of ~10 solder balls on the BGA, the traces to the crossfire bridge and the crossfire bridge itself. So the main motive behind XDMA is eliminating the ~$0.30/board cost of adding the crossfire bridge since less than 3% of boards will get used that way.
 

alextheblue

Distinguished


Hmm, I didn't mean it was the only solution. I meant they had to do SOMETHING, fast, and they did. But I think you'll have to ask AMD's engineers why they went with this approach. Why would they implement XDMA on-chip instead of "some traces and solder" and a bridge? That doesn't seem logical to me; I suspect there's more at play here. I don't think it's as simple as a price reduction per card.

For one thing, AMD isn't interested in mass producing cards and crossfire bridges. They sell chips. The cost of off-chip crossfire hardware is shouldered by OEMs and consumers. So unless AMD was pressured into this by the OEMs, I just don't see cost as the primary factor.
 

InvalidError

Titan
Moderator

While AMD might not manufacture the bridges themselves, they do get the GPUs made, need them tested, provide reference designs and a bunch of other things for them, and adding extra pins adds to all the physical aspects of that on top of the internal logic. That adds pennies per chip to chip-level R&D, production, testing, validation and other costs. Rinse and repeat at the board level, which adds yet more pennies per board produced.

Eliminating pins eliminates the associated chip packaging and board-level manufacturing and testing costs. That makes XDMA cheaper and more convenient.

If you think pennies are nothing, consider this: in high volume, there is often less than a $0.05 difference between high-quality capacitors and junk. That is quite insulting when you find junk caps in a $300 device that has only ~$150 worth of parts in it and fails 2-3 years after purchase because of dying caps, yet lasts another 5-7 years once fitted with fresh high-quality caps at an aftermarket parts cost of ~$2 total.

In mass manufacturing, shaving pennies counts - particularly for rock-bottom-priced devices like the R7 240/250. And if they have already done the R&D to enable some more penny-shaving there, they might as well use it for the 260-290 range and beyond, as long as the performance hit is small enough... PCIe 4.0 will be here long before 4K resolutions become mainstream anyway.
 