AMD Radeon R9 285 Review: Tonga and GCN Update 3.0

Status
Not open for further replies.

sleepyman

Honorable
Oct 3, 2013
5
0
10,510
A few things to take into account when reading this review. First, the R9 285 that is tested is clocked at 918 MHz, not 954 MHz (read the small print). Second, the power consumption numbers measure an R9 285 clocked at 973 MHz instead of the 918 MHz card that is used for the gaming benchmarks. Third, there are no overclocking results.

Some additional tidbits about my third point. If rumors are true, the GTX 970 will run at around 1200 MHz. Also, the GTX 770 is clocked at a little over 1100 MHz when playing games. Maxwell maxes out at about 1400 MHz, Kepler at about 1325 MHz, and GCN at about 1250 MHz. It seems the R9 285 has more overclocking headroom than the GTX 770, and the soon-to-be-released R9 285X will have more overclocking headroom than the soon-to-be-released Maxwell cards. Also note that the R9 285 is priced to compete with the GTX 760, even though an R9 285 with a max OC should be closer to a GTX 770 with a max OC than to a GTX 760 with a max OC.
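[Editor's note: the headroom claim above can be sanity-checked with quick arithmetic. All clock figures below are the post's own estimates, not measured numbers.]

```python
# Relative OC headroom = (architecture max clock - typical game clock) / typical game clock.
# Clocks in MHz, taken from the post's estimates above.
cards = {
    "R9 285 (GCN)": {"typical": 918, "arch_max": 1250},
    "GTX 770 (Kepler)": {"typical": 1100, "arch_max": 1325},
}

def headroom_pct(typical: int, arch_max: int) -> float:
    """Percent clock increase from typical game clock to the architecture's rough ceiling."""
    return (arch_max - typical) / typical * 100

for name, c in cards.items():
    print(f"{name}: ~{headroom_pct(c['typical'], c['arch_max']):.0f}% OC headroom")
# prints ~36% for the R9 285 and ~20% for the GTX 770
```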
 

hannibal

Distinguished
Good architecture upgrade, but it needs a production node upgrade to really make a difference. Interesting to see that both Nvidia and AMD are heading in the efficiency direction with their new models.
Hopefully this is cheaper to produce than the 280 and improves production margins. We need these companies to stay afloat...
 

h2323

Distinguished
Sep 7, 2011
78
0
18,640
I think the naming scheme is fine; it just doesn't fit how you all want it. GCN is also correctly described as GCN 1.2 in other reviews. Power figures are all over the place. Great card.
 

MrstimX

Distinguished
Sep 1, 2014
32
15
18,535


+1
My hopes as well. We need AMD strong.
 

chaospower

Distinguished
Mar 8, 2013
67
2
18,640
AMD will do fine even without these cards since they got the consoles. This new card is an attempt to make more profit out of each card sold by reducing the cost to manufacture while increasing what it costs us, the consumer. The new card is slightly better than the 280 and doesn't improve efficiency that much, only about 10% according to TechPowerUp, for instance. The R9 280 is selling for about $200-$220, which is also not that impressive considering the same card under its older name (HD 7950) sold for less than that a year ago! So we're paying a lot more than what we paid a year ago, getting a tiny bit more in terms of performance and efficiency, and this is called progress?
 

InvalidError

Titan
Moderator

They do not really have any other choice: GPUs with wide memory controllers cost more to make so they have to make their GPUs more memory-efficient if they want to improve their performance-per-buck metric. This usually comes with improved performance-per-watt too.

If they fail to reduce their memory bandwidth dependence, GPUs' performance and cost will end up dictated entirely by memory interface bandwidth since the shaders will become starved for data an increasingly large proportion of the time.
 

seinfeld

Distinguished
Jan 31, 2007
103
0
18,680
I would have also liked it if you had included results from the 7800 series and 7900 series cards, as this one isn't much faster than those! It would put AMD's lack of performance as a company into perspective: still using old CPUs and respun, rebranded video cards from four years ago, with barely 5% speed improvements.
 

horaciopz

Distinguished
Nov 22, 2011
446
0
18,960
It shocks me how the GTX 760 is stacking below the R9 270X, as I bought it thanks to your recommendation of it being the best $250 card at the moment. I don't mind that AMD has a better product now, which is great, but that a $170 card is the same speed or faster than your recommendation is senseless. Well, I think that drivers and brand optimizations pay off for the people aiming for AMD cards. Good article, btw.
 

Avus

Distinguished
Nov 2, 2001
356
1
18,780
I am guessing this card should have a 4GB variant (double the RAM) like other AMD/Nvidia cards. Do you think a 4GB model would help a lot, especially in Mantle?
 

heero yuy

Distinguished
Jul 25, 2010
494
0
18,810
I'm pretty sure in the 30-year live stream thingy they said something about 4GB versions being a thing,

although that could be the unannounced R9 285X
 

saturn85

Honorable
Mar 26, 2012
10
0
10,510
Is that a bug or what in the GPU-Z image?

Asus: Bus Interface: PCI-E 3.0 x16 @ x16 1.1 (stuck at PCI-E 1.1 x16 mode?)
Gigabyte: Bus Interface: PCI-E 3.0 x16 @ x1 1.1 (stuck at PCI-E 1.1 x1 mode?)
 

InvalidError

Titan
Moderator

Probably nowhere near as much as people might expect: most of the extra RAM on 384/512-bit boards simply holds extra copies of data already on other channels to make it more available, so all channels get more even loading.

Without the extra channels, the GPU has no need for those extra copies and the associated RAM usage. This is why many games' GPU memory usage scales with channel count... for identical settings, people with dual-channel GPUs may see ~1.6GB usage, while people with triple-channel GPUs might see 2.2-2.4GB and quad-channel GPUs may show ~3GB... most of the usage is simply the same 700-900MB artwork payload getting replicated across two, three or four channels for bandwidth multiplication.
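[Editor's note: a back-of-the-envelope sketch of the per-channel replication model described above. The payload and non-replicated figures are assumptions chosen to match the rough numbers in the post, not measured values.]

```python
# Model: total VRAM usage ≈ non-replicated buffers + (artwork payload × channel count),
# since the shared artwork is mirrored into every memory channel for bandwidth.
PAYLOAD_GB = 0.7         # artwork copied into each channel (assumed, within the 700-900MB range quoted)
NON_REPLICATED_GB = 0.2  # buffers that exist only once (assumed)

def estimated_usage_gb(channels: int) -> float:
    """Rough VRAM footprint if the payload is replicated once per channel."""
    return NON_REPLICATED_GB + PAYLOAD_GB * channels

for n in (2, 3, 4):
    print(f"{n} channels: ~{estimated_usage_gb(n):.1f} GB")
# prints ~1.6 GB, ~2.3 GB, ~3.0 GB — in line with the dual/triple/quad figures above
```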
 

cleeve

Illustrious
GCN is also correctly described as GCN 1.2 in other reviews.

Not really.
AMD has never officially called Hawaii's GCN implementation 1.1, nor has it named Tonga's 1.2. AMD has avoided distinguishing nomenclature between iterations entirely; there is no official designation.

These are terms invented by the press, just like GCN update 3.0. :)
 

FormatC

Distinguished
Apr 4, 2011
981
1
18,990



This is normal. Without load, GPU-Z always shows only the idle link state (@ 1.1). :)
 

h2323

Distinguished
Sep 7, 2011
78
0
18,640



Yes, I understand that from other reviews, but they will be using GCN 2.0 when that design actually arrives. So other publications' use of GCN 1.1 and 1.2 makes much more sense and is less confusing for people as time moves forward, when AMD actually does release 2.0 and 3.0.

 

Brian Blair

Reputable
Mar 20, 2014
128
0
4,690
So they're going to charge the same price as the 280X for a card that is only slightly better than my R9 270? Way to undercut and rip off your customers, AMD! Now your cards will match your bad, artifact-causing drivers.
 

logainofhades

Titan
Moderator


Considering the price of some of the 280x out there right now, I do agree that $250 is a bit high. I'd rather spend the extra $25 and get a Sapphire 280x. Since it is replacing the R9 280, it should be priced about the same. You can get those around $225.
 

Dyseman

Distinguished
Feb 8, 2009
141
2
18,680
Well, hell. I just bought the kid an R9 270X two days ago... Should I keep it or exchange it for a 285? Same price (where I bought it).
 

somebodyspecial

Honorable
Sep 20, 2012
1,459
0
11,310
AMD will do fine even without these cards since they got the consoles.

ROFL. Check their financial reports since the consoles hit and let me know how that's helping. Have you seen the console sales numbers? Do you realize they're only getting 10-15% margins right now? Even at 20%, at some point the earnings from this can barely cover the $180 mil in interest on their debt each year, not to mention whatever they'll owe GF again, most likely due to yet another take-or-pay type fine (I hope that is over, but we'll see). Between the two consoles they've only sold ~16 mil (10 mil for Sony, maybe 6 mil for MSFT). That isn't enough to say the consoles will support AMD, and SoCs from all vendors are moving in on their performance and will do so yearly. The price of games on Android/iOS/Steam etc. vs. $60+ games on consoles is already a VERY tough sell and will continue to get worse.

My guess is that at 20nm or 14/16nm you'll see a two-chip Android console (as in 2x M1 or whatever NV names the K1 successor) with a 100-150W PSU that catches them and makes them a moot point. Or some arrangement like this, such as an SoC with just the CPU portions paired with an NV discrete GPU. The biggest mistake the consoles made was going so low that a tablet SoC could catch them very quickly (less than halfway into their life cycles easily; we're not even a year in now, with 20nm around the block). Google or Apple could put out a special SoC with extra GPU units (SMXs for NV etc.) just for this purpose. Tablets with K1 (and its 20nm future versions, plus all competitors) will already be darn good at running games on your TV via HDMI/Miracast etc., and again with far cheaper games. You won't get the same sales this gen for consoles vs. last gen, with all the CHEAP game choices people have during the second half of consoles' life cycles. Usually that is when the casual gamers pick up the slack, but that won't happen this time. They'll have far more options to choose from (Steam boxes also), and those get better yearly for everyone EXCEPT consoles.

AMD needs to start making money from their core products (CPU/GPU/APU) beyond the consoles, or they'll get nowhere fast.
 