AMD Radeon R9 300 Series MegaThread: FAQ and Resources

Page 54 - Tom's Hardware community discussion


I would also like one... as I'm hoping to switch to AMD and get a FreeSync monitor. G-Sync is just too expensive, plus I've read G-Sync isn't quite as good: it costs a bit more performance-wise and stutters in some scenarios. I'll wait to see the benchmarks for the R9 390X vs. the R9 490X before pulling the trigger, though, since I'd also need to sell my GTX 980 first for some cash back. :/
 
Does anyone else wonder where all of the raw processing power in these cards is going? If you compare what the world's fastest computers did in the not-so-distant past, you'll see that today's gaming GPUs are faster than some of the previous "supercomputers." I'm not entirely convinced that games really need all this horsepower. I wouldn't be surprised if AAA gaming companies receive a lot of money from the military and similar private organizations as an incentive to write "heavy" code that requires a lot of processing power. Giving that money is much cheaper than paying for the research to develop fast processing chips, which Nvidia and AMD will do anyway to keep up with software demands. If they do that, it's pretty smart too: it keeps a carrot in front of the horse for relatively little cost, and it would let the sponsoring organizations benefit from a market full of inexpensive but extremely powerful GPUs for sifting through huge amounts of data and the like.
 


I wouldn't be surprised. I always go back to my classic example, RollerCoaster Tycoon from 1999. Programmed by a single man in a low-level language, assembly, it could constantly run pathfinding for 3,000 guests in the park with ease on a 200 MHz CPU. It's pretty insane. Look at a game today, and pathfinding for some 50 people will probably have the CPU at nearly 50%.
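To make the point concrete: per-guest pathfinding means running a search like the one below once per agent, so total cost grows linearly with the number of guests. This is just a minimal breadth-first-search sketch on a grid, not RCT's actual algorithm (which is unknown to me):

```python
from collections import deque

def bfs_path(grid, start, goal):
    """Shortest path on a 2D grid of 0 (walkable) / 1 (wall).
    Returns the list of (row, col) cells from start to goal, or None."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}          # also serves as the visited set
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            # Walk the predecessor chain back to start.
            path = []
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and (nr, nc) not in prev:
                prev[(nr, nc)] = cell
                queue.append((nr, nc))
    return None

# Tiny park layout: 1s are walls, so the route has to detour.
grid = [
    [0, 0, 0, 0],
    [1, 1, 0, 1],
    [0, 0, 0, 0],
]
path = bfs_path(grid, (0, 0), (2, 0))  # one call per guest, per repath
```

Each call visits every reachable cell in the worst case, so 3,000 guests repathing on a big map adds up fast even with a cheap search like this.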
 
Without going into details, the number of calculations needed grows fast when you increase polygon complexity and, at the same time, lighting effects and textures/coloring (a.k.a. shading). I don't recall the exact formula, but it's a lot of processing when you scale the numbers up in all directions.
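A rough back-of-envelope shows the multiplicative growth. Assuming simple forward shading where every pixel is shaded once per light (an illustrative model, not the exact formula from the post), the work per frame scales with fragments × lights:

```python
def shading_ops(width, height, lights, ops_per_fragment_per_light):
    # Crude forward-shading estimate: each pixel shaded once per light,
    # at a fixed cost per fragment per light.
    return width * height * lights * ops_per_fragment_per_light

# Bump resolution 720p -> 4K (9x the pixels) and 4 -> 16 lights (4x):
low = shading_ops(1280, 720, 4, 100)
high = shading_ops(3840, 2160, 16, 100)
ratio = high / low  # 9 * 4 = 36x the work
```

That's the "all directions" effect: a 3x step in each linear dimension plus 4x the lights multiplies out to 36x the shading work, before you even touch polygon counts.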

Cheers!

EDIT: An additional issue nowadays is frameworks. They ease the programming burden, but they also make things more expensive processing-wise (a less "to the bone" approach).
 


It takes time to rebuild market share. It won't happen overnight, and I highly doubt Nvidia will stand by and let AMD grab a huge chunk without throwing some competition out there.
 

Yep, it's a certainty that Nvidia has Polaris firmly factored into its mid-summer release schedule. They have a history of raining on AMD's parade: the GTX 580 is a classic example (ruining the debut of the 6970), then the GTX 780 Ti (a couple of weeks after the 290X) and, more recently, the GTX 980 Ti (same with the Fury X). They seem to like setting AMD up, letting the hype train build, and then dashing it with a superior release.
 
Yet if Nvidia doesn't plan on releasing an x80 product this summer, AMD is going to have several months unchallenged with a card better than the Fury X / 980 Ti.

Either they counter sooner than they want with a GDDR5 card, or they let AMD have a lot of time unchallenged while they work on a GDDR5X card.

They aren't exactly known for fighting back with price cuts.

I see AMD being on top for several months, then Nvidia for a couple, then AMD again when both launch HBM2 cards starting next year.
 


I'm not so sure Polaris is going to be that fast... nor do I think GP104 will be. This is based on the recent info on GP100 (which they showed off, but which won't ship until Q1 2017, and then only for the server market). The point is that GP100 is only slightly faster than Fiji in single-precision math, despite being on a new process (10 TFLOPS vs. circa 9 for Fiji). And while it sports HBM2, it isn't a lot faster than what's on Fiji: around 700 GB/s of bandwidth rather than the 1 TB/s we were led to believe.

Sadly, I don't think this generation is going to be all that much faster than the last. It might close the gap on the Fury X and 980 Ti, but so far Polaris 10 actually looks set to replace the smaller Hawaii die instead...
 
GP100 was supposed to be a double-precision monster. Nvidia was defending its position against Intel. If you look at the Top500 list, you'll see that the two companies fighting for that market share are Intel and Nvidia (Phi vs. Tesla).

Personally, I think Nvidia already has GP104 and the smaller chips ready. They just need the right time, and they'll tweak performance according to the competition. They have a working 610 mm² chip right now; it's impossible to think that much smaller chips aren't ready. If GP100 still weren't working, the DGX-1 wouldn't go on sale in June. Being late isn't necessarily bad; as long as they're not more than three months late, they should be fine.
 


The 390X and Fury are not that far apart in performance. Any replacement for the 390X has to at least match Fiji.
 


I feel like a 490X would only see somewhere between a 15-30% performance increase over its predecessor, much like a GTX 1080 over the 980. They keep saying 2x the performance per watt, but if you break that down, they cut the GPU die roughly in half, which means smaller cards and fewer watts consumed. Depending on how much power it saves, you could easily claim 2x the performance per watt with only a 20-30% performance increase. Mind you, I have no idea how big a drop in wattage is realistic with the much smaller chip...
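The arithmetic behind that breakdown is easy to check. If perf/W doubles while raw performance only rises 25%, the power draw implied is 1.25 / 2 = 62.5% of the old card's (the 275 W figure below is a hypothetical stand-in, not an official spec):

```python
def new_power(old_power_w, perf_gain, perf_per_watt_gain):
    """Power draw implied by a perf/W multiplier and a raw-performance
    multiplier: new_perf / new_power = perf_per_watt_gain * (old_perf / old_power)."""
    return old_power_w * perf_gain / perf_per_watt_gain

# Hypothetical 275 W predecessor, +25% performance, claimed 2x perf/W:
watts = new_power(275, 1.25, 2.0)  # about 172 W
```

So the marketing claim and a modest 20-30% performance bump are perfectly compatible; most of the "2x" would be coming out of the power budget, not the frame rate.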
 


It could be, but this wouldn't be the first time it was done this way. After the Radeon 9800 XT, ATI launched the X800 XT, then the X850 XT, and then the X1800/X1900. The X meant 10, so it was in reality the Radeon 10800 XT.
 


The difference is the preceding part of the name... In nVidia's case it sounds kind of dumb: GTX X80. Might as well just use the core name this time around... Although nVidia also thought it was a good idea to name the small chip 104 and the big one 100, lol.

Well, nVidia's marketing team has always been good at their jobs, so even if they name them GeForce GTX XXXXX80XXXtiXX people would say it's a terrific name, haha.

Cheers!
 


Did you forget about the ATI Radeon X850 XT Platinum Edition?

I think ridiculous names come with the game.
 
They can always go back to the old scheme and place the designations after the numbers, then start over with the 200 series again.

That would get them all the way back up to the 980 GTX before they had to come up with a new idea 😛
 
It would be cool if they just had a name and a number. Like, the GTX 780 Ti, 780, 770, 760, 750 Ti, 750, 740, and 730 could have been Kepler 1, Kepler 2, ... Kepler n. Since they release slower cards first, they would release higher numbers first and then go down. Once they got to 1, it would be the end of the series and time for a new one.
 
http://www.newegg.com/Product/ProductList.aspx?Submit=ENE&IsNodeId=1&N=100007709 600565504

Newegg is running out of R9 390s. They only have one non-reference card for direct sale, excluding open-box units (and of course it's VisionTek, the company that has never heard of a price cut, lol).

Polaris launching a bit ahead of schedule? AMD ceasing production of 390 GPUs too early? Newegg just not paying attention to their stock levels?
 


An early launch would be nice, but if AMD dropped production too early, it would let Nvidia grab sales that might otherwise have gone to AMD.

Maybe a problem at a fab.
 
Any good sources on overclocking the Fury X? I just picked one up and would like some ballpark numbers as a reference so I at least know where to start: recommended ranges for power limit, voltage (core and memory), and core and memory clocks. I'm hoping the silicon gods have looked kindly upon my card, fingers crossed.