Discussion: Polaris, AMD's 4th Gen GCN Architecture



Fair point, and I should not be assuming things. Some evidence regarding performance would be fantastic, though.
 
 


As I have no idea what evidence you are after, I would suggest you exercise your powers of Google-Fu and go looking for it.
 
With regard to "optimizations in games", both AMD and nVidia do it. We can argue about the degree, but both do it.

Mantle turned out to be good for everyone, fortunately, but if AMD had gained enough traction to destroy nVidia, they would have taken the chance in a heartbeat. Now, look at the "FreeSync" dilemma. AMD and nVidia have their own competing solutions to the same (or at least fundamentally the same) problem with monitors and their refresh rates. One comes from the consortium backed by AMD, and the other is pushed (AFAIK) by nVidia alone. If AMD weren't using the DisplayPort standard for it, no one would have even tried to adopt it. In nVidia's case, they didn't even have to try hard to gain traction. The reverse example is AMD's audio engine in GCN; anyone remember that? Yeah, I thought so too.

Cheers!

EDIT: Changed reply.
EDIT2: Final argument about the current thing =X
It's 0.1A if applied directly to the heart, but through the fingers or feet you need around 0.5A, from what I remember. The body is not a perfect conductor, so you still need to get past the skin/tissue resistance first 😛
 

Slight correction here. Only 0.1A (100mA) across the heart is fatal.


Applied in just the right place(s), a AAA cell can be fatal.

And now back to our regularly scheduled topic.
 


Optimized does not automatically mean crippled for the other vendor. That is a term only die-hard fanboys (not saying you are) tend to use to try to get their point across.

Optimized means that they utilized the specific hardware better. It is much like with CPUs. Right now you have the FX-8350, which supports AVX but not AVX2, and the i7-6700K, or even the older i7-4770K, which do support AVX2. If the software supports the faster AVX2 path and is written to use it when it is available, then it would technically be optimized for that specific feature only on Intel's CPUs. That doesn't mean they are crippling AMD, as AMD's CPU simply does not support it.
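
To make that concrete, here is a minimal sketch (GCC/Clang on x86; the function names are made up for illustration) of how software usually handles this: it checks at runtime whether the CPU reports the feature and only takes the optimized path when it is there, otherwise it falls back to generic code. Nothing gets crippled on the CPU that lacks the feature; it just takes the plain path.

Code:
#include <stdio.h>

/* Stand-ins for a real optimized kernel and its plain-C fallback. */
static void work_avx2(void)    { puts("running the AVX2-optimized path"); }
static void work_generic(void) { puts("running the generic fallback path"); }

int main(void)
{
    __builtin_cpu_init();                 /* populate CPU feature info (GCC/Clang builtin) */
    if (__builtin_cpu_supports("avx2"))   /* true on an i7-4770K or i7-6700K */
        work_avx2();
    else                                  /* e.g. an FX-8350, which has AVX but not AVX2 */
        work_generic();
    return 0;
}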

Either way, AotS has consistently performed better on AMD GPUs. What is funny is that it is an RTS, which is normally very CPU-bound due to all of the AI involved, and if you look at benchmarks like this one:

http://www.pcper.com/reviews/Graphics-Cards/DX12-GPU-and-CPU-Performance-Tested-Ashes-Singularity-Benchmark/Results-Heavy

It shows that the CPU plays a big role in performance. What gets me is that while nVidia doesn't seem to benefit from async compute in AotS, its DX11 results are very close to some of AMD's DX12 results, which could mean that AMD's DX11 drivers are just very poor and they are focusing more on DX12 optimization, since it will benefit them vastly more.

Who knows. All I know is that when hardware companies get involved with game developers I do expect a bit of favoritism, since they help the devs utilize the hardware to the best extent, and the hardware vendor in turn can optimize its drivers for the game better.
 


Haha, Jimmy. Thank you for opening Pandora's box by bringing in "compilers"!

So, in your own example, that is a "fair" exclusion. The "unfair" exclusion is when your hardware can use the same set of extensions, but the developer chooses not to invoke them in the code when one vendor's hardware is present. And yes, I am talking about the lawsuit between AMD and Intel. NEVAR FORGET!

In any case, I haven't seen or read anything that would point that way for AMD and nVidia. A dev would have to be really foolish to do that in their code, since they would alienate their user base hard. If someone were to find evidence of this, we should all raise the pitchforks right away.

Cheers!

EDIT: Fixed Engrish.
 


I did not bring in compilers. You can optimize for an instruction set without compilers; compilers just do all the work for you.

It was simply an example of how you can optimize based on features a CPU does or does not support, much like how you can optimize a game to utilize one GPU better than others.
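
For what it's worth, "optimizing for an instruction set without the compiler doing it for you" usually looks something like the sketch below: a hand-written AVX path using intrinsics next to a plain-C fallback. The function names and the assumption that the length is a multiple of 8 are mine, just to keep the illustration short (build the AVX part with -mavx on GCC/Clang).

Code:
#include <immintrin.h>

/* Plain C fallback: works on any x86 CPU. */
float dot_scalar(const float *a, const float *b, int n)
{
    float s = 0.0f;
    for (int i = 0; i < n; i++)
        s += a[i] * b[i];
    return s;
}

/* Hand-written AVX path: processes 8 floats per iteration.
   n is assumed to be a multiple of 8 to keep the sketch short. */
float dot_avx(const float *a, const float *b, int n)
{
    __m256 acc = _mm256_setzero_ps();
    for (int i = 0; i < n; i += 8) {
        __m256 va = _mm256_loadu_ps(a + i);
        __m256 vb = _mm256_loadu_ps(b + i);
        acc = _mm256_add_ps(acc, _mm256_mul_ps(va, vb));
    }
    float tmp[8];
    _mm256_storeu_ps(tmp, acc);
    return tmp[0] + tmp[1] + tmp[2] + tmp[3] +
           tmp[4] + tmp[5] + tmp[6] + tmp[7];
}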
 


Compilers are pieces of software that have to be manually optimized as well, in case you don't remember.

It sounds weird because I used the least appropriate words, haha, but the point is that compilers are not magical things that just spit machine code out of nowhere. They have stages that are designed by humans and, mostly, optimized by humans with rules along the lines of "use this instruction for this type of invocation" and so on. It's not (only) that you optimize compilers to do their job "faster".
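
As a toy illustration of that point (everything below is invented for the example; it is not any real compiler's code): the "use this instruction for this pattern" rules inside a compiler back end are ordinary code written by people. This snippet hand-codes a single peephole rule that rewrites a multiply by a power of two into a shift.

Code:
#include <stdio.h>

enum op { OP_MUL, OP_SHL };

struct instr {
    enum op op;
    int     operand;   /* constant multiplier or shift amount */
};

/* One hand-written rule: x * 2^k  ->  x << k  */
void peephole(struct instr *ins)
{
    if (ins->op == OP_MUL && ins->operand > 0 &&
        (ins->operand & (ins->operand - 1)) == 0) {
        int shift = 0;
        while ((1 << shift) < ins->operand)
            shift++;
        ins->op = OP_SHL;
        ins->operand = shift;
    }
}

int main(void)
{
    struct instr i = { OP_MUL, 8 };
    peephole(&i);
    printf("op=%s amount=%d\n", i.op == OP_SHL ? "shl" : "mul", i.operand);
    return 0;
}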

Cheers!

EDIT: Clarified point.
 
Release dates don't matter too much anymore. What matters is when supply catches up and aftermarket cards finally hit the market. The 1060 may "officially" be released July 14, but the date that actually matters is closer to the end of August. The RX 480 seems to be having better supply so far than the 1070 and 1080, and that is good, but we still have to wait a while for custom cards to actually arrive. I'd say custom 480s will be released by mid-July, and it'll probably take a week or so for supply to catch up to demand. Nvidia, on the other hand, can't seem to manufacture as many chips at a time. I think the RX 480 will probably have a month with no competition in terms of performance, if the GTX 1060 is anything like the 1070 and 1080.

The GTX 1060 will probably sell for more than the RX 480 and perform better, probably more like a GTX 980. This will still leave AMD with virtually no competition in the <$230 price range, as I fully expect the GTX 1060 to come in at $250, and if they do the Founders Edition approach with mid-range cards, $300 for that edition and many aftermarket cards. With the RX 460 and 470 coming out, I feel AMD will have achieved their goal: taking the low-end market for some period of time before Nvidia releases their low-end cards. Who knows when we will get a GTX 1050, or a 1040 for that matter. If AMD also releases an RX 350 and 340, they will have secured the entire low-end market segment with virtually no competition, while Nvidia holds the upper segment with no competition.
 
It seems like both AMD and nVidia are streamlining their offerings. AMD has not mentioned anything about possible "X" versions, and nVidia never made a 960 Ti as many were hoping. I can't count how many times I saw fanboys telling people not to get a 380X because the 960 Ti was "coming soon" and would "crush" it. It does make sense, but it would be nice if they would just announce that they are dropping those naming conventions, if that is their intention. Then again, it's possible that since the 9XX and 3XX series were more of a stop-gap until 16/14nm was ready, they didn't find it worth making a full lineup.

Another thing of note is the progress in IGPs; I have a feeling that the 460 might just be the bottom of the full product stack, especially if they are properly incorporating Polaris into Zen.
 
Suddenly my comment is gone. Any mods feel responsible? :> There's nothing in my inbox.

Anyway: yeah, mid-July is what I was saying for both the AIB 480s and the 1060. If the availability of aftermarket 480s matches the reference cards at the moment, then we should be able to get our hands on one without too much of a hassle. Hopefully some reviews arrive soon, as we know there already are a few custom 480s out there.
 


nVidia's "poor" scaling to DX12 doesn't have anything to do with architectural deficiencies or poorly optimized drivers; it shows that their DX11 performance was much better than AMD's.

It was a well-known fact that AMD GPUs relied heavily on the host CPU, more so than nVidia GPUs. This was illustrated a few times by various websites, which showed that while an nVidia GPU could put out decent framerates with a relatively low-end CPU, AMD's performance scaled with the CPU from low end to high end. This was why AMD made the push they did with Mantle, which then got the ball rolling for DX12. For whatever reason (only the people inside AMD's graphics group know), AMD couldn't seem to unlock the potential of their hardware under DX11. They knew it had to do with the way DX11 handled multithreading. There is no way of knowing what was holding it back, or whether it was ultimately fixable; what we do know is that AMD thought it was worth it to develop a new API to work around it. So while AMD's performance in DX12 is good, which is good for them, it is still overshadowed by the fact that there are a few orders of magnitude more games for DX11 than there are for DX12.

Async compute is a benefit, but it wasn't the reason AMD made the push for DX12. However, they had a hand in crafting DX12, so they made sure it was part of the new API, because they knew they had hardware in place to take advantage of it. There was never any doubt that AMD's approach to async would perform better than nVidia's; it's always been better to support something in hardware rather than software.
 
FINALLY my system is working somewhat right:

3D Mark Firestrike - 8771

http://www.3dmark.com/fs/9052550

Now mind you I'm running a 5+ year old system, which is definitely dragging this down.

Now, my best score with my two R9 280s in Crossfire was 9216 (in Win 7 I managed a 9646, but since Win 10 it somehow dropped):

http://www.3dmark.com/fs/8031025

I'm a bit disappointed, as I figured the RX 480 would beat it; however, it's close enough to not really matter at all.

Also, subjectively, the RX 480 ran it better. What do I mean by that? In short, Crossfire sucks. Visibly, the RX 480, even at slightly lower FPS, had far less screen tearing and ran noticeably smoother. The fact that it uses 250 watts less at full load helps as well. And finally, obviously, Crossfire only works in some games; the RX 480 will work in all games.

So in the end I'm still happy with my purchase, I'm glad to be done with Crossfire (for now), and I look forward to any upcoming improvements as the drivers are sorted out more.
 


You'll save money on the electricity bill now.
 
Downloaded the DOOM demo; it ran between 60 and 130 fps with the average in the 70ish range on Ultra with everything turned up. Smooth as silk. I should have tried it with my old setup, but even my "ancient" system has no problems with it using the RX. Very happy with it.
 


Was this with the Nightmare shadow setting and Nightmare Virtual Texturing Page Size setting?
 
Oh, I just read your update, Rouge Leader.

Ever since I had to toy with a friend's XFire setup using 6870s, I dropped the idea of any form of SLI or XFire. It didn't work badly, but it was just a pain to set up the profiles manually for each game and toy around with "hidden settings" (all hail RadeonPro back in its day) to get it smooth.

In any case, nice to read your initial impressions are rather positive.

Once the custom-cooled versions come out, will you consider upgrading the HSF, or will you swap the card for one of them?

Cheers!
 


That's exactly why I shouldn't respond. People don't understand what I'm saying, unfortunately.
 


Well, try to improve on it, I guess! :)
 


Hmm, I didn't even know that setting existed or look for it. I went to the presets, selected Ultra, and assumed it put everything as high as possible. I'll look for that tonight.



CF is just frustrating; it's back to the old days where every time you get a game you have to mess around to get it working right. And microstutter is a real thing. I think that's what makes the difference: even though the RX gets a lower 3DMark score, it "seems" to run better.

TBH I'll keep it as is, because as you can see my processor is still a bottleneck, so I don't see any performance benefit from the card running faster. I do plan on completely rebuilding my system once I know what Zen looks like; at that point Vega will be out or close, as will whatever Nvidia's even higher offering is, so I will likely replace it then (or go Crossfire again if it sucks less by then).
 


What are your temps and % GPU usage, if I may ask?
 


The GPU maxed out at 81 deg C in Firestrike and DOOM. I didn't pay attention to GPU usage but when I try those other settings again tonight I'll take a look.
 