Are AMD GPUs "discriminated" against by developers?

dminzi

Commendable
May 27, 2016
Hello everyone,

I would really like to purchase an RX 480, but first I want to make sure that I won't face any backlash for my decision to go red. I remember people telling me that certain games ran worse on AMD cards for no real reason; that is, they had specs that would suggest maybe 60 fps at 1080p, but they could only really achieve 45 fps. I would hate to buy a card and find myself unable to play certain AAA titles because developers got money from Nvidia to optimize their games better for team green. This is NOT an AMD vs. Nvidia post...
 
AMD makes the GPUs in all of the major consoles now, and DX12 looks set to be used more in future games; in my opinion, those are two reasons AMD is on the up going forward.
 
Lol, if anything, game developers want their games to work well on all hardware. There are cases where sponsored games run better on the sponsor's hardware, but most games are not like that. If anything, the fact that pretty much every console uses AMD hardware gives AMD an advantage to a certain extent.
 
Consoles have nothing to do with PCs by way of gaming. If you look at the technical specifications of an XBOne or a PS4... they're running (I believe) hyperthreaded octa-core CPUs that run under 2 GHz. Game optimization on either of these consoles is VASTLY different from optimization for a PC. A console has 16 threads that can be utilized; the only PCs on the market that have those are custom-built enthusiast PCs or servers running Xeons. The DX12 benchmarks still haven't been released, and from some of what I've watched from various YouTubers on the DX12 front, AMD's flagship DX12 4K card falls on its face due to its low amount of HBM.

That being said, I'm running team red and having no problems. It's really down to quality-of-life stuff in games that might change... For example, Fallout 4: if you don't have team green, you don't get rocks flying out of the walls when you shoot buildings. Oh... oh no. Earth-shattering. Or wall-shattering, I guess.

The 480 would be a nice choice.
 
Yes, as Turleu3_scratch said, don't worry about it. Team red is coming out with new and improved technology, and personally I have never heard such a claim. What they may have been confused by is that for a number of games, for example GTA V, the developers used Nvidia cards to create the game. If they built the game on Nvidia cards and optimized certain code for a specific process, then inevitably the Nvidia card may be ever so slightly faster, but AMD will then release drivers to improve on what Nvidia has already done. The differences are minute, and you won't be able to see the difference in high-quality, well-made AAA titles.

Like a car, for example: if a part was made for a FORD, it will work better on a FORD car than if you tried to put it on a BMW. If you have a non-FORD/General Motors car, you can still use that part, but you have to make something to make it compatible, and that is what the drivers from AMD and NVIDIA do. The drivers are what make everything work smoothly.

AMD is making history and will continue to. If you want to pay a premium price for a product that may be slightly slower, or just for the Nvidia name, then go ahead; but if you want something that works, isn't expensive, has equal or better performance, and does what it says, then go for TEAM RED.

Money to blow = TEAM GREEN/Nvidia
Money to save/on a budget, or want something that works for a good price = TEAM RED/AMD

Hope this helps !
 
Nvidia-sponsored titles often do little things that give Nvidia a significant advantage. One of those things is hidden surface removal: at least one GameWorks title always renders a water texture below the terrain, where Nvidia's hardware correctly identifies the hidden surface and skips rendering it, while AMD's hardware doesn't and takes a large performance penalty from rendering hidden detail.
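As a rough illustration (my own toy sketch, not actual GPU code, and the scene names and depths are made up), hidden surface removal with a depth buffer works like this: if a surface is already covered by a closer one, a GPU with good hidden-surface rejection skips the shading work entirely.

```python
# Toy illustration of hidden-surface removal with a single-"pixel" depth buffer.
# skip_hidden=True mimics hardware that rejects occluded surfaces before shading.

def render(surfaces, skip_hidden):
    """Shade surfaces front-to-back and return which ones actually got shaded.

    surfaces: list of (name, depth) pairs, smaller depth = closer to the camera.
    """
    depth_buffer = float("inf")  # nothing drawn yet
    shaded = []
    for name, depth in sorted(surfaces, key=lambda s: s[1]):
        if skip_hidden and depth >= depth_buffer:
            continue  # occluded by something closer: skip the expensive shading
        depth_buffer = min(depth_buffer, depth)
        shaded.append(name)
    return shaded

# Hypothetical scene: water sitting below the terrain, as described above.
scene = [("terrain", 1.0), ("hidden_water", 2.0)]
print(render(scene, skip_hidden=True))   # → ['terrain']
print(render(scene, skip_hidden=False))  # → ['terrain', 'hidden_water']
```

With rejection on, the hidden water is never shaded; with it off, the GPU pays for work nobody can see, which is the performance penalty being described.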
 
"Consoles have nothing to do with PCs by way of gaming. If you look at the technical specifications of an XBOne or a PS4... they're running (I believe) hyperthreaded octa-core CPUs that run under 2 GHz. Game optimization on either of these consoles is VASTLY different from optimization for a PC. A console has 16 threads that can be utilized; the only PCs on the market that have those are custom-built enthusiast PCs or servers running Xeons. The DX12 benchmarks still haven't been released, and from some of what I've watched from various YouTubers on the DX12 front, AMD's flagship DX12 4K card falls on its face due to its low amount of HBM."

I don't believe this is correct. Console developers have recently been discussing very positive performance improvements from implementing async compute optimizations. AMD cards dominate in async compute, so a PC port of a console game built around async compute will likely benefit. While Nvidia clearly has a performance advantage in DX11 optimization, they are believed to have a weakness in async compute, which appears to be a hardware limitation not fixable by Nvidia's drivers.
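The basic idea behind async compute is just overlapping work instead of running it back to back. Here's a crude CPU-side analogy I put together (timings simulated with sleep; real GPUs overlap graphics and compute queues in hardware, which this only mimics):

```python
# Crude analogy for async compute: run a "graphics" job and a "compute" job
# back-to-back (serial) vs. at the same time (overlapped).
import time
from threading import Thread

def workload(duration):
    time.sleep(duration)  # stand-in for real GPU work

def serial(gfx, comp):
    start = time.perf_counter()
    workload(gfx)
    workload(comp)
    return time.perf_counter() - start

def overlapped(gfx, comp):
    start = time.perf_counter()
    t = Thread(target=workload, args=(comp,))
    t.start()            # "compute" runs on its own queue...
    workload(gfx)        # ...while "graphics" runs at the same time
    t.join()
    return time.perf_counter() - start

# Made-up durations in seconds, just for the demo.
print(f"serial:     {serial(0.2, 0.15):.2f}s")     # ~0.35s
print(f"overlapped: {overlapped(0.2, 0.15):.2f}s") # ~0.20s
```

Overlapped time is roughly the longer of the two jobs instead of their sum, which is where the free performance comes from on hardware that can actually do it.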
 


That statement is totally false; Nvidia has proven to have some pretty good performance-per-dollar cards (I have both AMD and Nvidia, so I'm not a fanboy). On the other hand, in my experience Nvidia has always been ahead with drivers that take more advantage of their cards.
 
I wouldn't say money to blow/money to save... especially not on the GPU front. It's not like AMD's CPUs, where they're all aimed at budget builds: the 470 and 480 are slotted to pin down a mid-budget gaming PC, but higher-end cards will end up coming in the future... Look at the current (last-gen) red/green lineups:
R7 line: 700 series
R7 370: $120
750Ti: $100
(really the only still-notable choices)
R9 line: 900 series
R9 380: $200
GTX 960: $200
R9 390: $275-325
GTX 970: $275-325
R9 390x: $350-425
GTX 980: $400-450
Fury: $550-600
980Ti: $600-650
Their lines match up fairly closely to one another for what the cards are supposed to run; I expect that to happen for this gen as well.
1070, 1080: both beat a Titan X; the 1080 beats it by a fairly large margin.
The 480 is slotted to beat a 980... which doesn't even touch a Titan X. So of course you get what you pay for: AMD came out with the lower-end next-gen cards first, Nvidia came out with the upper-end next-gen cards first.
 
"Asynchronous compute runs on Nvidia's latest cards also and is an over hyped feature. At this rate it'll take DX12 another year or two to become regularly adopted."

Runs on Nvidia? Check this chart (http://cdn.wccftech.com/wp-content/uploads/2016/05/GTX-1080-Ashes-Of-The-Singularity-DirectX-12-Async-Compute-Performance-1440p-Extreme.jpg) and tell me how much performance was gained by going from DX11 on the 1080 to DX12 + async on the 1080. Now look at the gain going from DX11 on the R9 Fury X to DX12 + async on the Fury X.
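For anyone reading the chart, the gain is just a percentage difference; the fps numbers below are placeholders I made up, not the chart's actual values:

```python
# Percentage performance gain between two fps measurements.
def pct_gain(before_fps, after_fps):
    return (after_fps - before_fps) / before_fps * 100

# e.g. a hypothetical card going from 50 fps in DX11 to 60 fps in DX12 + async:
print(f"{pct_gain(50, 60):.0f}% gain")  # → 20% gain
```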

Also, you said it will take a year or two to be regularly adopted. I don't know most people's GPU purchasing habits, but I tend to use my GPUs for around 3 years when I buy them.

 
Wccftech is a site that I do not support. People tend to use it to back up their own arguments, which is why it gets so much popularity: rumors and exaggerations.

Take a reputable site like Tom's Hardware. Look at the Ashes of the Singularity benchmark: the 1080 has clear dominance. Plus I'm sure InvalidError could provide input on this stuff, since he is very smart.
 

I'm not saying that it will or will not, but there still haven't been enough DX12 benches to actually support that statement. My statement regarding console development was aimed more at the CPU front. On the GPU front, though, I can totally see that. I stand by the view that there aren't enough benchmarks yet to see whether Nvidia's raw increase in power will overtake AMD's better software utilization, or whether AMD will finally win out by having better support in DX12.
 
I agree with you totally, and I also use both cards; I have used AMD and Intel CPUs as well. I am simply saying the amount you pay for some of the Nvidia cards compared to AMD is not possible for some people. Same thing for Intel vs. AMD CPUs: Intel has better single-core performance, but AMD is better for multicore use. AMD CPUs are cheaper than most Intel CPUs because Intel has better and faster parts in some cases, but it all depends on whether people have the money to spend £200+ just on a CPU or want to pay a premium price.

For me, AMD has always been there for the people who don't have a lot to spend. Intel responded with the Core i3-6100, which is slightly better than AMD's FX 6300, I believe, if I'm correct.
 
On the AMD-Intel front, yes, you are 100% correct, but that's not how the GPU market works now. If you want the performance level of a 970 from red, get a 390, for roughly the same price as a 970. If you want the performance level of a 980 Ti, get a Fury/Fury X for roughly the same price. It's not a "these are cheaper than those" thing. You pay roughly the same price for roughly the same performance in nearly every pricing slot; the only one Nvidia doesn't fill is the $200-300 gap between the 960 and 970, and that is currently filled by the 380x. Other than that, if you want the performance of a 960, get a 960... or a 380. If you want the performance of X, get equally priced Y.
 


I'm not sure; I'm not the best at comparing things in a sense. The CPU market is nothing like the GPU market. I was simply trying to say that AMD tries to be cheap, whereas Intel or Nvidia set their prices slightly higher, but it all depends on what the OP wants to do. There will never be a clear "NVIDIA OR AMD IS BETTER" verdict; they will always compete with each other. I think without the rivalry between the two, we might not have the cards and CPUs we have today, because neither party would have anything to push for. There would be nothing to try to outperform or compete against.

Again, sorry for any confusion.
 
I wouldn't really worry about DX12 for now, either. Does anyone know the cost of developing async software? Why do you guys think few games, if any, take advantage of more than 4 cores?

I think it all comes down to "how mucho" you are willing to pay to get a certain level of performance.
 
@genthug

That's why I said "to a certain extent". The fact that the consoles have GCN, and AMD still uses GCN on PC, means developers are actually familiar with the GCN architecture, but it doesn't mean that gives AMD an outright advantage on PC, because hardware configurations still vary on PC. Take War Thunder, for example: they said their game runs quite well on consoles, but on PC it runs much faster on GeForce, because they have a closer relationship with Nvidia.

@invaliderror

In the case of Crysis 2, without Nvidia's help the game would not even have been patched to DX11 and would still have low-res textures. It seems to me Crytek just put tessellation everywhere to satisfy those making noise about a "console downgrade" when the game first came out.

@sleepybp

Except the async use in consoles is not simply applicable to the PC version of the game. The thing is, console hardware has only one configuration for the XBOne and one for the PS4, but PC is not like that. The Hitman devs mentioned that on PC you need to tweak async compute on a per-card basis, because each card has a different compute-to-bandwidth ratio. Go look at Guru3D's review of Hitman: you can see several Radeon cards actually taking a performance hit in DX12! And the game has far more stability issues in DX12 than in DX11.
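To give an idea of what "per-card tweaking" could look like in practice, here's a purely hypothetical sketch: the card names are real, but the ratios, the threshold, and the whole lookup scheme are made up for illustration; actual per-card tuning data isn't public.

```python
# Hypothetical per-card async toggle based on a made-up
# compute-to-bandwidth ratio table. Illustration only.

ASYNC_RATIO_THRESHOLD = 1.0  # made-up cutoff, not a real figure

# made-up ratios per card (real card names, fictional numbers)
CARD_PROFILES = {
    "R9 Fury X": 1.4,
    "R9 380": 1.1,
    "R9 270": 0.8,  # an older GCN part that might lose performance with async
}

def use_async_compute(card):
    """Enable async only for cards whose (hypothetical) ratio favors it;
    unknown cards default to async off."""
    return CARD_PROFILES.get(card, 0.0) >= ASYNC_RATIO_THRESHOLD

for card in CARD_PROFILES:
    state = "async on" if use_async_compute(card) else "async off"
    print(f"{card}: {state}")
```

The point is just that one global on/off switch isn't enough: some cards in the same family would want async on and others off, which is exactly the kind of per-card work the Hitman devs were describing.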
 
The thing is, DX12 is not the magic many people think it is. DX12 will complicate things more and more for developers. They are already burdened with the task of finishing their games in a short time even as games get bigger and more complex, and now you want them to do optimizations for various GPU architectures on top of that? The AoS devs already mentioned that they are not going to do architecture-specific optimization because it would take too much time; instead they will do optimizations they think will benefit most modern GPUs. What happened with Hitman is just the tip of the iceberg. Look at the latest Warhammer: it seems the game will not run in DX12 on GCN 1.0 and Kepler hardware.
 

Yeah, but the 1080 is also a newer, more powerful card; it could perform the best in AotS by brute strength.
If Pascal has improved async compute support compared to Maxwell, you'd expect the 1070/1080 to outperform the previous gen by a larger amount in games that make use of async compute, like AotS. However, if you look at the AotS benchmark in Tom's 1070 review, the performance difference between the 1070 and a Titan X doesn't seem significantly better than for any other game. That contrasts with the Fury cards, whose relative performance jumps hugely in AotS compared to other games.
 

You'll be able to run your games, but you won't be able to use any of the Nvidia-specific features. GPU PhysX will of course no longer be an option (the weapon debris effects in Fallout 4, for example), but neither will several antialiasing options (MFAA, TXAA, FXAA, TrSSAA), ambient occlusion options (HBAO+, which can be forced on in many games that don't support it; Skyrim was a notable example), or VSync options (Adaptive VSync, Fast Sync).

Will you miss any of these things? Probably PhysX, if you play any of those games, but otherwise not likely unless you have two machines side by side. But there is a difference in graphics options that helps in the tweaking process when trying to find the best blend of image quality and performance: you have more settings to play around with.

But the bigger issue is resigning yourself to a card that hasn't been released yet, with no reviews to base a decision on. The discussion surrounding the RX 480 at this point is the very definition of a "hype train", and that is never something you want to blindly hop aboard. And consider that you do have options: cards in the same performance range with similar prices to the RX 480. There are even custom GTX 970s at right about the same price as a custom 480, at about the same performance level. That's an Nvidia card, with no compromises necessary in your graphics options and visual quality. And then there's the certainty that Nvidia will release its GTX 1060 not long after June 29th.

Bottom line: get the facts and don't commit to a decision until everything is analyzed, bugs are revealed, and all the alternatives are considered.
 

Do you mean forcing FXAA at a driver level? Because I can turn on FXAA in games no problem with my R9 380. Also, HBAO+ is an ambient occlusion option, not anisotropic filtering.
 
@TJ Hooker

It needs to be remembered, though, that AoS is an AMD-sponsored game, and before DX12 it was actually developed using Mantle. The jump for AMD in DX12 is significant because their DX11 performance is really poor compared to Nvidia's. The game should actually run faster on AMD hardware, but because the DX11 path was not optimized (intentionally or not), it actually runs faster on Nvidia hardware in DX11. Even for AoS, the devs already mentioned that most of the performance uplift comes from the reduction in CPU overhead. You can toggle async on and off in that game; in HardOCP's test, turning on async gave only a very small performance gain on Radeon. The gain didn't even reach 5% back when they did their test.

Ultimately, AMD and Nvidia handle async differently. AMD specifically needs its ACEs, hardware that is only accessible through DX12. Nvidia doesn't have such dedicated hardware, but if you look at how good Nvidia's DX11 performance is, you'll probably see that Nvidia's hardware doesn't really need async compute the way AMD does with its ACE hardware. And it seems this ACE thing is quite complicated even for AMD.