Vega 64 vs GTX 1080 (NOT Ti)

That_Tech_Guy_Again

Distinguished
Aug 27, 2016
201
0
18,690
I was looking at the specs for the Vega 64 vs the GTX 1080 and comparing them; the 64 has a lot more stream processors than the 1080. So my question is, how will this affect performance?
 

That_Tech_Guy_Again

Distinguished
Aug 27, 2016
201
0
18,690


Why is it that when I say "performance", everyone constantly brings up games? How do you even know I am looking at gaming performance? I am talking about stream processors. I want technical specifications; I do not care about benchmarks. Benchmarks in this context are irrelevant.

The Vega 64 has approximately 4,000 stream processors while the 1080 has approximately 2,500. What would hypothetically increase or decrease the performance of a graphics card in relation to its cores? For example, the Vega 64 has more cores at a lower clock speed; the extra cores allow more work to be done IN PARALLEL, which lets it get away with the lower core speed while still obtaining benchmarks similar to the 1080, which has FEWER cores at a HIGHER clock speed. First, is this right? What about bus speed, or other components? How would they relate to core speed? Use a linear, logical progression of reasoning when stating how. I have also heard that the Vega 64 is NOT fully UTILIZED in most applications. For example, say you have a car that can go up to 100 km/h (Vega 64) and another that can go up to 50 km/h (GTX 1080), but the speed limit is 40 km/h: NEITHER card would use 100% of its resources, yet the 100 km/h car would have the higher theoretical limit.
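To put rough numbers on what I mean, here is a back-of-the-envelope sketch (assuming the published boost clocks, roughly 1546 MHz for the Vega 64 and 1733 MHz for the GTX 1080, and counting one fused multiply-add as 2 FLOPs per core per cycle; this is theoretical peak throughput only, not a performance prediction):

```python
# Theoretical-only sketch: peak FP32 throughput = cores * 2 FLOPs (one FMA) * clock.
# Clocks are the published boost clocks; real sustained clocks and utilization differ.

def peak_fp32_tflops(cores: int, boost_clock_mhz: float) -> float:
    """Theoretical peak single-precision throughput in TFLOPS."""
    return cores * 2 * boost_clock_mhz * 1e6 / 1e12

vega_64  = peak_fp32_tflops(cores=4096, boost_clock_mhz=1546)   # ~12.7 TFLOPS
gtx_1080 = peak_fp32_tflops(cores=2560, boost_clock_mhz=1733)   # ~8.9 TFLOPS

print(f"Vega 64  peak FP32: {vega_64:.1f} TFLOPS")
print(f"GTX 1080 peak FP32: {gtx_1080:.1f} TFLOPS")
print(f"On-paper ratio:     {vega_64 / gtx_1080:.2f}x")         # ~1.43x in Vega's favour
```

On paper that is roughly 43% more FP32 throughput for the Vega 64, which is exactly why I am asking whether the "missing" performance in games is just cores sitting idle.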

I ask this because in certain applications, such as Blender (particularly in rendering), the Vega 64 apparently beats the GTX 1080 Ti by 20%, which would seem to confirm my suspicion that not all of its resources are being utilized. However, what if all resources were "activated"? By that I mean: what if, say, the Vega 64 had voltage running through all of its components even when some of those components were not in use? That would explain the higher TDP and the lower benchmarks, correct?
 

That_Tech_Guy_Again

Distinguished
Aug 27, 2016
201
0
18,690


Lol, my thoughts exactly. Just one small problem: money. I do not have enough to buy both, only one card, but I will be doing more than just gaming with it. For example, strictly speaking, in games a higher clock speed means more fps, but does more fps also mean greater detail? For example, 100 fps on low presets vs 60 fps on ultra presets. And what about rendering? Does clock speed even matter there, or does having MORE cores matter? Does the type of core matter? (I do know about IPC and that stuff in relation to optimization.) How would each card fare if, say, they were both running the same code, which was NOT optimized for either card? What about bus width? Data throughput might be important if they need lots of data moving simultaneously, like, say, a particle physics simulation?
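As a rough illustration of the bus width / data throughput point, here is a minimal roofline-style sketch (the bandwidth figures are the published specs, about 484 GB/s for Vega 64's HBM2 and 320 GB/s for the GTX 1080's GDDR5X; the 0.5 FLOPs-per-byte kernel is a made-up example, not a real particle simulation):

```python
# Roofline-style estimate: a kernel is limited by whichever is smaller,
# peak compute or (memory bandwidth * arithmetic intensity).

def attainable_tflops(peak_tflops: float, bandwidth_gbs: float, flops_per_byte: float) -> float:
    memory_bound_tflops = bandwidth_gbs * flops_per_byte / 1000.0  # GB/s * FLOP/byte -> GFLOPS -> TFLOPS
    return min(peak_tflops, memory_bound_tflops)

cards = {
    "Vega 64  (HBM2,   ~484 GB/s)": (12.7, 484.0),
    "GTX 1080 (GDDR5X, ~320 GB/s)": (8.9, 320.0),
}

intensity = 0.5  # hypothetical kernel: 0.5 FLOPs per byte moved (streaming lots of particle data)
for name, (peak_tflops, bandwidth) in cards.items():
    limit = attainable_tflops(peak_tflops, bandwidth, intensity)
    print(f"{name}: limited to ~{limit:.2f} TFLOPS at {intensity} FLOP/byte")
```

In a case like that, both cards end up memory-bound (well below their compute peaks), so the shader count barely matters and the memory system decides the result.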

In the end, what I am trying to get at is that just because a 1080 might beat the 64 in gaming benchmarks, that does not reflect its raw potential, and if that potential could be harnessed, could that not lead to increased performance in the future?

For example, some applications are programmed to work better with CUDA while others work better with OpenCL. This is due to optimization on the developers' part; the real-world performance difference might not be due to one card being better than the other. Hypothetically, let's say we have 2 cards: Card A has 10 cores, Card B has 5 cores.

If

card A only uses 5 cores

card B uses all 5 cores


Assuming both cards A and B have identical types of cores and everything else is identical, except that the program is unable to utilize all the cores of card A, then both cards would have the same real-world performance, correct?

Then what if, one year from now, the program was optimized to use ALL 10 cores of card A? Would it not be TWICE as fast?
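Spelling the hypothetical out as a toy model (this assumes identical cores and perfect scaling, which is the best case and rarely happens with real workloads):

```python
# Toy model of the Card A / Card B hypothetical: identical cores, perfectly parallel work.

def relative_speed(cores_used: int, per_core_speed: float = 1.0) -> float:
    """Throughput under perfect scaling."""
    return cores_used * per_core_speed

card_a_today   = relative_speed(cores_used=5)    # program only uses 5 of A's 10 cores
card_b_today   = relative_speed(cores_used=5)    # program uses all 5 of B's cores
card_a_someday = relative_speed(cores_used=10)   # program later optimized for all 10 cores

print(card_a_today == card_b_today)      # True: same real-world performance today
print(card_a_someday / card_a_today)     # 2.0: twice as fast, but only under perfect scaling
```

Under those assumptions the arithmetic works out to exactly 2x; what I am unsure about is whether real programs ever scale that perfectly with core count.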
 

COLGeek

Cybernaut
Moderator
MERGED QUESTION
Question from That_Tech_Guy_Again : "CUDA cores VS Stream Processors Comparison"





 

That_Tech_Guy_Again

Distinguished
Aug 27, 2016
201
0
18,690


How was it a follow-up question? They were two different lines of enquiry. One was specifically focused on the effect that the type and quantity of cores has on performance, and how that might translate into real-world performance.

The other was a general overview asking which card, the Vega 64 or the GTX 1080, would be better overall: not just in gaming benchmarks, but also in workstation benchmarks, and possibly which card would be considered more "future-proof" (by which I mean the definition of future-proof where a card lasts approximately 5 years before it is recommended that you buy a new one).
 
In order to know whether component X from AMD will outperform component Y from Nvidia, you can only rely on benchmarks.

Whether you intend to use the GPU in question for productivity, mining or gaming... you still need to compare the relevant benchmarks.

If you start looking at core clocks, memory specs, stream processors, ROPs, TMUs, etc., that won't give you the full story (as the two GPUs have different architectures). It's only the benchmarks that count.

Vega 64 has 4096 stream processors. The GTX 1080 has 2560 CUDA cores. But a Radeon stream processor (as found inside Vega 10 XT) is simply not equivalent to an Nvidia CUDA core as found in GP104-400, because they are different architectures: both the core/processor itself and the rest of the GPU it sits in.

So the question 'how will this affect performance?' is not really the right question. All aspects of the GPU architecture and its associated components play a role in overall performance to some extent, such as memory bandwidth, capacity, and latency.

https://graphicscardhub.com/cuda-cores-vs-stream-processors/

 

sla70r

Honorable
Jan 20, 2014
576
2
11,065


If optimizing a card would result in better performance versus a competing brand, the manufacturer would be the first to let everyone know. They wouldn't hide raw potential; that would hurt their sales and the company. Companies manufacture products to make money.
 

That_Tech_Guy_Again

Distinguished
Aug 27, 2016
201
0
18,690




https://www.tomshardware.com/reviews/playerunknowns-battlegrounds-graphics-performance,5619-5.html


Finally, the GeForce GTX 1080 causes slightly less host processing utilization compared to AMD's Radeon RX Vega 64 at a similar performance level. This is a DirectX 11-based game after all, and Nvidia is known to enjoy an advantage in that API due to its heavily optimized drivers.


I was referring to "Driver optimization".
 
First of all, you need to throw away the idea that you can compare Nvidia and AMD GPUs directly by comparing their spec sheets (stream processor count, clock speed, bandwidth, etc.). A few posters here have already said the architecture is different between the two. Even if both cards had the same number of stream processors and the same clock speed, the performance would not end up being the same, because they are built differently.

And you are trying to figure out which one will be better in the future, on the basis that Vega being almost as fast as a 1080 Ti in Blender means Vega is currently under-utilized in games. The thing about software is that it can be optimized for specific hardware. For professional applications this is normal, but that is not usually the case with games. Most often, game developers will only make optimizations that work across the various hardware available at the time (on PC). In short, you are not going to see Vega suddenly become as fast as a 1080 Ti in the majority of games in the future when right now it is only as fast as a 1080.

But can Vega be as fast as a 1080 Ti in games? Yes, it can. It will not be because games finally utilize more of Vega's stream processors, as you want to point out, but because they support extra features inside Vega that do not exist in the 1080 Ti, such as RPM (Rapid Packed Math). The problem with RPM is that the feature is hardware-specific, and game developers need to support it in their games. On PC, the majority of game developers try to avoid this unless they are "sponsored" to support it.
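To give a rough idea of what RPM is worth on paper, a minimal sketch (the 20% figure below is a made-up example of how much of a frame's shader work a developer might move to FP16, not a measurement):

```python
# Illustrative only: Rapid Packed Math lets Vega run two FP16 operations per 32-bit
# lane per cycle, so FP16 peak is roughly double the FP32 peak -- but a game only
# benefits for the shader work its developers explicitly run at FP16.

vega_fp32_tflops = 12.7                    # theoretical FP32 peak from the spec sheet
vega_fp16_tflops = vega_fp32_tflops * 2    # ~25.3 TFLOPS with RPM

fp16_fraction = 0.2                        # hypothetical: 20% of the frame's shader work uses FP16
best_case_speedup = 1 / ((1 - fp16_fraction) + fp16_fraction / 2)

print(f"FP16 peak with RPM: ~{vega_fp16_tflops:.1f} TFLOPS")
print(f"Best-case frame speedup if that 20% runs twice as fast: {best_case_speedup:.2f}x")
```

Which is why RPM only moves the needle in the handful of titles that were built (or sponsored) to use it.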

 
Solution