Discussion: AMD Navi RX 5700 XT picture and specs leaked *off topic*

Hello,

You may or may not be aware of this, but Videocardz has just leaked some specs and info on AMD's upcoming Navi GPU.

The AMD Radeon RX 5700 XT features 40 Compute Units (2560 stream processors). The Navi GPU is clocked at 1605 MHz base, 1755 MHz in Game mode, and 1905 MHz in boost mode. Of course, the new addition here is the Game Clock.

With the said boost clock, AMD expects a maximum of 9.75 TFLOPs of single-precision compute from the Radeon RX 5700 XT. The card is also confirmed to feature 8 GB of GDDR6 memory, which should run across a 256-bit bus interface according to Videocardz; the memory clock, pricing, and availability date were not available at the time of writing.
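As a quick sanity check, that TFLOPs figure follows directly from the leaked shader count and boost clock, since each stream processor can issue one fused multiply-add (2 FLOPs) per clock. A minimal Python sketch of the arithmetic:

Code:
# Peak FP32 throughput from the leaked specs.
# Each stream processor does one FMA per clock = 2 FLOPs.
stream_processors = 2560
boost_clock_ghz = 1.905

tflops = stream_processors * 2 * boost_clock_ghz / 1000
print(f"{tflops:.2f} TFLOPs")  # 9.75 TFLOPs, matching the leak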

https://videocardz.com/80966/amd-radeon-rx-5700-xt-picture-and-specs-leaked

https://wccftech.com/amd-radeon-rx-5700-xt-7nm-navi-gpu-rdna-specs-leak-8-gb-2560-cores/
 
The Navi siblings are overpriced* and underwhelming to me. The "RDNA" re-brand of GCN feels cheap after all the good effort AMD has been putting forth. Those price points make zero sense to me and the feature set they'll bring is meh. I hope partners bring better value at the MSRP with non-blower designs.

All their power savings went down the drain to reach parity with nVidia, and you can see that easily. These being """mid-range""" cards and needing an 8-pin + 6-pin power connector is baffling. This smells more like a failed attempt at a flagship card than anything else. I hope I'm wrong and they'll have a beefier Navi (or whatever) announced shortly, because these two cards are underwhelming at those price points.

I'm also wondering about the massive gap they have under $350 though. Polaris refreshes aplenty? Ugh... Please no.

Cheers!
 

King_V

Illustrious
Ambassador
The "RDNA" re-brand of GCN feels cheap

This seems like a very unfair description. It's not a rebrand of GCN.

"Although the company says RDNA is all-new, vestiges of Graphics Core Next are clearly identifiable throughout."

There are probably good things in GCN that are worth keeping. I'm sure the same can be said of Nvidia's architectures: some old things that were worthwhile became part of the new architecture.

After all:
vestige

noun
1. a mark, trace, or visible evidence of something that is no longer present or in existence: A few columns were the last vestiges of a Greek temple.

2. a surviving evidence or remainder of some condition, practice, etc.: These superstitions are vestiges of an ancient religion.

3. a very slight trace or amount of something: Not a vestige remains of the former elegance of the house.

4. Biology. a degenerate or imperfectly developed organ or structure that has little or no utility, but that in an earlier stage of the individual or in preceding evolutionary forms of the organism performed a useful function.

5. Archaic. a footprint; track.
 
Actually, RDNA is a new and efficient shader architecture. With RDNA also comes a brand-new Compute Unit (CU), which AMD has redesigned to increase IPC, or single-thread performance.

The architecture is new, but it does share some design elements with GCN; the compute units, though, have been completely overhauled. "Navi 10" also features a redesigned cache hierarchy: each RDNA dual-CU has a local fast cache AMD refers to as L0 (level zero).
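To illustrate what an extra level like that means in practice, here's a toy Python model of a multi-level cache lookup. The level names follow the post, but every latency below is an invented placeholder, not an RDNA spec:

Code:
# Toy model of a multi-level GPU cache lookup. The level names follow the
# post; the latencies are invented placeholders, NOT RDNA's real figures.
LEVELS = [("L0", 4), ("L1", 12), ("L2", 40)]  # (name, hit latency in cycles)

def lookup(address, contents):
    """Walk the hierarchy; return the level that hits and cumulative latency."""
    latency = 0
    for name, cycles in LEVELS:
        latency += cycles
        if address in contents.get(name, set()):
            return name, latency
    return "VRAM", latency + 200  # miss at every level: pay memory latency

contents = {"L0": {0x100}, "L1": {0x200}}
print(lookup(0x100, contents))  # ('L0', 4): the fast local dual-CU cache
print(lookup(0x300, contents))  # ('VRAM', 256): miss everywhere

The point of a small, close L0 is simply that the common case returns in the first step of that walk.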
 
Fair enough; thanks for pushing back on that topic. I just can't shake the feeling that GCN is still lurking under RDNA, but then again you can't really re-work something from scratch and have zero remnants of previous implementations or IP. I'll need to read more on RDNA, as my first (quick) read gave me the impression it was GCN with a few tweaks instead of a proper ground-up re-work.

Cheers!
 
You can think of it this way:

There's GCN the architecture and GCN the instruction set.

RDNA is the physical architecture while using the GCN VLIW instruction sets.

Why would you want to do this? It allows you to program the same command set without optimizing for the specific architecture. This speeds up development time. It means no special recompiles for Vulkan APIs.

That said, as time progresses you can expose the RDNA-specific instructions for things like DX12, which are transparent to most programmers thanks to the DX12 API (as non-proprietary extensions).

This could mean DX12 titles will get faster with time. But I can't guarantee that.
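A rough Python sketch of the ISA-vs-microarchitecture idea being described here; the instruction names and both "back ends" are invented for illustration:

Code:
# Same instruction stream, two different "back ends": the ISA stays stable
# while the microarchitecture underneath changes. All names are invented.
program = [("ADD", 1, 2), ("MUL", 3, 4)]  # one shared instruction set

def run_old_uarch(prog):
    # old design: execute instructions strictly in order
    return [a + b if op == "ADD" else a * b for op, a, b in prog]

def run_new_uarch(prog):
    # new design: same ISA, different internal scheduling (trivially
    # reordered here); results are identical, so nothing needs a recompile
    results = {}
    for i, (op, a, b) in sorted(enumerate(prog), reverse=True):
        results[i] = a + b if op == "ADD" else a * b
    return [results[i] for i in range(len(prog))]

assert run_old_uarch(program) == run_new_uarch(program)  # both give [3, 12]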

That said, it's still a poor value next to a 2070 once that hits $450 to $475.
 
RDNA is the physical architecture while using the GCN VLIW instruction sets

Are you sure about this? Because I think GCN replaced VLIW with a traditional SIMD vector processor. The complex nature of VLIW also made it harder to disassemble and debug.

There were real differences between the architectures. VLIW was poor for GPU computing purposes, unlike the non-VLIW SIMD. The principal issue was that VLIW had to be scheduled ahead of time, with no dynamic scheduling during execution. VLIW was all about extracting instruction-level parallelism (ILP), while the non-VLIW SIMD was about thread-level parallelism (TLP).

Here's an example code snippet from the VLIW compiler, as well as the GCN equivalent. You can see there are some restrictions under VLIW (assuming you can follow the code), which is why AMD dropped VLIW.

IMO, in 2021 we might see a completely new arch, rumored as ARCTURUS (most probably on VLIW2, or as AMD calls it, Super-SIMD). This is where things might change for AMD.

Code:
// VLIW (TeraScale)
// Register r0 contains "a", r1 contains "b"
// Value is returned in r2
00   ALU_PUSH_BEFORE
       1  x: PREDGT     ____, R0.x,  R1.x
             UPDATE_EXEC_MASK UPDATE PRED
01 JUMP   ADDR(3)
02 ALU
       2  x: SUB        ____, R0.x,  R1.x
       3  x: MUL_e      R2.x, PV2.x, R0.x
03 ELSE POP_CNT(1) ADDR(5)
04 ALU_POP_AFTER
       4  x: SUB        ____, R1.x,  R0.x
       5  x: MUL_e      R2.x, PV4.x, R1.x
05 POP(1) ADDR(6)

// Non-VLIW SIMD (GCN)
// Register r0 contains "a", r1 contains "b"
// Value is returned in r2
v_cmp_gt_f32       r0,r1          //a > b, establish VCC
s_mov_b64          s0,exec        //Save current exec mask
s_and_b64          exec,vcc,exec  //Do "if"
s_cbranch_vccz     label0         //Branch if all lanes fail
v_sub_f32          r2,r0,r1       //result = a - b
v_mul_f32          r2,r2,r0       //result = result * a

label0:
s_andn2_b64        exec,s0,exec   //Do "else" (s0 & !exec)
s_cbranch_execz    label1         //Branch if all lanes fail
v_sub_f32          r2,r1,r0       //result = b - a
v_mul_f32          r2,r2,r1       //result = result * b

label1:
s_mov_b64          exec,s0        //Restore exec mask
 

TJ Hooker

Titan
Ambassador
Are you sure about this? Because I think GCN replaced VLIW with a traditional SIMD vector processor
Yeah, AMD's previous TeraScale arch was VLIW; GCN is RISC.

But I have also read something similar to what @digitalgriffin is saying: that GCN is used by AMD to refer to both an instruction set architecture and a microarchitecture. So RDNA supposedly uses the same (or a similar) ISA, but a different uarch.
 
Uhm... two different things here that could be getting mixed up: instruction sets are completely independent of the underlying architecture. Something that is VLIW can perfectly decode any ISA you put on top of it (with some reservations, clearly). That being said, GCN is not a VLIW architecture, and that is not up for debate: the last VLIW GPUs AMD made were the HD6K series, with VLIW4 superseding VLIW5 from the HD5K series. Also, VLIW is very strict on how you use it, so @Metal Messiah. is absolutely correct there. RDNA is closer to what GCN is now than to what VLIW4 used to be (from what I could read) and doesn't do much more than expose the same ISA as GCN plus a few extra tidbits (much like x86 slaps on more instructions with new CPU designs).

Cheers!
 

King_V

Illustrious
Ambassador
That said, it's still a poor value next to a 2070 once that hits $450 to $475.

Checking all the RTX 2070 cards on PCPartPicker as of right now, there is exactly ONE model available for $449.99. The next cheapest is $469.99, and after that there are five $479.99 models, mostly the "low" models for each brand, one of which is an Aero cooler.

The 5700 XT performs slightly better than the 2070, albeit at a higher power consumption. How is this a poor value compared to the 2070?

It's also pretty likely that board partners will, like with almost every other card out there, Nvidia or AMD, have more expensive models and cheaper models, the latter of which will likely be below MSRP. This wouldn't be anything unusual.
 
Uhm... two different things here that could be getting mixed up: instruction sets are completely independent of the underlying architecture. Something that is VLIW can perfectly decode any ISA you put on top of it (with some reservations, clearly). That being said, GCN is not a VLIW architecture, and that is not up for debate: the last VLIW GPUs AMD made were the HD6K series, with VLIW4 superseding VLIW5 from the HD5K series. Also, VLIW is very strict on how you use it, so @Metal Messiah. is absolutely correct there. RDNA is closer to what GCN is now than to what VLIW4 used to be (from what I could read) and doesn't do much more than expose the same ISA as GCN plus a few extra tidbits (much like x86 slaps on more instructions with new CPU designs).

Cheers!

You guys are quite correct. I don't know where my head was this morning. Starting with GCN 1.0 (the 7000 series), it was a RISC-style SIMD design. I think I had the 6950/6970 stuck in my head for some reason, which were the last of the VLIW cards.
 
Checking all the RTX 2070 cards on PCPartPicker as of right now, there is exactly ONE model available for $449.99. The next cheapest is $469.99, and after that there are five $479.99 models, mostly the "low" models for each brand, one of which is an Aero cooler.

The 5700 XT performs slightly better than the 2070, albeit at a higher power consumption. How is this a poor value compared to the 2070?

It's also pretty likely that board partners will, like with almost every other card out there, Nvidia or AMD, have more expensive models and cheaper models, the latter of which will likely be below MSRP. This wouldn't be anything unusual.

If you do an apples-to-apples comparison when they meet price parity:

2070 NVIDIA Advantages:
Quieter operation
RTX (Which is of value to some)
Better performance in GameWorks applications / more partner-optimized games.
Likely better overclocking
Reputation/Mindshare (which is a 2#$%@#$% to fight)
Traditionally better VR support.
Less power*

5700 XT Advantages:
Image sharpening (which is just a post-process filter for edge enhancement). But I think this is a neutral, like DLSS: it introduces potential unwanted artifacts.
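For what it's worth, here's a minimal sketch of what an edge-enhancement post-process filter does: a generic unsharp mask in Python/NumPy. This is not AMD's actual sharpening algorithm, just the same family of filter, and it shows where halo/ringing artifacts can come from:

Code:
# Generic unsharp-mask sharpening (illustrative only; NOT AMD's actual
# algorithm, just the same family of post-process edge enhancement).
import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_mask(image, sigma=1.0, amount=0.5):
    """Sharpen by adding back the difference between the image and a blur."""
    blurred = gaussian_filter(image, sigma=sigma)
    detail = image - blurred             # high-frequency edges
    sharpened = image + amount * detail  # boosting edges can ring/halo
    return np.clip(sharpened, 0.0, 1.0)

frame = np.random.rand(4, 4)  # stand-in for a rendered frame
print(unsharp_mask(frame))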

I really don't like NVIDIA and all the @#$@# they pull. But my money goes where there is better value.

*Less power is a very small factor in the decision process. The total power cost really isn't the concern; it's a few bucks a year at most. But more power requires a more powerful PSU and creates more case heat. More case heat means higher-clocked fans. More fans equals more noise. So, all things being equal, you start going after the small details like power.
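To put a number on the "few bucks a year" part, a back-of-the-envelope Python check; the wattage delta, gaming hours, and electricity rate are all assumed figures for illustration:

Code:
# Back-of-the-envelope yearly cost of a GPU power-draw delta.
# All three inputs are assumptions, not measured values.
extra_watts = 50      # assumed extra draw vs. the competing card
hours_per_day = 2     # assumed daily gaming time
price_per_kwh = 0.13  # assumed electricity rate, $/kWh

kwh_per_year = extra_watts / 1000 * hours_per_day * 365
print(f"${kwh_per_year * price_per_kwh:.2f} per year")  # ~$4.75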

Is mindshare important? Have you ever gone to Walmart and seen their medicine labeled "Compare to ingredients in Nyquil Liquid Tabs"? Now, if they were priced the same, you would likely stick to the brand name. But because there is a discount there, a lot of people grab the store version.

In this case, with the RTX 2070 close to equal in performance to the 5700 XT, I would say most would pick up the brand name (NVIDIA) if priced equally. Like I said, "Mindshare is a @$#%!@#$!" AMD tried to win it before with value, but I think they went the wrong route. Unfortunately, marketing seems to be a weak point for AMD in this area.

One potential attack point is to show where "AMD is everywhere." They could make licensing agreements with Microsoft/Sony to show a "Powered By AMD Navi" boot screen every time the game console is turned on. It only needs to be a couple seconds. I mean how many of us remember "Microsoft Game Studios" every time a Microsoft game starts up? I can still hear that sound ringing in my head.

However, AMD's feet are in the cement. Getting prices lower at this point is a no-go.
 
Seems like I read somewhere that AMD no longer wants to be the "budget option". Either way, I think they should stop releasing the same performance for nearly (and sometimes exactly) the same price as Nvidia, almost a year later. That's not going to put them on top.

It will soon be 2 years since I bought my Nvidia GPU and AMD has yet to release something that can even match it in performance.
 

King_V

Illustrious
Ambassador
RX 5700 ($379) competes with the RTX 2060 ($349), and some 2060s are even selling for $319 right now. Just think how many more units AMD would sell if the XT were $379 and the non-XT $299.


Does it? Does the RX 5700 perform the same as the 2060? Or does it outperform it significantly?

Most of the talk has been about the 5700 XT outperforming the 2070, though at the moment all we have are AMD's benchmarks. I haven't even seen anything from AMD regarding the non-XT version.

I don't think, until we get some kind of benchmarking, that we can say "the RX 5700 competes with the RTX 2060." It may be true, in which case it's poorly priced, but it could well be that the 5700 is, say, halfway between the 2060 and 2070, and the XT version is a bit above the 2070 but below or maybe equal to the Radeon VII.
 

King_V

Illustrious
Ambassador
If you do an apples-to-apples comparison when they meet price parity:

2070 NVIDIA Advantages:
Quieter operation
RTX (Which is of value to some)
Better performance in GameWorks applications / more partner-optimized games.
Likely better overclocking
Reputation/Mindshare (which is a 2#$%@#$% to fight)
Traditionally better VR support.
Less power*

5700 XT Advantages:
Image sharpening (which is just a post-process filter for edge enhancement). But I think this is a neutral, like DLSS: it introduces potential unwanted artifacts.

I really don't like NVIDIA and all the @#$@# they pull. But my money goes where there is better value.

*Less power is a very small factor in the decision process. The total power cost really isn't the concern; it's a few bucks a year at most. But more power requires a more powerful PSU and creates more case heat. More case heat means higher-clocked fans. More fans equals more noise. So, all things being equal, you start going after the small details like power.

Is mindshare important? Have you ever gone to Walmart and seen their medicine labeled "Compare to ingredients in Nyquil Liquid Tabs"? Now, if they were priced the same, you would likely stick to the brand name. But because there is a discount there, a lot of people grab the store version.

In this case, with the RTX 2070 close to equal in performance to the 5700 XT, I would say most would pick up the brand name (NVIDIA) if priced equally. Like I said, "Mindshare is a @$#%!@#$!" AMD tried to win it before with value, but I think they went the wrong route. Unfortunately, marketing seems to be a weak point for AMD in this area.

One potential attack point is to show where "AMD is everywhere." They could make licensing agreements with Microsoft/Sony to show a "Powered By AMD Navi" boot screen every time the game console is turned on. It only needs to be a couple seconds. I mean how many of us remember "Microsoft Game Studios" every time a Microsoft game starts up? I can still hear that sound ringing in my head.

However, AMD's feet are in the cement. Getting prices lower at this point is a no-go.

I would go so far as to say that ray tracing is probably as small a factor as less power is. I'm not really so sure about reputation/mindshare, either. How big is the "top end" market for video cards, versus, say, the mainstream/1080p level, where AMD handily outdoes Nvidia in price/performance? Likewise, I think overclocking is a tiny niche, given how little extra performance can actually be eked out of it.

Quieter operation? Maybe. I know that would generally be an issue for me, though I went with the blower Founders Edition Nvidia 1080 card because, well, during the crypto madness, getting MSRP in those 5-minute windows when you could actually buy one was NOT something to pass on.

Until seeing your post, I had never heard of GameWorks, so I can't say how much effect that has.

These seem mostly like a lot of things that concern only a small percentage of gamers. Versus, say, "Does it do the job for my system?"

I really don't think AMD has to outdo the price/performance of Nvidia by an overwhelming amount. It's almost like people expect AMD to deliver an overwhelming price/performance advantage that Nvidia itself probably couldn't manage even if it wanted to.

The new cards, if AMD's numbers hold up, outdo Nvidia currently in terms of price/performance, but at the tradeoff of more power consumption. Whether that higher consumption is enough to require noisy cooling - well, that seems kind of unlikely to me given the TDP numbers. After all, how many people have complained about the extra noise of the 1080Ti, 2080, or 2080Ti?
 
Does it? Does the RX 5700 perform the same as the 2060? Or does it outperform it significantly?

Most of the talk has been about the 5700 XT outperforming the 2070, though at the moment all we have are AMD's benchmarks. I haven't even seen anything from AMD regarding the non-XT version.

I don't think, until we get some kind of benchmarking, that we can say "the RX 5700 competes with the RTX 2060." It may be true, in which case it's poorly priced, but it could well be that the 5700 is, say, halfway between the 2060 and 2070, and the XT version is a bit above the 2070 but below or maybe equal to the Radeon VII.

There's no other GPU to compete with there, except their own RX Vega 56/64. But I would still rather go with the RX 5700, even if it costs a little more than the RTX 2060, since it has 8GB of the same GDDR6. A lot of Nvidia cards lose their potency 1-3 years down the road, depending on the AAA game market, because they have too little VRAM. I like the fact that AMD puts more than enough VRAM on their cards to keep them relevant years later.
 