News: AMD Sneaks Out The Radeon RX 5300 For Casual 1080p Gaming

Technical competency of Tom's keeps falling.
The reason for only x8 lanes on PCIe 4.0 is so your GPU can use system RAM at twice the rate of 3.0 while saving the cost of a full x16 config and saving the cost of VRAM.

So yes, Tom's guy, it can make use of that extra bandwidth, as PCIe 4.0 x8 = PCIe 3.0 x16.

Place is becoming a joke, hence the vacant comment sections.
 

InvalidError

Titan
Moderator
So yes, Tom's guy, it can make use of that extra bandwidth, as PCIe 4.0 x8 = PCIe 3.0 x16.
It cannot make use of any "extra" bandwidth, since 4.0 x8 and 3.0 x16 are the SAME bandwidth; there is no extra there over older GPUs that had 3.0 x16.

Cost-wise, 3.0 x16 has been present on GPUs down to $50, so cost isn't really much of a factor. Looking at 4GB vs 8GB RX 5500 results, the more likely reason for AMD limiting entry-level GPUs to 4.0 x8 is to protect the marketability of the 8GB models, since the 4GB models would likely match 8GB performance on 4.0 x16 instead of being 50% as good on 3.0 x8 and ~80% as good on 4.0 x8.
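For reference, here is a quick sketch of the per-direction link bandwidths being argued about, using the standard per-lane rates and 128b/130b encoding (rough figures that ignore protocol overhead):

```python
# Rough one-way PCIe bandwidth per generation and lane count.
GT_PER_LANE = {3: 8.0, 4: 16.0}  # gigatransfers/s per lane for PCIe 3.0 and 4.0

def pcie_gb_s(gen: int, lanes: int) -> float:
    """Approximate usable bandwidth in GB/s (128b/130b encoding, no protocol overhead)."""
    return GT_PER_LANE[gen] * lanes * (128 / 130) / 8  # bits -> bytes

for gen, lanes in [(3, 8), (3, 16), (4, 8), (4, 16)]:
    print(f"PCIe {gen}.0 x{lanes}: ~{pcie_gb_s(gen, lanes):.1f} GB/s")
# PCIe 3.0 x8:  ~7.9 GB/s
# PCIe 3.0 x16: ~15.8 GB/s
# PCIe 4.0 x8:  ~15.8 GB/s  <- identical to 3.0 x16, which is the point both sides agree on
# PCIe 4.0 x16: ~31.5 GB/s
```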
 
Technical competency of Tom's keeps falling.
The reason for only x8 lanes on PCIe 4.0 is so your GPU can use system RAM at twice the rate of 3.0 while saving the cost of a full x16 config and saving the cost of VRAM.

So yes, Tom's guy, it can make use of that extra bandwidth, as PCIe 4.0 x8 = PCIe 3.0 x16.

Place is becoming a joke, hence the vacant comment sections.

System memory is orders of magnitude slower than VRAM, whether over PCIe 3.0 x16 or PCIe 4.0 x8.

Anyway, this card is a joke at that price. It's slower than an RX 570, has less memory, and costs more. It needs a $120 price point at most.
 

InvalidError

Titan
Moderator
System memory is orders of magnitude slower than VRAM, whether over PCIe 3.0 x16 or PCIe 4.0 x8.
System memory is ~50 GB/s vs. 100-1000 GB/s for VRAM: nowhere near orderS of magnitude, merely double at the low end and barely over an order of magnitude at the very top.
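For context, those numbers fall straight out of data rate times bus width. A rough sketch with example parts (the specific memory configurations below are illustrative picks, not anything measured in this thread):

```python
# Peak theoretical memory bandwidth = data rate per pin (MT/s) * bus width (bits) / 8.
def bw_gb_s(mt_s_per_pin: float, bus_bits: int) -> float:
    return mt_s_per_pin * bus_bits / 8 / 1000  # Mbit/s total -> GB/s

print(f"DDR4-3200, dual channel (128-bit): {bw_gb_s(3200, 128):.1f} GB/s")   # ~51 GB/s
print(f"GDDR5 at 8 Gbps, 128-bit:          {bw_gb_s(8000, 128):.0f} GB/s")   # 128 GB/s
print(f"GDDR6 at 14 Gbps, 256-bit:         {bw_gb_s(14000, 256):.0f} GB/s")  # 448 GB/s
print(f"HBM2 at 2 Gbps, 4096-bit:          {bw_gb_s(2000, 4096):.0f} GB/s")  # 1024 GB/s
```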

System memory does not need to be anywhere near as fast as VRAM to be useful; it only needs to be fast enough to handle less frequently used assets. Using system memory as a sort of victim cache for VRAM is actually orders of magnitude faster than waiting for software to reload assets from storage.
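A back-of-envelope comparison of the two recovery paths, using assumed round-number throughputs (none of these are measurements):

```python
# Time to get a 1 GB asset back into VRAM: a copy from system RAM over PCIe 4.0 x8
# versus re-reading it from storage. All rates are assumptions for illustration.
ASSET_GB = 1.0

PCIE4_X8_GB_S = 15.8   # ~GB/s over the x8 link to system RAM
SATA_SSD_GB_S = 0.5    # ~GB/s sequential read from a SATA SSD
HDD_GB_S      = 0.15   # ~GB/s sequential read from a hard drive

def ms(gb: float, gb_s: float) -> float:
    return gb / gb_s * 1000

print(f"From system RAM over 4.0 x8: ~{ms(ASSET_GB, PCIE4_X8_GB_S):.0f} ms")  # ~63 ms
print(f"Re-read from a SATA SSD:     ~{ms(ASSET_GB, SATA_SSD_GB_S):.0f} ms")  # ~2000 ms
print(f"Re-read from a hard drive:   ~{ms(ASSET_GB, HDD_GB_S):.0f} ms")       # ~6667 ms
```

And that is before the reload path pays for decompression and driver overhead, which only widens the gap.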
 
It cannot make use of any "extra" bandwidth, since 4.0 x8 and 3.0 x16 are the SAME bandwidth; there is no extra there over older GPUs that had 3.0 x16...

C'mon, mod, read what the article said and what I said together.

""...not that it will benefit from increased throughput since the graphics card is confined to a x8 connection. ""

So YES, it benefits from PCIe 4.0 because it is the same bandwidth as PCIe 3.0 x16, which I also said.

And you have no way of knowing what it actually costs AMD, the leader in PCIe 4.0, to put in an x8 vs. an older x16 connection, marketing purposes aside. However, I'd love to see an internal price sheet.
 
System memory is ~50 GB/s vs. 100-1000 GB/s for VRAM: nowhere near orderS of magnitude, merely double at the low end and barely over an order of magnitude at the very top.

System memory does not need to be anywhere near as fast as VRAM to be useful; it only needs to be fast enough to handle less frequently used assets. Using system memory as a sort of victim cache for VRAM is actually orders of magnitude faster than waiting for software to reload assets from storage.

I might be willing to argue this. It's not like you are going straight from the GPU to memory; you are going GPU -> CPU memory controller -> memory -> CPU memory controller -> GPU. Even with a heterogeneous memory architecture, that's a lot of overhead.

And 3GB was showing its age on my 7970 about 3-4 years back. There's the display buffer, draw buffer, draw call buffer, and Z-buffer; a lot of the required memory gets eaten up quickly. Then you need room for shader programs to be stored.

So your 50 GB/s theoretical throughput has a lot of latency associated with it, which ultimately affects speed.
 

InvalidError

Titan
Moderator
So your 50 GB/s theoretical throughput has a lot of latency associated with it, which ultimately affects speed.
Latency does not matter much on GPUs, since the thread wave scheduler usually knows what data a wave will work on and launches waves once that data has been prefetched as usual, regardless of where it comes from. As long as there isn't so much traffic to system memory that the GPU runs out of work it can do from VRAM before the system-memory parts complete, there should be little to no negative impact from using system RAM.
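A quick Little's-law sketch of why that works: the amount of data a GPU needs in flight to completely hide the link's latency is tiny by GPU standards (both numbers below are assumptions for illustration):

```python
# Little's law: in-flight bytes = bandwidth * latency.
link_gb_s  = 15.8   # ~GB/s over PCIe 4.0 x8 (assumed)
latency_us = 1.0    # ~1 microsecond round trip across the link (assumed)

inflight_kb = link_gb_s * 1e9 * (latency_us * 1e-6) / 1024
print(f"Outstanding data needed to keep the link saturated: ~{inflight_kb:.0f} KB")  # ~15 KB
```

A GPU juggling thousands of thread waves has no trouble keeping that much prefetch traffic outstanding, so the extra hop to system RAM mostly costs bandwidth, not stalls.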

If you look at the RX 5500's benchmarks, in VRAM-intensive places where the 8GB models can do 60-70 fps, the 4GB models on 3.0 x8 crash to 20-30 fps and bounce back to 50-60 fps on 4.0 x8. It clearly does not mind the higher latency of using system RAM over 4.0 x8 and greatly benefits from the extra ~16 GB/s it gets out of it. Had the 4GB RX 5500 had a full 4.0 x16, it would likely come out practically as good as the 8GB version for $40 less, at least as far as the current crop of VRAM-intensive games is concerned.
 

escksu

Reputable
BANNED
Technical competency of Tom's keeps falling.
The reason for only x8 lanes on PCIe 4.0 is so your GPU can use system RAM at twice the rate of 3.0 while saving the cost of a full x16 config and saving the cost of VRAM.

So yes, Tom's guy, it can make use of that extra bandwidth, as PCIe 4.0 x8 = PCIe 3.0 x16.

Place is becoming a joke, hence the vacant comment sections.

Yes, but given that the vast majority of desktops today support only PCIe 3.0, it's going to be a bummer.

And most PCIe 4.0 boards aren't exactly cheap.
 
Tests done on the 4GB RX 5500 showed a significant performance increase when using PCIe 4.0 over PCIe 3.0 - system RAM took up the slack, bringing the 4GB variant almost in line with the 8GB variant in VRAM-hungry games. So yes, THG missed an important point here.
Now, the main interest of using x8 PCIe 4.0 over x16 PCIe 3.0 is cost reduction for card traces and chip pinout - it reduces PCB complexity significantly, since you only need to trace half as many PCIe paths, and it reduces the pin count on the GPU chip itself. Insignificant on top-of-the-range cards, but on low-price, low-margin products like these it can represent significant savings (15 cents x 10 million units, you do the math).
Same thing for the reduced RAM bandwidth.
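Taking that parenthetical at face value, the scale of such a saving looks like this (both figures are the poster's hypotheticals, not actual BOM data):

```python
# Hypothetical per-unit saving scaled over a production run (figures from the post above).
saving_per_unit_usd = 0.15
units               = 10_000_000
print(f"Total saving: ${saving_per_unit_usd * units:,.0f}")  # $1,500,000
```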
 

escksu

Reputable
BANNED
Tests done on the 4GB RX 5500 showed a significant performance increase when using PCIe 4.0 over PCIe 3.0 - system RAM took up the slack, bringing the 4GB variant almost in line with the 8GB variant in VRAM-hungry games. So yes, THG missed an important point here.
Now, the main interest of using x8 PCIe 4.0 over x16 PCIe 3.0 is cost reduction for card traces and chip pinout - it reduces PCB complexity significantly, since you only need to trace half as many PCIe paths, and it reduces the pin count on the GPU chip itself. Insignificant on top-of-the-range cards, but on low-price, low-margin products like these it can represent significant savings (15 cents x 10 million units, you do the math).
Same thing for the reduced RAM bandwidth.

If the rendered image shown is the actual card, I have to say there is hardly any cost reduction for the PCB, because the connector is still x16. Since traces are made by etching, the unused ones would simply be etched away rather than saving anything.
 

InvalidError

Titan
Moderator
Now, the main interest of using x8 PCIe 4.0 over x16 PCIe 3.0 is cost reduction for card traces and chip pinout - it reduces PCB complexity significantly
The only things that have a meaningful impact on PCB design and manufacturing costs are cutting down size and layer count. Having a narrower PCIe bus won't magically reduce the layer count when the GPU still has its 128+ bit wide GDDR6 bus to fan out from under the BGA, which is more susceptible to noise and crosstalk than PCIe due to being single-ended. The size won't go down either, since the full x16 slot is still required for mechanical support. At most, you save a few minutes of CAD time connecting pins to pads and adding them to the PCIe autorouter constraints... though the more likely scenario is that the PCIe section got copy-pasted from an existing design and removing lanes is actually extra work. The only things you save manufacturing-wise are ~40 holes for the fan-out from under the BGA and 16 AC coupling caps on the PCIe lanes, a grand saving of about $0.01 total.

The biggest problem with the lack of x16 support is that most people who might consider an upgrade to a 4GB RX 5500 have older systems that don't have 4.0. The GTX 1650S holds up far better in scenarios that crush the 4GB RX 5500 and would be the better pick for pre-4.0 systems. It doesn't make much sense to feature yourself out of your most likely market to shave 1-2% off manufacturing costs.
 
If the rendered image shown is the actual card, I have to say there is hardly any cost reduction for the PCB, because the connector is still x16. Since traces are made by etching, the unused ones would simply be etched away rather than saving anything.
You may still have the copper on the PCB, but those traces would go nowhere - the pins could be left in for mechanical tolerance purposes, but the paths would be gone for a simplified PCB layout - and thus fewer rejects.
 
The only things that have a meaningful impact on PCB design and manufacturing costs are cutting down size and layer count. Having a narrower PCIe bus won't magically reduce the layer count when the GPU still has its 128+ bit wide GDDR6 bus to fan out from under the BGA, which is more susceptible to noise and crosstalk than PCIe due to being single-ended. The size won't go down either, since the full x16 slot is still required for mechanical support. At most, you save a few minutes of CAD time connecting pins to pads and adding them to the PCIe autorouter constraints... though the more likely scenario is that the PCIe section got copy-pasted from an existing design and removing lanes is actually extra work. The only things you save manufacturing-wise are ~40 holes for the fan-out from under the BGA and 16 AC coupling caps on the PCIe lanes, a grand saving of about $0.01 total.

The biggest problem with the lack of x16 support is that most people who might consider an upgrade to a 4GB RX 5500 have older systems that don't have 4.0. The GTX 1650S holds up far better in scenarios that crush the 4GB RX 5500 and would be the better pick for pre-4.0 systems. It doesn't make much sense to feature yourself out of your most likely market to shave 1-2% off manufacturing costs.
Fewer traces on a PCB mean a simpler mask and thus less chance of a failed PCB - fewer rejects, lower cost. It also makes perfect sense for OEM systems, which are brand new and where performance is whatever the manufacturer wants to make - on a B550-based system the card would work well, on an A520-based system it would still be cheap, and whoever was expecting stellar performance from such a potato would have been deluded anyway.
As for bus width, that chip has a 96-bit wide VRAM bus - that's MUCH simpler than a 256-bit wide bus and still simpler than a 128-bit wide one. Also, simpler means less chance of crosstalk.
That last point would, at worst, allow reusing some PCBs that failed validation as an RX 5500, and since it's a reduced version of an existing design, it would not require THAT much re-engineering to work.
Finally, why this card and not an RX 570? Simple - AMD may want to drop active support for Polaris one day. It's been out since 2016, and they might want to relegate Polaris to "legacy" support by 2025. Remember how much flak they got when they dropped R700 support from their drivers while they were still selling HD 42xx-based motherboards? Even though the last beta driver for that chip allowed Windows 10 to run (well) on a chip family that came out with Vista...
 

escksu

Reputable
BANNED
Fewer traces on a PCB mean a simpler mask and thus less chance of a failed PCB - fewer rejects, lower cost. It also makes perfect sense for OEM systems, which are brand new and where performance is whatever the manufacturer wants to make - on a B550-based system the card would work well, on an A520-based system it would still be cheap, and whoever was expecting stellar performance from such a potato would have been deluded anyway.
As for bus width, that chip has a 96-bit wide VRAM bus - that's MUCH simpler than a 256-bit wide bus and still simpler than a 128-bit wide one. Also, simpler means less chance of crosstalk.
That last point would, at worst, allow reusing some PCBs that failed validation as an RX 5500, and since it's a reduced version of an existing design, it would not require THAT much re-engineering to work.
Finally, why this card and not an RX 570? Simple - AMD may want to drop active support for Polaris one day. It's been out since 2016, and they might want to relegate Polaris to "legacy" support by 2025. Remember how much flak they got when they dropped R700 support from their drivers while they were still selling HD 42xx-based motherboards? Even though the last beta driver for that chip allowed Windows 10 to run (well) on a chip family that came out with Vista...

OK, that's possible... although the traces from the PCIe fingers to the GPU are rather short and there aren't many of them. Nothing compared to the memory traces.

But the main issue is the cost of PCIe 4.0 boards. They aren't exactly cheap. OK, B550 prices are OK now, but someone who has to settle for an RX 5300 obviously doesn't have a lot of budget, and in the budget segment $10-$20 does matter.
 

InvalidError

Titan
Moderator
Fewer traces on a PCB mean a simpler mask and thus less chance of a failed PCB - fewer rejects, lower cost.
The likelihood of the mostly straight, loosely spaced 3-4 cm traces from BGA to board edge being bad is much lower than for the 40 or so tightly packed, single-ended, delay-matched traces between each GDDR package and the GPU. As far as ordering PCBs is concerned, I have never seen "mask complexity" be a cost input. What drives up the cost is minimum feature size (smaller features are more likely to over-etch) and tighter tolerances. Once you need higher-precision processing for one feature (e.g., putting vias between 0.4mm-pitch BGA pads), you effectively get it for free for the rest of the board. You are far more likely to need to step up your PCB precision to route 12+ Gbps GDDR than PCIe.

Where crosstalk is concerned, look at the PCIe routing on a x16 card. For the most part, PCIe traces are so far apart due to the card edge connector being so sparse that crosstalk isn't much of a concern: they go straight for the BGA without any particular considerations beyond keeping pairs tight together.
 

King_V

Illustrious
Ambassador
A potential problem that I can see here for AMD is that Nvidia has moved the GTX 1650 over to GDDR6. I get the impression that they're not planning on making the GDDR5 variant anymore, given that the GDDR6 version is about the same price (and the Super only slightly more).

That change boosted the GTX 1650's memory bandwidth from 128 GB/s to 192 GB/s.

The RX 5300, at 168 GB/s, is already behind. The underlying Navi architecture might make up for that, but I'm not sure.

I'm definitely anticipating a side-by-side test. But if the target competition is the GDDR5 variant of the 1650, then AMD is dropping the ball here, unless its price majorly undercuts the 1650's.
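Those bandwidth figures check out against effective memory clock times bus width; here is a quick verification using the commonly reported data rates (treat the RX 5300's 14 Gbps as an assumption inferred from its 168 GB/s, 96-bit spec):

```python
# Memory bandwidth = effective data rate per pin (Gbps) * bus width (bits) / 8.
cards = {
    "GTX 1650 (GDDR5)": (8,  128),   # 8 Gbps, 128-bit
    "GTX 1650 (GDDR6)": (12, 128),   # 12 Gbps, 128-bit
    "RX 5300 (GDDR6)":  (14, 96),    # 14 Gbps, 96-bit (assumed, matching other Navi 14 cards)
}
for name, (gbps_per_pin, bus_bits) in cards.items():
    print(f"{name}: {gbps_per_pin * bus_bits / 8:.0f} GB/s")
# GTX 1650 (GDDR5): 128 GB/s
# GTX 1650 (GDDR6): 192 GB/s
# RX 5300 (GDDR6):  168 GB/s
```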
 

King_V

Illustrious
Ambassador
Such an unfair comparison. I'd imagine the RX 5300 getting destroyed, especially if tested on a PCIe 3.0 platform like an X470, B450, A520, or anything Intel on the desktop side so far, due to Nvidia having both a 1GB and a 3.0 x16 advantage.
My instinct agrees with you on this. Yet AMD trots out that particular graph on their page. It doesn't seem possible to me unless they're comparing it to the GDDR5 version.

The only thing I can think of is that somehow this is to the 5500 XT what the 5600 XT is to the 5700. I find that unlikely, though. The 5500 XT was a bit of a disappointment - and this is a cut down version of that.

Still, I'm curious to see what actual testing gives us. Further, I imagine that now there has to be two sets of tests - one on a PCIe 3.0 platform, and one on a PCIe 4.0 platform.
 

InvalidError

Titan
Moderator
My instinct agrees with you on this. Yet AMD trots out that particular graph on their page. It doesn't seem possible to me unless they're comparing it to the GDDR5 version.
The 4GB and 8GB RX 5500 look fine on 3.0 x8 until the 4GB one runs out of VRAM. I can imagine RX 5300 results looking decent as long as you carefully pick benchmarks and settings that avoid breaking 3GB of VRAM.
 

escksu

Reputable
BANNED
Such an unfair comparison. I'd imagine the RX 5300 getting destroyed, especially if tested on a PCIe 3.0 platform like an X470, B450, A520, or anything Intel on the desktop side so far, due to Nvidia having both a 1GB and a 3.0 x16 advantage.

Well, regarding this, I can only say AMD has only themselves to blame: 3GB of RAM and an x8 slot. The vast majority of AMD users aren't even on X570/B550, never mind the Intel ones...
 

escksu

Reputable
BANNED
Technical competency of Tom's keeps falling.
The reason for only x8 lanes on PCIe 4.0 is so your GPU can use system RAM at twice the rate of 3.0 while saving the cost of a full x16 config and saving the cost of VRAM.

So yes, Tom's guy, it can make use of that extra bandwidth, as PCIe 4.0 x8 = PCIe 3.0 x16.

Place is becoming a joke, hence the vacant comment sections.

How many people are using PCIe 4.0 today? And of those who are, how many will bother with an RX 5300, or will they prefer (and be able to afford) something better?