AMD recently added the Radeon RX 5300's product page to the company's website.
AMD Sneaks Out The Radeon RX 5300 For Casual 1080p Gaming : Read more
> So yes, Tom's guy, it can make use of that extra bandwidth, as PCIe 4.0 x8 = PCIe 3.0 x16.

It cannot make use of any "extra" bandwidth, since PCIe 4.0 x8 and PCIe 3.0 x16 are the SAME bandwidth; there is no extra there over older GPUs that had 3.0 x16.
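For reference, here is a minimal sketch of the per-lane arithmetic behind that statement (link rates and line encoding are from the PCIe specs; the helper function name is just for illustration):

    # Approximate usable bandwidth per PCIe lane, counting only 128b/130b line-encoding overhead.
    def lane_gbs(gt_per_s, encoding_efficiency=128 / 130):
        return gt_per_s * encoding_efficiency / 8  # GB/s per lane

    pcie3_lane = lane_gbs(8.0)    # ~0.985 GB/s per PCIe 3.0 lane
    pcie4_lane = lane_gbs(16.0)   # ~1.969 GB/s per PCIe 4.0 lane

    print(f"PCIe 3.0 x16: {pcie3_lane * 16:.1f} GB/s")  # ~15.8 GB/s
    print(f"PCIe 4.0 x8 : {pcie4_lane * 8:.1f} GB/s")   # ~15.8 GB/s, i.e. the same link bandwidth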
Technical competency of Tom's keeps falling.
The reason for only x8 lanes on PCIe 4.0 is so your GPU can use system RAM at twice the rate of 3.0, while saving the cost of a full x16 config and saving the cost of VRAM.
So yes, Tom's guy, it can make use of that extra bandwidth, as PCIe 4.0 x8 = PCIe 3.0 x16.
Place is becoming a joke, hence vacant comment sections.
> System memory is orders of magnitude slower than VRAM, PCIe 3 x16 or PCIe 4 x8.

System memory is ~50 GB/s vs 100-1000 GB/s for VRAM: nowhere near orderS of magnitude, merely double at the low end and barely over an order of magnitude at the very top.
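As a rough sanity check on those figures, here is a back-of-envelope comparison; the DDR4 and GDDR6 configurations below are common examples picked for illustration, not the specific cards discussed here:

    # Peak theoretical bandwidth = bus width (bytes) * data rate (GT/s), ignoring efficiency losses.
    def bandwidth_gbs(bus_bits, data_rate_gtps):
        return bus_bits / 8 * data_rate_gtps

    print(bandwidth_gbs(128, 3.2))    # dual-channel DDR4-3200 system RAM: ~51 GB/s
    print(bandwidth_gbs(128, 14.0))   # 128-bit GDDR6 @ 14 Gbps (entry-level card): ~224 GB/s
    print(bandwidth_gbs(384, 19.5))   # 384-bit GDDR6X-class config (high-end): ~936 GB/s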
> and you have no way of knowing what it actually costs AMD, the leader in PCIe 4.0, to put in an x8 vs an older x16 connection

I know AMD can put a 4.0 x16 controller in a $100 CPU, complete with CCD, IOD, a 15-layer CPU substrate, IHS and stock HSF. The PCIe 4.0 x16 interface cannot be a substantial chunk of that.
> So your 50GB theoretical throughput has a lot of latency associated with it, which ultimately affects speed.

System memory does not need to be anywhere near as fast as VRAM to be useful; it only needs to be fast enough to handle less frequently used assets. Using system memory as a sort of victim cache for VRAM is actually orders of magnitude faster than waiting for software to reload assets from storage.
Latency does not matter much on GPUs, since the thread-wave scheduler usually knows what data a wave will work on and will launch those waves once the data has been pre-fetched, as usual, regardless of where that data comes from. As long as there isn't so much stuff needing system memory that the GPU runs out of work it can do from VRAM faster than it can complete the parts that require system memory, there should be little to no negative impact from using system RAM.
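To put rough numbers on that argument, here is a toy estimate of how long it takes to bring a spilled asset back into VRAM from system RAM over the PCIe link versus reloading it from storage; all throughput figures are assumptions chosen for illustration, not measurements of this card:

    # Toy model: time to re-fetch a 256 MB asset into VRAM from different tiers.
    asset_mb = 256

    tiers_gbs = {
        "system RAM over PCIe 4.0 x8": 15.8,   # link-limited
        "NVMe SSD reload":              3.5,   # decent PCIe 3.0 NVMe drive
        "SATA SSD reload":              0.55,
    }

    for name, gbs in tiers_gbs.items():
        ms = asset_mb / 1024 / gbs * 1000
        print(f"{name}: ~{ms:.0f} ms")
    # The PCIe path comes out several times faster than an SSD reload, before even counting
    # file-system and decompression overhead, which is the point being made above.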
Tests done on the RX 5500 4GB showed a significant performance increase when using PCIe 4.0 over PCIe 3.0 - system RAM took up the slack, bringing the 4GB variant almost in line with the 8GB variant on VRAM-hungry games. So yes, THG missed an important point here.
Now, the main interest of using x8 PCIe 4.0 over x16 PCIe 3.0 is cost reduction for card traces and chip pinout - it reduces PCB complexity significantly, since you only need to trace half as many PCIe paths, and it reduces pins on the GPU chip itself. Insignificant on top-of-the-range cards, but on low-price, low-margin products like these, it can represent significant savings (15 cents x 10 million units, you do the math).
Same thing for the reduced RAM bandwidth.
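Taking that parenthetical at face value (the per-unit figure is the poster's own estimate, not a confirmed number), the math works out to:

    # Hypothetical saving quoted above: $0.15 per card across a 10-million-unit run.
    per_unit_saving = 0.15
    units = 10_000_000
    print(f"${per_unit_saving * units:,.0f}")  # $1,500,000 across the production run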
> Now, the main interest of using x8 PCIe 4.0 over x16 PCIe 3.0 is cost reduction for card traces and chip pinout - it reduces PCB complexity significantly

The only things that have a meaningful impact on PCB design and manufacturing costs are cutting down size and layer count. Having a narrower PCIe bus won't magically reduce layer count when a GPU still has its 128+bit wide GDDR6 bus to fan out from under the BGA, which is more susceptible to noise and crosstalk than PCIe due to being single-ended. The size won't go down either, since the full x16 slot is still required for mechanical support. At most, you save a few minutes of CAD time to connect pins to pads and add them to PCIe autorouter constraints... though the more likely scenario is that PCIe got copy-pasted from an existing design and removing lanes is actually extra work. The only thing you save manufacturing-wise is ~40 holes for the fan-out from under the BGA and 16 AC coupling caps on PCIe lanes, a grand saving of about $0.01 total.
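For what it's worth, the "16 caps" figure checks out, since each removed lane drops one differential transmit pair with two series capacitors; the unit price below is an assumed volume price for a small 0201 part, not a quoted figure:

    # Sanity check on the "16 AC coupling caps" claim above.
    lanes_removed = 8
    caps_removed = lanes_removed * 2          # 2 series caps per differential TX pair -> 16 caps
    cap_unit_cost = 0.0004                    # assumed volume price per cap, USD
    print(f"{caps_removed} caps, ~${caps_removed * cap_unit_cost:.4f} in parts")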
> If the rendered image shown is the actual card, I have to say there is hardly any cost reduction for the PCB. This is because the connector is still 16x. Since traces are etched, they will be removed instead.

You may still have the copper on the PCB, but those traces would go nowhere - the pins could be left in for mechanical tolerance purposes, but the paths would be gone for a simplified PCB layout, and thus fewer rejects.
The biggest problem with lack of x16 support is that most people who might consider an upgrade to a 4GB RX5500 have older systems that don't have 4.0. The GTX1650S holds up far better in scenarios that crush the 4GB RX5500 and would be the better pick for pre-4.0 systems. Doesn't make much sense to feature yourself out of your most likely market to shave 1-2% on manufacturing costs.
Fewer traces on a PCB means a simpler mask for path tracing and thus less chance of a failed PCB - fewer rejects, lower cost. It also makes perfect sense for OEM systems, which are brand new and where performance is whatever the manufacturer wants to make of it - on a B550-based system the card would work well, on an A520-based system it would still be cheap, and whoever was expecting stellar performance on such a potato would have been deluded anyway.
As for bus width, that chip has a 96-bit wide VRAM bus - that's MUCH simpler than a 256-bit wide bus and also still simpler than a 128-bit wide one. Also, simpler means less chance of parasitic crosstalk.
That last one would allow using, at worst, some PCBs that failed at running an RX 5500, and since it's a reduced version of an existing design, it would not require THAT much re-engineering to work.
Finally, why this card and not an RX 570? Simple - AMD may want to drop active support for Polaris one day - it's been out since 2016, and they might want to relegate Polaris to "legacy" support by 2025. Remember how much flak they got when they stopped R700 support in their drivers while they were still selling HD 42xx-based motherboards? Even though the last beta driver for that chip allowed Windows 10 to run (well) on a chip family that came out with Vista...
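On the "reduced RAM bandwidth" side of that trade-off, here is a quick sketch of what the narrower bus costs, assuming the same 14 Gbps GDDR6 as the RX 5500 XT (the data rate is an assumption, and the wider buses are shown only for comparison):

    # VRAM bandwidth scales linearly with bus width at a given GDDR6 data rate.
    def vram_gbs(bus_bits, gbps_per_pin=14.0):   # assumed 14 Gbps GDDR6
        return bus_bits / 8 * gbps_per_pin

    for bits in (96, 128, 256):
        print(f"{bits}-bit bus: {vram_gbs(bits):.0f} GB/s")   # 168 / 224 / 448 GB/s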
> Fewer traces on a PCB means a simpler mask for path tracing and thus less chance of a failed PCB - fewer rejects, lower cost.

The likelihood of mostly straight and loosely spaced 3-4 cm long traces from BGA to board edge being bad is much lower than that of the 40 or so tightly packed, single-ended, delay-matched traces between each GDDR package and the GPU. As far as ordering PCBs is concerned, I have never seen "mask complexity" be a cost input. What drives up the cost is minimum feature size (smaller features are more likely to over-etch) and tighter tolerances. Once you need higher-precision processing for one feature (ex.: putting vias between 0.4mm-pitch BGA pads), you effectively get it for free for the rest of the board. You are far more likely to need to step up your PCB precision to route 12+ Gbps GDDR than PCIe.
> I'm definitely anticipating seeing a side-by-side test.

Such an unfair comparison. I'd imagine the RX 5300 getting destroyed, especially if testing on a PCIe 3.0 platform like an X470, B450, A520 or anything Intel on the desktop side so far, due to Nvidia having both a 1GB and a 3.0 x16 advantage.
My instinct agrees with you on this. Yet AMD trots out that particular graph on their page. It doesn't seem possible to me unless they're comparing it to the GDDR5 version.
The 4GB and 8GB RX 5500 look fine on 3.0 x8 until the 4GB one runs out of VRAM. I can imagine RX 5300 results looking decent as long as you carefully pick the benchmarks and settings to avoid breaking 3GB of VRAM.