News: Is AMD's Radeon RX 5500 XT Hamstrung by VRAM and PCIe Bandwidth?

InvalidError

Titan
Moderator
Most reviews that I have watched have said to skip the 8GB version, so I'm a bit confused as to why it keeps getting recommended.
Most sites recommend the 4GB model over the 8GB one simply because, by the time most games run out of VRAM on 4GB, the RX 5500's 1% lows are already well under 60fps (starting to get noticeably stuttery) and you will probably want to reduce details anyway.
 
Have there been other GPUs that are wired as x8? Or is this the beginning of a new trend?

Yes, there have been, but they've always been very low-tier cards below $100, like Nvidia's GT cards.

This is a very strange move for AMD on a card that is an actual gaming card. It doesn't make much sense in my opinion, for two reasons: (A) what Tom's Hardware showed about the extra bandwidth adding more horsepower in games, and (B) most 5500 XTs are going to be slotted into budget PCIe 3.0 boards, not 4.0 boards, which means 3.0 x8 bandwidth is the most you'll get from these cards instead of 3.0 x16.
 

InvalidError

Titan
Moderator
Most 5500 XTs are going to be slotted into budget PCIe 3.0 boards, not 4.0 boards, which means 3.0 x8 bandwidth is the most you'll get from these cards instead of 3.0 x16.
Even if the GPU had 4.0 x16, that's still only 32GB/s peak at 200+ns of latency for system-RAM overflow vs 224GB/s at 60ns worst-case latency for VRAM; performance would still suffer horribly once you exceed the 4GB on board. The only real option is to reduce details so the game fits comfortably within 4GB of VRAM, assuming you didn't already have to reduce details just to hit a steady 60+fps before hitting 100% shader load, regardless of VRAM usage.
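To put those link speeds side by side, here is a rough back-of-the-envelope sketch in Python; the per-lane rates are standard PCIe 3.0/4.0 payload figures, and the 224GB/s is the RX 5500 XT's published memory bandwidth:

# Rough comparison of peak PCIe link bandwidth vs the card's own VRAM.
# Per-lane payload rates in GB/s, after link encoding overhead.
PCIE_GBPS_PER_LANE = {"3.0": 0.985, "4.0": 1.969}
VRAM_GBPS = 224  # RX 5500 XT GDDR6: 14Gbps x 128-bit bus

def link_bandwidth(gen: str, lanes: int) -> float:
    """Peak one-way bandwidth of a PCIe link in GB/s."""
    return PCIE_GBPS_PER_LANE[gen] * lanes

for gen, lanes in [("3.0", 8), ("3.0", 16), ("4.0", 8), ("4.0", 16)]:
    bw = link_bandwidth(gen, lanes)
    print(f"PCIe {gen} x{lanes:2d}: {bw:5.1f} GB/s ({VRAM_GBPS / bw:4.1f}x slower than VRAM)")

Even the best case there, 4.0 x16 at roughly 31.5GB/s, is about 7x slower than the card's VRAM, and that's before the latency penalty is even counted.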
 

rostrow416

Distinguished
Chances are, if you are shelling out for an X570 board and a 3rd-gen Ryzen CPU, this GPU probably wouldn't be high up on your list of choices. It's more likely to be an upgrade for older or budget systems, which will be PCIe 3.0 anyway.
 

larkspur

Distinguished
Wow... this is disturbing! I likely won't be on PCIe 4.0 for a while and expect my GAMING GPU to use the 16 lanes of PCIe 3.0 that I give it. Has this been fully verified, Joe? I know the 5500 XT is lower-tier by today's standards, but it is marketed as a value GAMING card... what's going on here? Does AMD expect me to pick up a new mobo/CPU just to FULLY utilize its newest card?

InvalidError - Yeah, I guess going with the overpriced 8GB version makes even more sense for those of us still using PCIe 3.0... But more and more, I honestly think I may go Nvidia this time... this article has me thinking that AMD might have really messed this up :( Can anyone confirm this whole wired-for-only-8-lanes thing?

Just to sum up the crux of my concern: when you WIRE a card for x8 PCIe 4.0 and plug it into a PCIe 3.0 system, it can still only use 8 lanes of PCIe 3.0, even if 16 lanes are available. Is that correct?
 

TJ Hooker

Titan
Ambassador
InvalidError - Yeah, I guess going with the overpriced 8GB version makes even more sense for those of us still using PCIe 3.0... But more and more, I honestly think I may go Nvidia this time... this article has me thinking that AMD might have really messed this up :( Can anyone confirm this whole wired-for-only-8-lanes thing?
If you're running out of VRAM, your performance is going to tank regardless of whether it's running PCIe 3.0 or 4.0. Take a look at the results: the 4GB card may run better on PCIe 4.0, but it's still much worse than the 8GB card. Plus, cranking up your settings to the point where you need >4GB of VRAM is probably going to result in poor performance regardless, due to the GPU itself.

Yes, the card only has an x8 interface. I do think it would have been better for this card to have an x16 interface, but I don't think it's going to be a make-or-break thing in real-world usage.
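For anyone wanting the negotiation rule spelled out: both ends of the link settle on the lowest common generation and the narrowest common width. A minimal sketch of that rule (the negotiated_link helper is hypothetical, purely illustrative):

# PCIe link training in a nutshell: the negotiated link is the lowest
# common generation at the narrowest common width (illustrative model).
PCIE_GBPS_PER_LANE = {1: 0.25, 2: 0.5, 3: 0.985, 4: 1.969}

def negotiated_link(card_gen, card_lanes, slot_gen, slot_lanes):
    gen = min(card_gen, slot_gen)
    lanes = min(card_lanes, slot_lanes)
    return gen, lanes, round(PCIE_GBPS_PER_LANE[gen] * lanes, 1)

# RX 5500 XT (4.0 x8) in a PCIe 3.0 x16 slot: the extra 8 slot lanes sit
# unused because the card never wired them, and the slot caps the link at gen 3.
print(negotiated_link(4, 8, 3, 16))  # -> (3, 8, 7.9), i.e. ~7.9 GB/s
# The same card in a PCIe 4.0 slot doubles the per-lane rate:
print(negotiated_link(4, 8, 4, 16))  # -> (4, 8, 15.8), i.e. ~15.8 GB/s

So yes: in a 3.0 board the card tops out at 3.0 x8, half the bandwidth an x16-wired card would get from the same slot.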
 

larkspur

Distinguished
If you're running out of VRAM, your performance is going to tank regardless of whether it's running PCIe 3.0 or 4.0. Take a look at the results: the 4GB card may run better on PCIe 4.0, but it's still much worse than the 8GB card. Plus, cranking up your settings to the point where you need >4GB of VRAM is probably going to result in poor performance regardless, due to the GPU itself.
Heh, well, I'm in the market for a new card and clearly this isn't it. It reminds me of the 3.5GB GTX 970 fiasco, where Nvidia should have just been up front about it instead of everyone finding out only after some clever people figured it out. And yes - the card's performance in the reviews didn't change. But Nvidia tried to sneak it under the radar, like apparently AMD is doing with this card and its sub-optimal x8 PCIe wiring (I assume we'll hear a response soon).

I don't speak German, but that article makes it pretty clear that there are significant differences when running PCIe 3.0 with both versions of the 5500 XT - magnified especially with the 4GB version (which all kinds of value-seekers running PCIe 3.0 will buy)... This decision to go with x8 lanes seems absolutely ridiculous... it hurts the value-seekers that would buy this card and use it on a PCIe 3.0 system... I mean, come on, do they actually expect most people in this price range to be using the newest Ryzen/Threadripper MOTHERBOARDS? That's why I'm wondering if this whole thing has been tested extensively and verified, and applies to all of the 5500 XTs. I just don't see AMD screwing up this card this badly...
 

InvalidError

Titan
Moderator
This decision to go with x8 lanes seems absolutely ridiculous... it hurts the value-seekers that would buy this card and use it on a PCIe 3.0 system...
It won't make any difference for most people: the 8GB RX 5500 cannot sustain 60+fps with details close to maxed out, and once you start turning down details to get more uniform sustained frame rates, you won't need more than 4GB in most cases anymore. At that point, 3.0 x8 is still perfectly fine.
 

TJ Hooker

Titan
Ambassador
Heh, well, I'm in the market for a new card and clearly this isn't it. It reminds me of the 3.5GB GTX 970 fiasco, where Nvidia should have just been up front about it instead of everyone finding out only after some clever people figured it out. And yes - the card's performance in the reviews didn't change. But Nvidia tried to sneak it under the radar, like apparently AMD is doing with this card and its sub-optimal x8 PCIe wiring (I assume we'll hear a response soon).
Idk, AMD doesn't actually claim anywhere that the card is PCIe x16 as far as I can tell.
 

larkspur

Distinguished
It won't make any difference for most people: the 8GB RX 5500 cannot sustain 60+fps with details close to maxed out, and once you start turning down details to get more uniform sustained frame rates, you won't need more than 4GB in most cases anymore. At that point, 3.0 x8 is still perfectly fine.
"Most people" are not enthusiasts and will never figure out that a year later, their card's performance is being hampered by the fact that this card was intentionally weakened by somebody trying to save a few pennies... If these results are valid then obviously someone knew that x8 would be less than ideal on PCI 3.0 and I'm not talking about 2% or something reasonable - these results are serious. The admin comment states clearly:
A German site tested the AMD Radeon RX 5500 XT using PCIe 3.0 and 4.0 and found the 4GB version ran much faster with 4.0
Look at the site. It did run much faster - I'm not trying to argue semantics. My point is that (assuming this whole thing is true) someone made a bad technology marketing decision and should be called out... Make sure folks know that the 5500 XT 4GB runs "much faster" on PCIe 4.0 than on PCIe 3.0 because it's wired for only 8 lanes... At least when AMD and Nvidia went PCIe 3.0 they didn't hamper any of their gaming cards by wiring them only x8 - or maybe they did? This could get interesting... eh??? :) Again guys, I'll believe this about ALL 5500 XTs if I get some confirmation from somewhere else...
 

InvalidError

Titan
Moderator
"Most people" are not enthusiasts and will never figure out that a year later, their card's performance is being hampered by the fact that this card was intentionally weakened by somebody trying to save a few pennies.
Many games can't do 1080p60 Ultra on the 8GB RX 5500, so you already need to reduce graphics to raise the lows. A $30 difference isn't "pennies", and there are many games where the 8GB version performs exactly the same as the 4GB version once details are adjusted to achieve at least 60fps. If the two variants are going to perform practically the same after adjusting settings to something playable, then the ~$30 difference is a waste of money, unless one of the few games where the 8GB makes 60fps attainable at much higher details is a title you particularly care about. Me, I'd keep the $30 and turn down details some more where necessary.

As for whether all RX 5500s are only x8: the only cost difference between x8 and x16, from the board manufacturer's point of view, is a few minutes of extra one-time PCB layout work, so I'd say it's a pretty safe bet the extra PCIe lanes are physically missing from the GPU to reduce die size by a few square millimeters and TDP by a watt or two. AMD is the only one saving pennies there.
 

larkspur

Distinguished
No, you misunderstand my point... it isn't the 8GB vs the 4GB... it's the 4GB version that is truly crippled - the one that budget users on PCIe 3.0 will likely buy... the "pennies" I am referring to are exactly what you said:
the only cost difference between x8 and x16, from the board manufacturer's point of view, is a few minutes of extra one-time PCB layout work, so I'd say it's a pretty safe bet the extra PCIe lanes are physically missing from the GPU to reduce die size by a few square millimeters and TDP by a watt or two. AMD is the only one saving pennies there.
That is the travesty! Wire it as a 16-lane card and PCIe 3.0 won't be a problem - what, maybe a 2-5% difference between x16 PCIe 4.0 and x16 PCIe 3.0 on this lower-tier card? Instead we're looking at much more than that, and this is now, when games rarely need more than 4GB...

That is what I am saying. By selling an x8-WIRED card, the manufacturer is saving pennies while the poor, uneducated user is losing significant FPS when running the 4GB card and trying to get the most out of it... look at the German article that this article is using as a source - the FPS loss of the 4GB card when running in a PCIe 3.0 system at higher settings is very significant - worth far more than pennies! That is really disappointing... And don't tell me that future games won't be more VRAM-hungry... this delta will only increase... wire it as an x16 card and the delta won't be as much... for pennies... such a shame!
 

InvalidError

Titan
Moderator
That is the travesty! Wire it as a 16-lane card and PCIe 3.0 won't be a problem - what, maybe a 2-5% difference if properly laid out as an x16 card?
There is no "wiring as x16" when the silicon itself only has x8 built into it. The PCBs are only x8 because the GPU die itself only supports x8.

AMD needs reasons for people to pay $60-100 extra for the upcoming RX 5600.
 
Me, I'd keep the $30 and turn down details some more where necessary.
Me, I would spend a little more for a faster GTX 1660 or 1660 SUPER, or save an extra $10 by getting a relatively similar-performing GTX 1650 SUPER, which doesn't suffer from an x8 interface causing performance anomalies in certain titles. Or one of the RX 500-series cards offering similar performance in the sub-$200 price range, albeit with higher power draw and heat output.

That's the main problem here. AMD has priced these cards slightly worse than the competition, and the performance gains are minimal even compared to similarly priced cards they were offering more than three years ago. The only notable improvement has been to efficiency, but even that still isn't quite as good as the competition's. The RX 480 4GB was an excellent option when it launched for $200 back in mid-2016, and 8GB versions of the card could be found for around that price by early 2017. Getting only 10-15% more performance from an 8GB 5500 XT near the start of 2020 is decidedly underwhelming. Each of these cards should have been priced at least $20 lower at launch. If that's somehow not cost-effective, then they should probably have held off on the release and launched the 5600 cards first instead, though I have some doubts that those will be priced much more competitively.

AMD needs reasons for people to pay $60-100 extra for the upcoming RX 5600.
The real competition for those cards will be Nvidia's offerings, which have populated that price range for the better part of the last year. A card positioned directly between AMD's current offerings would likely land around the performance of a GTX 1660 Ti. Problem is, the 1660 SUPER now offers nearly that level of performance for $230, and I can't see AMD pricing their card just $30 more than the 8GB 5500 XT. So unless the 5600 surprises and offers substantially better performance than a 1660 Ti for around $250, it will likely end up as another underwhelming launch.
 
Let's not forget here, people: most of the 5500 XT reviews were done on PCIe 3.0 systems, and it roughly equals the GTX 1650 SUPER on that interface. This report is simply saying it can perform even better than that in edge cases (where VRAM is exceeded).
 

TJ Hooker

Titan
Ambassador
At least when AMD and Nvidia went PCIe 3.0 they didn't hamper any of their gaming cards by wiring them only x8 - or maybe they did? This could get interesting... eh??? :) Again guys, I'll believe this about ALL 5500 XTs if I get some confirmation from somewhere else...
AMD's RX 460/550/560 were all x8. Nvidia seems to only do it on their entry-level GT cards; the GT 1030 was even x4. It isn't really a secret or anything; you can look it up on Wikipedia.

The x8 link is baked into the silicon. It isn't going to change from one RX 5500 XT model to another.

But it would be nice if AMD would actually release some B550 boards, hopefully with PCIe 4.0 (at least for the primary slot), lowering the cost of entry for PCIe 4.0 for those who want it.
 

escksu

Reputable
Well, it doesn't make any difference... This card is not going to be installed in any X570 board (considering the cost of the board vs the card). If anyone can afford an X570 board, I am dead sure they will go for a better card.

Then, it's also tested at maximum details? Look at those frame rates. You think it's playable? Even if you get an increase, it's still unplayable. So yeah... pretty useless to anyone (unless it increases from 50 to 60, then it's good).
 

InvalidError

Titan
Moderator
Well, it doesn't make any difference... This card is not going to be installed in any X570 board (considering the cost of the board vs the card). If anyone can afford an X570 board, I am dead sure they will go for a better card.
Many 400-series AM4 boards from Asus and Gigabyte support PCIe 4.0 for the main x16 and NVMe slots, albeit without AMD's official blessing. B550 boards coming out a month or so from now should be able to do the same too.
 
AMD's RX 460/550/560 were all x8. Nvidia seems to only do it on their entry-level GT cards; the GT 1030 was even x4.
Cards like the RX 560 are pretty much entry-level too, with a $100 MSRP, whereas here we have an x8 connection sneaking into cards priced more in the mid-range. That wouldn't be as much of a problem if PCIe 4.0 were already readily available on the lower-end motherboards such a card might get paired with, but it's not. And even once it is, the card still won't be an ideal option for anyone upgrading an older system.

One interesting thing that's come from this, though, is a better look at how PCIe bandwidth can potentially affect graphics card performance. Prior investigations into this have shown that a PCIe 3.0 x8 or 2.0 x16 connection won't noticeably impact anything but the highest-end graphics cards, so it shouldn't actually be a significant problem for cards in the mid-range. Those tests are naturally done with cards that have plenty of VRAM, though. With the 5500 XT 4GB, we run into a situation where some of the most demanding games are beginning to exceed the card's VRAM, and when that happens, the limited bandwidth actually does become a notable concern.

Then, it's also tested at maximum details? Look at those frame rates. You think it's playable? Even if you get an increase, it's still unplayable. So yeah... pretty useless to anyone (unless it increases from 50 to 60, then it's good).
While that could be true for some games, they got very playable results at max settings in most of the games tested there, at least with the PCIe 4.0 connection. AC: Odyssey had the worst performance of the bunch, averaging 31fps with a minimum of 24 in their test run, which might seem quite low for a PC game but is actually similar to the framerates the game runs at on consoles. The consoles are limited to 30fps, with similar dips into the low 20s at times on the base-model PS4 and Xbox One (despite running at lower than 1080p resolution, with reduced settings). There are plenty of people who would be fine with a roughly 30fps experience like that in a non-competitive third-person game, willing to trade smoother performance for better visuals. On a PCIe 3.0 connection, however, the card only manages to average 25fps with a 16fps minimum, which would be considerably less playable. It's also possible to lower a number of settings in that game to improve framerates without substantially affecting visuals, but it will likely be necessary to lower texture quality and make everything a bit blurry to keep VRAM usage in check.

Battlefield V's average framerate drops from 63 down to 50 in their test, though in that case the 4GB card had some issues with minimums using either setup. Far Cry New Dawn, however, went from 75fps with a 58fps minimum down to 42fps with a 32fps minimum on PCIe 3.0, nearly halving performance in that game and taking it from very good performance nearly on par with the 8GB model to rather poor performance for a first-person game. And while the hit to performance was not as bad in Shadow of the Tomb Raider (at least in their test run), it still cut performance by more than 10%. Wolfenstein also saw a further halving of performance on the 4GB card using a 3.0 connection, but even on the faster connection, performance was still relatively poor with that amount of VRAM. And this is only going to get worse as VRAM requirements rise with the next generation of games; people may find they need to drop texture quality to blurry low settings just to keep the games playable on this card.
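For reference, working the percentage drops out of the framerates cited above (figures as quoted from the German test; the script only computes percent change):

# Percent change on the 4GB card going from PCIe 4.0 to PCIe 3.0,
# using the framerates quoted above from the German test.
results = {
    "AC: Odyssey (avg/min)":      ((31, 24), (25, 16)),
    "Battlefield V (avg)":        ((63,), (50,)),
    "Far Cry New Dawn (avg/min)": ((75, 58), (42, 32)),
}

for game, (pcie4, pcie3) in results.items():
    drops = [(a - b) / a * 100 for a, b in zip(pcie4, pcie3)]
    print(game, " / ".join(f"{d:.0f}% drop" for d in drops))

That works out to roughly a 19-33% drop in Odyssey, 21% in Battlefield V, and 44-45% in Far Cry New Dawn, which is where the "nearly halving" comes from.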

Another thing to point out, though, is that there's a smaller but noticeable impact on performance in these games even with the 8GB version of the card. Odyssey saw a 6% hit to averages with a 12% hit to minimums, Battlefield saw a 9% hit with 15% to minimums, Wolfenstein saw a 13% hit with 16% to minimums, and the other two saw a less significant hit of 1-2% with a 4-5% hit to minimums. The games might still be plenty playable even with those reductions in performance, but they would run a bit smoother without them. And for a card that comes off as having rather mediocre performance for the money, any improvement would have been welcome.
 

InvalidError

Titan
Moderator
With the 5500 XT 4GB, we run into a situation where some of the most demanding games are beginning to exceed the card's VRAM, and when that happens, the limited bandwidth actually does become a notable concern.
"Most demanding games (with everything maxed out)" haven't been associated with $200 GPUs since the 3dfx days.

The most suspicious thing here is that the 8GB RX 5500 suffers about as much from having its PCIe bandwidth halved as the RTX 2080 does. That appears to suggest the RX 5500 has a whole lot more PCIe chatter overhead: you wouldn't expect a GPU that pushes 1/3 as many frames to lose the same ~10% FPS while pushing the PCIe bus 1/3 as hard... unless the RX 5500 uses the PCIe bus 2-3x as much. There may be some significant PCIe bus usage deficiencies in the drivers here.
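A toy model of that argument, with every number an illustrative assumption rather than a measurement:

# Naive model: time spent transferring data over the bus eats into each
# frame's time budget. All figures below are illustrative assumptions.
def fps_loss_pct(fps, mb_per_frame, bus_gbps):
    """Percent of frame rate lost to per-frame PCIe transfers."""
    frame_time = 1.0 / fps                        # seconds per frame
    xfer_time = (mb_per_frame / 1024) / bus_gbps  # seconds on the bus per frame
    return 100 * xfer_time / (frame_time + xfer_time)

# Hypothetical fast card at ~90fps vs slow card at ~30fps, with the bus
# halved from 16 GB/s to 8 GB/s in every case.
for name, fps, mb in [("fast card", 90, 20),
                      ("slow card, same traffic", 30, 20),
                      ("slow card, 3x traffic", 30, 60)]:
    extra = fps_loss_pct(fps, mb, 8) - fps_loss_pct(fps, mb, 16)
    print(f"{name:24s} extra FPS loss when bus halved: {extra:4.1f}%")

With equal per-frame traffic, the 30fps card only loses about a third as much as the 90fps card when the bus is halved; only by roughly tripling its per-frame bus traffic does it lose the same share, which is the crux of the suspicion about driver overhead.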
 