AMD's Radeon RX 6600 XT Will Likely Be Capped At PCIe 4.0 x8

If the RX 6600 only gets an x8 interface, then the likely 4GB RX 6500 is in trouble, since the 4GB RX 5500 sorely needed its 4.0 x8 link to hold up decently against its 8GB counterpart.
The 4GB card had problems if its VRAM filled up; otherwise it was fine in most cases. It still makes me scratch my head why AMD is choosing to limit it to 8 lanes when PCIe 4.0 adoption isn't quite at critical mass.

How many extra power plugs will it have? I don't like double plugs that come with most GPUs nowadays.
Since the RX 6700 was a 230W card, I'd imagine at worst the RX 6600 will float around 180-200W, which would only require a single 8-pin plug.
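For reference, the PCIe CEM spec budgets 75 W from the slot, 75 W per 6-pin connector, and 150 W per 8-pin connector, so the single-8-pin math works out. A quick sketch:

```python
# Board power budget under the PCIe CEM spec limits:
# slot = 75 W, each 6-pin = 75 W, each 8-pin = 150 W.
SLOT_W, SIX_PIN_W, EIGHT_PIN_W = 75, 75, 150

def board_power_budget(eight_pins: int = 0, six_pins: int = 0) -> int:
    """Maximum board power (W) for a card with the given aux connectors."""
    return SLOT_W + eight_pins * EIGHT_PIN_W + six_pins * SIX_PIN_W

board_power_budget(eight_pins=1)  # 225 W -- plenty for a 180-200 W card
```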
 

escksu
The 4GB card had problems if its VRAM filled up; otherwise it was fine in most cases. It still makes me scratch my head why AMD is choosing to limit it to 8 lanes when PCIe 4.0 adoption isn't quite at critical mass.


Since the RX 6700 was a 230W card, I'd imagine at worst the RX 6600 will float around 180-200W, which would only require a single 8-pin plug.
It's to reduce cost and pin count. PCIe 4.0 x8 is pretty much the same bandwidth as PCIe 3.0 x16. With B550 becoming mainstream and 11th-gen Intel out now, most new builds will have PCIe 4.0 anyway.
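The bandwidth equivalence is easy to check with the commonly cited effective per-lane rates (after encoding overhead); a quick sketch:

```python
# Approximate effective throughput per lane, one direction, in GB/s,
# after 128b/130b encoding overhead (gen 3 and gen 4).
PER_LANE_GBPS = {"3.0": 0.985, "4.0": 1.969}

def link_bandwidth(gen: str, lanes: int) -> float:
    """Aggregate one-direction bandwidth of a PCIe link in GB/s."""
    return PER_LANE_GBPS[gen] * lanes

gen3_x16 = link_bandwidth("3.0", 16)  # ~15.76 GB/s
gen4_x8 = link_bandwidth("4.0", 8)    # ~15.75 GB/s -- effectively the same
gen3_x8 = link_bandwidth("3.0", 8)    # ~7.88 GB/s -- halved on a 3.0 board
```

The catch, of course, is the last line: an x8 card dropped into a PCIe 3.0 slot only gets half the link of a full x16 card.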
 

InvalidError
It's to reduce cost and pin count. PCIe 4.0 x8 is pretty much the same bandwidth as PCIe 3.0 x16. With B550 becoming mainstream and 11th-gen Intel out now, most new builds will have PCIe 4.0 anyway.
Having only x8 on the 4GB RX 5500 was a huge mistake: on a 3.0 x8 link it performed 30-40% slower than it could have, and with 4.0 x16 it could probably have closed the gap against the 8GB version even tighter. The RX 6600 is faster, which means it is going to suffer even more severe performance degradation if it ever runs out of VRAM. A 30-40% potential performance hit for going slightly over 8GB on a likely $300+ card is a steep price to pay to save maybe $2 on the assembled PCB.

Perhaps most importantly: this is hypothetically lined up to compete against the RTX 3060, which does have 4.0 x16 and 12GB of VRAM.
 
It's to reduce cost and pin count. PCIe 4.0 x8 is pretty much the same bandwidth as PCIe 3.0 x16. With B550 becoming mainstream and 11th-gen Intel out now, most new builds will have PCIe 4.0 anyway.
Unless the RX 6600 XT is also an x8 card, that argument goes right out the window. The RX 5600 was an x16-slot card with the pins populated (at least most AIBs kept all of them), so there was literally no cost saving, except maybe the minute amount saved on the PCB traces.
 

InvalidError
there was literally no cost saving, except maybe the minute amount saved on the PCB traces.
There is no cost difference from traces whatsoever, since traces are formed by etching copper away to isolate them from the formerly solid copper plane. The whole board gets covered in photoresist regardless of how many traces are on it, and the whole board gets an exposure mask too.

Well, I suppose there is an absolutely minute saving in not needing to test those extra traces, plus the $0.001 DC-blocking capacitors on data traces going one direction (forgot which), so maybe $0.02 saved there. Yay!
 
