Angstronomics reveals the potential specifications for AMD's looming RDNA 3 silicon.
AMD RDNA 3 GPU Specs: Up To 12,288 ALUs, 96MB Infinity Cache : Read more
As someone who was an early adopter of AMD's first MCM design, the Zen 1-based 1800X, and the huge dumpster fire it turned out to be, one that ended up costing me over $150 more thanks to having to buy a new Windows license and pay for RMA shipping, I wouldn't touch these cards if they cost $1.
Sorry for your bad experience, but it'll be your loss if they're good. Recent AMD products have much improved reliability compared to their Zen 1 stuff.
On another note, I'm a little bit sad that they couldn't ship with more cache, but I guess we'll see how well they perform soon.
Zen 1 and the 1800X were not MCM designs; they were monolithic. The multi-chip approach didn't make an appearance until the 3000 series.
As for Navi33, it looks like a straightforward upgrade from the current Navi23. I just hope they give the damn thing a full X16 width* instead of the crappy X8. I know it's intended for mobile first and PCIe 5 (supposedly?), but come on, AMD. Make a Navi34 use the X8 instead! I know, I know; it's completely moot at this point and more than likely X8, but I'll be preemptively salty about it anyway! xD
PCIe 5.0 x2. :D
I'm confused why you would want them to consume additional PCIe lanes when they can't saturate the ones they already have. And this is on machines with limited PCIe lanes.
Because people buying low-end cards are, more than likely, people upgrading old systems which may not even have PCIe 4.
As long as you keep your settings at a level where resources can stay in the GPU's VRAM, there is almost no difference between 3.0x8 and 4.0x16 until you reach 150-200fps, where the amount of scene-setup traffic and the associated latency start becoming an issue in some titles. It is far more problematic at the 4GB low end, where 4.0x4 vs 3.0x4 can be a 50-80% loss due to having to do asset swaps from system memory much sooner and more often.
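For a rough sense of the link speeds being compared in this thread, here is a quick back-of-the-envelope sketch (an illustrative calculation, assuming the nominal 128b/130b-encoded line rates of 8/16/32 GT/s per lane and ignoring packet and protocol overhead, so real-world figures are somewhat lower):

```python
# Approximate per-direction PCIe throughput in GB/s.
# Assumes 128b/130b encoding (Gen3 and newer) and ignores protocol overhead.
GT_PER_LANE = {3: 8, 4: 16, 5: 32}   # raw line rate per lane, GT/s
ENCODING = 128 / 130                  # 128b/130b payload efficiency

def pcie_gbps(gen: int, lanes: int) -> float:
    """Rough one-direction bandwidth of a PCIe link in GB/s."""
    return GT_PER_LANE[gen] * ENCODING / 8 * lanes

for gen, lanes in [(3, 4), (4, 4), (3, 8), (4, 16), (5, 2)]:
    print(f"PCIe {gen}.0 x{lanes}: ~{pcie_gbps(gen, lanes):.1f} GB/s each way")
```

That is roughly why 3.0x8, 4.0x4, and 5.0x2 all land around the same ~8 GB/s per direction, while 4.0x16 offers about four times that.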
Windows license? Did you also buy the WinRAR license? Who buys that? 🤭
According to Angstronomics, Navi 33 outperforms Intel's highest-tier Arc Alchemist offering at roughly half the production cost while also being more power efficient.
DOOM proves you wrong though. Even without saturating the VRAM, it depends a lot on PCIe bandwidth. I have no idea why, but it's the easiest counterexample I could think of.
Anything is going to "push PCIe bandwidth" when it runs at 200+fps, which I had already explicitly covered; read the second half of the first sentence in the post you quoted. No mystery there.
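To put a number on the frame-rate argument, a small illustrative sketch (my own arithmetic, using ~7.9 GB/s as a stand-in for a 3.0x8 or 4.0x4 class link and assuming the bus is the only limit) shows how the per-frame transfer budget shrinks as fps climbs:

```python
# Rough per-frame transfer budget over a fixed PCIe link.
# Illustrative only: assumes the full link is available to the game each frame
# and ignores transfer latency, which also matters at very high frame rates.
def mb_per_frame(link_gb_per_s: float, fps: float) -> float:
    """MB of bus traffic that fits into a single frame at a given frame rate."""
    return link_gb_per_s * 1000 / fps

for fps in (60, 144, 200, 300):
    budget = mb_per_frame(7.9, fps)   # ~7.9 GB/s is roughly PCIe 3.0 x8 (or 4.0 x4)
    print(f"{fps:>3} fps: ~{budget:.0f} MB per frame")
```

At 300fps the same link moves only about a fifth as much data per frame as it does at 60fps, so the same scene-setup traffic takes a proportionally bigger bite out of each frame.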
It's not as clear-cut as that, though... I think you're oversimplifying the problem, but I'll concede that for the specific case of X8 it may not be a big issue anyway.