News Intel Employee Crows About Acquiring a Retail Arc A380 GPU

The RX 6500 XT's pricing, given how cut-down it is, is an insult to intelligence and a testament to how screwed up the entry-level GPU market has become. Going 6GB on a 96-bit bus with all of the missing bits added back in would have increased the GPU's manufacturing cost by about $10 and made the $200 price tag far more palatable.
Not really. A larger design with more VRAM and a more complex PCB would increase cost by far more than just $10. The Navi 14-based Radeon RX 5300 entry-level card had only 3GB of VRAM, and former mainstream cards like the GTX 1060 had 3GB variants too, so 4GB is already a step up from the bare minimum. The 6500 XT is a good card for what it's meant to be; the criticism is simply exaggerated and unjustified. I had an RX 560 with 4GB of VRAM for some years and never felt VRAM-starved at FHD. It's not the entry-level GPU market that is screwed up. Pricing aside, it was often like this, and current pricing affects all segments. It's people's expectations of entry-level that are screwed up, thanks to absurd 400W+ (soon 600W+) high-end monsters.
 
Not really. A larger design with more VRAM and a more complex PCB would increase cost by far more than just $10.
What larger design? Widening the memory controller from 64 to 96 bits, going from PCIe x4 to x8, and putting media decode back in would add about 20mm². PCB routing is a non-issue: motherboard manufacturers manage to route two 64-bit-wide memory buses on $60 four-layer motherboards just fine, despite all the squiggling required to keep everything in time. GDDR6 uses two 16-bit channels per chip, which makes routing far easier since there are a quarter as many traces to length-match and they don't need to fan out across a huge slot either. The increased PCB cost would be negligible beyond the slightly increased routing effort to reduce dead space on the existing board footprint and squeeze in an extra chip.

$10 may have been a bit of a low-ball, so I'll bump that up to $15, mainly for the die space.
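To put numbers on the bus-width argument: peak GDDR6 bandwidth is just bus width times per-pin data rate. A quick sketch, assuming 18Gbps GDDR6 like the 6500 XT ships with (the exact pin rate is my assumption, not a figure from this thread):

```python
# Peak GDDR6 bandwidth = (bus width in bits / 8) * per-pin rate in Gbps.
# 18 Gbps per pin is an assumed rate matching the 6500 XT's memory.

def gddr6_bandwidth(bus_width_bits: int, pin_rate_gbps: float) -> float:
    """Peak bandwidth in GB/s."""
    return bus_width_bits / 8 * pin_rate_gbps

narrow = gddr6_bandwidth(64, 18.0)  # current 2-chip, 64-bit config
wide = gddr6_bandwidth(96, 18.0)    # proposed 3-chip, 96-bit config
print(f"64-bit: {narrow:.0f} GB/s, 96-bit: {wide:.0f} GB/s "
      f"(+{(wide / narrow - 1) * 100:.0f}%)")
# -> 64-bit: 144 GB/s, 96-bit: 216 GB/s (+50%)
```

Same 50% uplift as the VRAM capacity, since both scale with the chip count.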
 
What larger design? Widening the memory controller from 64 to 96 bits, going from PCIe x4 to x8, and putting media decode back in would add about 20mm².
That would be ~20% more just for the chip (maybe even more once yields are taken into account). That might not seem like much to YOU, but for companies that have to care about costs and margins it's quite a big difference. Including the PCB and the VRAM, I'd say we're looking at around $25 higher cost overall, which means you have to price it closer to $250 than $200. That doesn't sound attractive for entry-level.

On the other hand, you have to ask whether 6GB of VRAM would make the card faster at entry-level gaming. Not really; the difference is negligible.

You talk about missing decode, which in fact isn't true. Apart from the still not very widely supported AV1 codec, it decodes the most important formats like AVC and HEVC. What it mainly lacks is encode, which again raises the question: is that what entry-level buyers are asking for? Not really. If you want the best results, nothing beats CPU encoding; everyone else looks at hardware encode mainly for streaming, and gamers who stream wouldn't consider entry-level cards at all. If you need streaming for your business, you usually use mobile devices with iGPUs.
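The jump from "$25 more cost" to "price it closer to $250" implicitly assumes the BOM delta roughly doubles on its way to the shelf. A sketch of that implicit step, where the 2x multiplier is a common rule of thumb and an assumption on my part, not a figure from this thread:

```python
# Assumed ~2x pass-through from manufacturing-cost delta to retail price;
# the $25 delta and $200 baseline are the figures argued above.

bom_delta = 25.0          # claimed extra manufacturing cost, $
retail_multiplier = 2.0   # assumed BOM-to-shelf pass-through (rule of thumb)
baseline_msrp = 200.0

retail_delta = bom_delta * retail_multiplier
print(f"~${retail_delta:.0f} at retail -> ~${baseline_msrp + retail_delta:.0f} card")
# -> ~$50 at retail -> ~$250 card
```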

With your "upgraded" 6500 XT, AMD would perhaps make 10% of potential buyers happier, but the other ~90% don't care, or would even be less happy because they'd pay more for about the same fps. So is the "upgrade" and its higher cost worth it for AMD? In my opinion, definitely not. It's not as if they have no other offerings: if you want more VRAM, AV1 and encode, look above entry-level. It's called product segmentation. 😉 For example, get the 6600 series. That's what I did, but not because I needed the mentioned features; I simply wanted a worthwhile increase in average fps and power efficiency over my previous RX 560. Those are still the most important reasons for most people to upgrade, not any single feature.

It's naive to expect entry-level products to get the same features as high-end products. This only shows how far things have gone and why even reviewers sometimes have completely unrealistic expectations. The RX 6500 XT isn't a bad card at all, despite the missing features; from an economic point of view it's actually a really good product. I said that when it launched, and cards like Nvidia's GTX 1630 or Intel's A380 just prove it.
 
That would be ~20% more just for the chip (maybe even more once yields are taken into account). That might not seem like much to YOU, but for companies that have to care about costs and margins it's quite a big difference. Including the PCB and the VRAM, I'd say we're looking at around $25 higher cost overall.

With your "upgraded" 6500 XT, AMD would perhaps make 10% of potential buyers happier.
20% on top of $18 is only $3.60, and a single GDDR6 chip is ~$8. The PCB stays exactly the same size and layer count, since there is plenty of space to fit a third chip without its trace routing interfering with anything; that's near-zero cost beyond slightly more placement and routing effort. We used to have $150-220 MSRP GPUs (RX 470-590) with 256-bit memory buses and full x16 PCIe; I'm only asking for 96 bits and x8 here, nothing groundbreaking. The manufacturing cost of the RX 6500 XT as it currently stands is about $110, leaving plenty of gross margin between that and the $200 MSRP to just eat the $12-15 and still make a decent profit. Intel is likely still making a slim profit on the A380 at its ~$140 MSRP, and the A380 has everything the RX 6500 XT lacks, including tensor cores; its only problems are driver maturity and being ~40% behind on raw raster power.
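For what it's worth, the figures above can be sanity-checked in a few lines. Every input below is an estimate from this thread itself, not AMD's actual BOM:

```python
# All inputs are the thread's own estimates, not a real bill of materials.
die_cost = 18.0      # assumed Navi 24 die cost, $
die_growth = 0.20    # ~20% larger die for the wider bus, x8 PCIe, decode
gddr6_chip = 8.0     # one additional GDDR6 chip, $
mfg_cost = 110.0     # estimated current card manufacturing cost, $
msrp = 200.0

extra = die_cost * die_growth + gddr6_chip  # PCB change treated as ~free
margin_now = (msrp - mfg_cost) / msrp
margin_after = (msrp - mfg_cost - extra) / msrp
print(f"extra cost ~${extra:.2f}, gross margin {margin_now:.0%} -> {margin_after:.0%}")
# -> extra cost ~$11.60, gross margin 45% -> 39%
```

So under these assumptions the "upgrade" lands in the quoted $12-15 range and trims, rather than erases, the margin.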

I bet a lot more than 10% of people would notice the benefit of 50% more VRAM and memory bandwidth, since it would eliminate the card's worst performance issues and let people bump texture resolution up a notch without worrying about blowing straight through 4GB.

BTW, AMD itself once said 4GB GPUs were stupid (much as Nvidia hyped RTX before launching the RT-less GTX 16xx series) and had to take that post down when it announced the RX 6500 XT. They should have followed their own advice and not pitched anything with less than 6GB of VRAM as a gaming product, and the same goes for Nvidia.