The Moore Threads MTT S80 graphics card, based on the Chunxiao architecture, is now available.
Chinese GPU Dev Starts Global Sales of $245 RTX 3060 Ti Rival : Read more
"Interestingly, the MTT S80 is the world's first client graphics card with a PCIe 5.0 x16 interface."
Weird. And completely unnecessary. For 3060 Ti-level performance, PCIe 3.0 x16 should be perfectly fine.
"Moore Threads designs GPUs for many applications (except HPC) and seems intent on selling its products outside of China."
If they try to establish a real market presence in western countries, that means building a distribution channel and those distributors will be vulnerable to patent infringement claims by existing GPU players (Nvidia, AMD, Intel, Imagination, and ARM). That seems likely to limit their penetration of existing markets, where such IP claims get play.
As Intel is currently finding out, and as AMD and Nvidia learned to varying degrees long ago, it comes down to the drivers. It can be the most powerful GPU ever created, and it won't matter one bit if it can't actually be used by any software because the drivers for it suck.
"As Intel is currently finding out, and as AMD and Nvidia learned to varying degrees long ago, it comes down to the drivers. It can be the most powerful GPU ever created, and it won't matter one bit if it can't actually be used by any software because the drivers for it suck."
Yes, but I think Intel's problems are also deeper than mere drivers.
"I've owned nothing but Nvidia cards since my voodoo 2. But I won't miss that company if it disappears in the next few years. History is littered with companies that got to the top, took their customers for granted and then got eaten by people trying harder."
I don't see Nvidia as not trying. They tend to exploit their market leadership with very high margins, though. Also, I don't see the same creativity as AMD has shown, with their features like Infinity Cache. That complacency could be more of a recent thing, because just 2 generations earlier they introduced tensor cores and hardware ray tracing.
"Strange to say in 2022, but I'd rather give my automotive dollars to BYD and if this GPU mob is serious I'll give them a go too."
To the extent you're interested in cost-competitiveness, you'd do well to look into how heavily those products are subsidized. And consider that, if those brands ever achieve market dominance, they might cease to be so inexpensive.
"Weird. And completely unnecessary. For 3060 Ti-level performance, PCIe 3.0 x16 should be perfectly fine."
For gaming it's indeed useless, but for compute I'd absolutely love a faster interface.
"Yes, but I think Intel's problems are also deeper than mere drivers."
In one of the videos they put out, Raja admitted that the team's lack of experience on dGPUs led them to make some design decisions which hurt performance. It sounded like the hardware design is holding back desktop performance, which might explain the wide variations in performance (which exist in DX12/Vulkan as well) beyond just drivers.
IMO, Intel tried to do too much, in their first gen. They introduced special units for ray tracing and AI inferencing, then had to implement & optimize driver & runtime support for these + XeSS (their DLSS-equivalent). What they missed is that if the core rasterization performance and drivers for it aren't there, nobody cares much about the other stuff. Especially not ray tracing. You could make the case that XeSS can help them squeeze more performance out of their existing rasterization hardware, so I guess I'm really arguing they shouldn't have burned hardware and software resources trying to do ray tracing, also.
"for compute I'd absolutely love a faster interface."
PCIe 5.0 x16 is nominally 64 GB/s per direction. That's close to the limit of what DDR5 can support. So, what benefit do you see from having such a fast interface in a card presumably being used in a desktop PC?
The cards are obviously more compute-oriented, given the lackluster list of supported games.
"Intel could have done that if they weren't so greedy with the 1st gen card that had a lot of issues."
Greedy? They're getting less performance per mm^2 than AMD or Nvidia. I'm not sure they can charge much less for their GPUs.
"Greedy? They're getting less performance per mm^2 than AMD or Nvidia. I'm not sure they can charge much less for their GPUs."
And that's on a new GPU architecture, too, and not "new" as in "we added tensor cores to a standard design and just scaled it up."
"Weird. And completely unnecessary. For 3060 Ti-level performance, PCIe 3.0 x16 should be perfectly fine."
The 4GB RX 5500, which gains up to 70% more performance on PCIe 4.0 x8 vs 3.0 x8, says hi. PCIe 3.0 x16 is borderline, especially at the very low end.
"PCIe 5.0 x16 is nominally 64 GB/s per direction. That's close to the limit of what DDR5 can support. So, what benefit do you see from having such a fast interface in a card presumably being used in a desktop PC?"
128 bits x 6400 MT/s = 128 GB/s, ~100 GB/s usable after overheads, dead time, refresh, stalls, etc.
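For reference, a back-of-the-envelope comparison of the two nominal figures being debated here; this is a minimal sketch that assumes PCIe 5.0's 32 GT/s per lane with 128b/130b encoding and a dual-channel, 128-bit DDR5-6400 configuration:

```python
# Rough nominal-bandwidth comparison; assumed configurations, not measurements.

# PCIe 5.0: 32 GT/s per lane, 128b/130b encoding, 16 lanes, one direction.
pcie5_x16_gbs = 32e9 * (128 / 130) * 16 / 8 / 1e9   # ~63 GB/s per direction

# Dual-channel DDR5-6400: 2 x 64-bit channels = 16 bytes per transfer.
ddr5_6400_gbs = 16 * 6400e6 / 1e9                   # ~102 GB/s peak

print(f"PCIe 5.0 x16          : ~{pcie5_x16_gbs:.0f} GB/s per direction")
print(f"DDR5-6400 dual-channel: ~{ddr5_6400_gbs:.0f} GB/s peak")
```

On those nominal numbers, a single direction of the link already sits at roughly two-thirds of what dual-channel DDR5-6400 can theoretically feed it.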
AMD has yet to disclose the host interface of its latest RDNA 3 GPUs in general, and the Radeon RX 7900 XT and Radeon RX 7900 XTX in particular.
"Bring it on. As with the car industry and the coming rush of solid Chinese EVs, this sounds like good news. 3060 Ti performance should cost about $250 retail with a useful margin. I've owned nothing but Nvidia cards since my voodoo 2. But I won't miss that company if it disappears in the next few years. History is littered with companies that got to the top, took their customers for granted and then got eaten by people trying harder. Strange to say in 2022, but I'd rather give my automotive dollars to BYD and if this GPU mob is serious I'll give them a go too."
I'd stay away from BYD products; their quality control is poor, and they have been accused multiple times of using slave labor in the manufacturing of their products.
"The development of the interconnect fast enough to allow for MCM rendering without constant stutter is impressive and only going to keep getting better."
Who? Intel? Do we have evidence of that?
"This time next year they will unveil the GPU that beats Nvidia in price, raster, ray tracing and driver performance."
I'll believe it when I see it (and welcome it, too). Do you have a better source for this than the Internet Rumor Mill?
"The 4GB RX 5500, which gains up to 70% more performance on PCIe 4.0 x8 vs 3.0 x8, says hi. PCIe 3.0 x16 is borderline, especially at the very low end."
This sort of discrepancy was also observed with the RX 6500 XT, when comparing its x4 interface at 3.0 vs. 4.0 speeds. The key thing to highlight is that it's an outlier result.
"128 bits x 6400 MT/s = 128 GB/s, ~100 GB/s usable after overheads, dead time, refresh, stalls, etc."
You're computing bidirectional bandwidth? That doesn't make as much sense to highlight, because the data flow isn't symmetrical. Most of the data is going from CPU and system RAM to graphics memory.
"This sort of discrepancy was also observed with the RX 6500 XT, when comparing its x4 interface at 3.0 vs. 4.0 speeds. The key thing to highlight is that it's an outlier result."
It isn't an outlier when nearly all low-VRAM GPUs ever made benefit disproportionately from more PCIe bandwidth. The RX 5500 and RX 6500 are simply the most spectacular examples, thanks to AMD going at least one step too far in its PCIe-cutting for a large chunk of its potential market. I omitted the RX 6500 because it is mobile garbage shoehorned into a desktop product, and we should just pretend it never existed now that the RX 6600 is almost down to $200 retail.
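To put rough numbers on why the link matters so much once VRAM overflows, here is a hypothetical sketch; the per-frame spill size and the effective link bandwidths are illustrative assumptions, not benchmark data:

```python
# Hypothetical cost of streaming spilled assets over PCIe every frame.
# All figures are illustrative assumptions, not measured results.
spill_mb_per_frame = 200                                    # assumed VRAM overflow per frame
links_gb_per_s = {"PCIe 3.0 x8": 7.9, "PCIe 4.0 x8": 15.8}  # approx. effective, one direction

for name, gb_per_s in links_gb_per_s.items():
    ms = spill_mb_per_frame / (gb_per_s * 1000) * 1000      # MB / (MB per s) -> s -> ms
    print(f"{name}: ~{ms:.0f} ms per frame just moving spilled data")
```

Under those assumptions the slower link spends roughly 25 ms per frame on transfers alone, about an entire 40 fps frame budget, while the faster link cuts that in half; that is the kind of swing the RX 5500 results reflect.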
"It isn't an outlier when nearly all low-VRAM GPUs ever made benefit disproportionately from more PCIe bandwidth."
The test showing that delta is an outlier, in the sense that it's a corner case that pushes the graphics card outside of its sweet spot. Even with PCIe 4.0, you typically wouldn't want to use those settings.
"The RX 5500 and RX 6500 are simply the most spectacular examples, thanks to AMD going at least one step too far in its PCIe-cutting for a large chunk of its potential market."
My take on it is different. I think what it shows is that the amount of graphics memory is too small, if this kind of performance discrepancy happens in games & settings that are otherwise playable on the GPU. In general, you don't want too many texture fetches from system RAM, even at PCIe 4.0 speeds, since it can cause stuttering.
"I omitted the RX 6500 because it is mobile garbage shoehorned into a desktop product, and we should just pretend it never existed now that the RX 6600 is almost down to $200 retail."
It made sense, for the market conditions at the time. If it launched in today's market, I would judge it much more harshly.
"The test showing that delta is an outlier, in the sense that it's a corner case that pushes the graphics card outside of its sweet spot. Even with PCIe 4.0, you typically wouldn't want to use those settings."
While enthusiasts may balk at playing games at anything less than 60 fps, people on a budget, or who cannot be bothered to spend $300+ on a GPU, may be more than willing to make that sacrifice if it gets them meaningfully better visual quality while still maintaining generally playable frame rates. On my GTX 1050, I take whatever details I can get as long as I hit at least 40 fps most of the time. The only reason that GPU is still remotely usable today is because it has PCIe 3.0 x16 to partially cover for its 2GB of VRAM.
"PCIe 5.0 x16 is nominally 64 GB/s per direction. That's close to the limit of what DDR5 can support. So, what benefit do you see from having such a fast interface in a card presumably being used in a desktop PC?"
One of the biggest issues with GPGPU is that it's often not worth moving a computation to a discrete GPU, even when that part of the algorithm is well suited to GPU parallelism, because the data transfer time costs more than the compute speedup saves.
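A rough way to see that trade-off is to compare transfer time plus GPU compute time against simply staying on the CPU. This is a hypothetical sketch; the data size, CPU runtime, kernel speedup, and effective link rates are all assumed for illustration:

```python
# Hypothetical break-even check for offloading work to a discrete GPU.
# Every input below is an illustrative assumption, not a measured value.
def offload_wins(data_gb, cpu_time_s, gpu_speedup, link_gb_per_s):
    """True if copying data both ways plus GPU compute beats staying on the CPU."""
    transfer_s = 2 * data_gb / link_gb_per_s        # copy inputs over, copy results back
    gpu_total_s = transfer_s + cpu_time_s / gpu_speedup
    return gpu_total_s < cpu_time_s

# Example: 8 GB working set, 1 s of CPU work, kernel runs 10x faster on the GPU.
for name, bw in [("PCIe 3.0 x16", 15.8), ("PCIe 5.0 x16", 63.0)]:
    print(name, offload_wins(data_gb=8, cpu_time_s=1.0, gpu_speedup=10, link_gb_per_s=bw))
```

With those made-up numbers, the copies alone eat the whole CPU-side runtime on the slower link and the offload loses; the faster link flips the answer, which is the scenario where a PCIe 5.0 x16 interface would actually pay off for compute.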