News: First China-Designed Gaming GPU Besieged by Bugs

bit_user

Titan
Ambassador
I still think they're doing very well, for such a young company. Only founded in 2020, right? Graphics are the most complex APIs I've ever seen. The effort to stand up a new product in this space is pretty monumental.

This generation of hardware should probably be seen as a development vehicle, in order for them to flesh out and tune their drivers. As long as their investors are willing to keep backing them, I think they'll eventually become competitive with GPUs made on similar manufacturing tech.

BTW, it was previously mentioned that they support CUDA? I'd be very curious to know how their performance is looking on some common compute benchmarks. That might actually be where most of their focus has been.
 
  • Like
Reactions: salgado18

Paul Basso

Distinguished
Jan 1, 2017
12
2
18,515
No less than impressive for a new company in the market. I hope we can have another GPU manufacturer well established soon; GPU prices are crazy right now.
 

samopa

Distinguished
Feb 12, 2015
205
56
18,660
I don't get it. When Intel released Arc Alchemist, almost everyone bashed it for underperforming, but with this S80 everyone (at least the first two commenters in this thread) is praising it.

I'm not American or Chinese, and I don't have any prejudice or political favor toward the USA or China.
For me, Intel and MTT are both newbies in DirectX 9 (or above) capable discrete GPUs, and both should receive equal treatment.
 
I'm not American or Chinese, and I don't have any prejudice or political favor toward the USA or China.
For me, Intel and MTT are both newbies in DirectX 9 (or above) capable discrete GPUs, and both should receive equal treatment.
Intel has had graphics on their CPUs for a long time. I also assume Intel has 10x the R&D budget to bring a dGPU to market. So no, they shouldn't be treated the same.
 

samopa

Distinguished
Feb 12, 2015
205
56
18,660
Intel has had graphics on their CPUs for a long time. I also assume Intel has 10x the R&D budget to bring a dGPU to market. So no, they shouldn't be treated the same.

They are integrated GPUs, not discrete GPUs; totally different animals. If anything, you should praise Intel for their willingness to spend much more money on the unknown journey of making a discrete GPU.
It's a lot easier to convince others to invest less money than to convince them to invest much more into the unknown.
 
  • Like
Reactions: gg83

bit_user

Titan
Ambassador
I don't get it. When Intel released Arc Alchemist, almost everyone bashed it for underperforming, but with this S80 everyone (at least the first two commenters in this thread) is praising it.
Because Intel has been working on these dGPUs since at least 2018, if not before. Even the current ARC GPUs aren't a clean-sheet design - they build heavily on Intel's integrated GPUs, which they've now been making for 15-20 years. In particular, their software stack heavily leverages prior work, and their team is very experienced in developing GPU drivers and tools.

For me, Intel and MTT are both newbies in DirectX 9 (or above) capable discrete GPUs, and both should receive equal treatment.
No, the comparison almost couldn't be more stark, in terms of the organizations and their respective starting points.

They are integrated GPUs, not discrete GPUs; totally different animals
Not nearly as different as you think, especially from a software point of view.
 

hannibal

Distinguished
Anyone welcoming this as competition to the GPU market has a screw loose. Does anyone want to willingly install drivers on their personal computer that could have a possible backdoor in the code for the CCP? I have tremendous respect for the Chinese people, but their government? Not so much. Even if they fix their drivers, I'm going to take a hard pass.

Most people here already use phones made in China…
Nothing wrong with being careful, but the truth is that everything leaks already.
 

Deleted member 1353997

Guest
They are integrated GPUs, not discrete GPUs; totally different animals
What makes them so different to render over a decade of experience in iGPUs meaningless?

Does an iGPU not have cache for vertices and indices? Does an iGPU not handle the same textures a dGPU does? Does it apply them differently, perhaps? Does an iGPU not conform to the same DX9/10/11/12/OpenGL/Vulkan/Metal APIs the same way a dGPU does? Does an iGPU not perform the very same matrix multiplications that a dGPU would? Do iGPUs not run the same shaders as dGPUs? Is an iGPU not supposed to support swizzling, like a dGPU would? Does an iGPU not benefit from culling? Does an iGPU not draw lines and triangles the same way a dGPU does?

What makes you so sure that Intel's experience in iGPUs couldn't possibly have benefited the development of their dGPU in any way whatsoever?
 
  • Like
Reactions: bit_user

sivaseemakurthi

Distinguished
Apr 18, 2012
52
20
18,535
I still think they're doing very well, for such a young company. Only founded in 2020, right? Graphics are the most complex APIs I've ever seen. The effort to stand up a new product in this space is pretty monumental.

This generation of hardware should probably be seen as a development vehicle, in order for them to flesh out and tune their drivers. As long as their investors are willing to keep backing them, I think they'll eventually become competitive with GPUs made on similar manufacturing tech.

BTW, it was previously mentioned that they support CUDA? I'd be very curious to know how their performance is looking on some common compute benchmarks. That might actually be where most of their focus has been.
They seem to be based on Imagination graphics; they must have licensed the IP. Nobody can develop a GPU from scratch in that short a span.
 

bit_user

Titan
Ambassador
They seem to be based on Imagination graphics,
I know some Chinese GPUs are, but I've yet to see any indication that Moore Threads' is. Can you cite any sources or specific reasons to think so?

I actually think the current state of their drivers & overall performance is the best argument that the hardware is original. I know Imagination doesn't have the best reputation for driver quality, but I'd expect things to be working a lot better than what we've heard so far, not to mention delivering higher benchmark scores.

they must have licensed the IP. Nobody can develop a GPU from scratch in that short a span.
I'm not sure about that. AMD and Nvidia have had design centers in China for about 15 years now, so there's enough in-country experience with hardware design, and probably even some GPU software stack development.

That said, it's possible they've used parts of other IP floating around, like that of S3 or Vivante.
 

bit_user

Titan
Ambassador
Doesn't MTT have the entire Chinese government funding them?
Not sure, but here's what Tom's previously published about their funding:

"The company did enjoy ample funding for its endeavors, however. The startup has already gone through three funding rounds in a single year, with prominent investors such as Sequoia Capital China, ByteDance (of TikTok fame) and Tencent all reportedly contributing at one point or another. The latest Series A funding round brought the company some $313 million "​
(dated Nov 30, 2021)​

If anyone has more up-to-date info, please share.
 

samopa

Distinguished
Feb 12, 2015
205
56
18,660
What makes them so different to render over a decade of experience in iGPUs meaningless?

Does an iGPU not have cache for vertices and indices? Does an iGPU not handle the same textures a dGPU does? Does it apply them differently, perhaps? Does an iGPU not conform to the same DX9/10/11/12/OpenGL/Vulkan/Metal APIs the same way a dGPU does? Does an iGPU not perform the very same matrix multiplications that a dGPU would? Do iGPUs not run the same shaders as dGPUs? Is an iGPU not supposed to support swizzling, like a dGPU would? Does an iGPU not benefit from culling? Does an iGPU not draw lines and triangles the same way a dGPU does?

What makes you so sure that Intel's experience in iGPUs couldn't possibly have benefited the development of their dGPU in any way whatsoever?

Well, first, the driver update from Intel significantly improves (by almost 50%) the DirectX 9 performance of Arc (a discrete GPU) but brings no (or insignificant) improvement to the iGPU. Can't you see that they're so different?

Second, an iGPU connects to the CPU via a "special line," which makes it possible to utilize system memory with little penalty, but a discrete GPU has to communicate via the PCIe bus, which incurs a heavy access penalty; that's why most dGPUs have local memory to minimize this effect. This results in a totally different driver to develop, with totally different challenges.
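To put rough numbers on that penalty, here's a back-of-envelope sketch in Python. The bandwidth figures (PCIe 4.0 x16 vs. a 256-bit GDDR6 bus) are illustrative assumptions, not measurements of any particular card:

```python
# Rough model of why dGPUs keep assets in local VRAM rather than
# streaming them over PCIe on demand. Bandwidths are assumed figures:
PCIE4_X16_GBPS = 32   # ~32 GB/s usable, PCIe 4.0 x16
GDDR6_GBPS = 448      # e.g. a 256-bit GDDR6 memory bus

def transfer_ms(size_mb, gbps):
    """Milliseconds to move size_mb megabytes at gbps gigabytes/second."""
    return size_mb / 1024 / gbps * 1000

texture_mb = 64  # one large texture
over_pcie = transfer_ms(texture_mb, PCIE4_X16_GBPS)
from_vram = transfer_ms(texture_mb, GDDR6_GBPS)
print(f"64 MB over PCIe : {over_pcie:.2f} ms")
print(f"64 MB from VRAM : {from_vram:.2f} ms")
```

Under these assumptions the PCIe path is about 14x slower, which is exactly why a dGPU driver has to manage residency of assets in VRAM instead of leaning on system memory the way an iGPU driver can.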
 
I SEE THIS...

When I find myself in times of trouble
Mother Mary comes to me speaking words of wisdom
Larrabee
And in my hour of darkness she is standing right in front of me
Speaking words of wisdom
Larrabee
Larrabee
Larrabee
Larrabee
Larrabee
Whisper words of wisdom
Larrabee
 

sivaseemakurthi

Distinguished
Apr 18, 2012
52
20
18,535
I know some Chinese GPUs are, but I've yet to see any indication that Moore Threads' is. Can you cite any sources or specific reasons to think so?

I actually think the current state of their drivers & overall performance is the best argument that the hardware is original. I know Imagination doesn't have the best reputation for driver quality, but I'd expect things to be working a lot better than what we've heard so far, not to mention delivering higher benchmark scores.


I'm not sure about that. AMD and Nvidia have had design centers in China for about 15 years now, so there's enough in-country experience with hardware design, and probably even some GPU software stack development.

That said, it's possible they've used parts of other IP floating around, like that of S3 or Vivante.
"The MTT S60, which is based on Imagination Technologies’ GPU IP, has 2,048 MUSA cores built on "
It's not exactly this GPU, but some articles mentioned the drivers have Imagination components.
 
  • Like
Reactions: bit_user

Deleted member 1353997

Guest
Well, first, the driver update from Intel significantly improves (by almost 50%) the DirectX 9 performance of Arc (a discrete GPU) but brings no (or insignificant) improvement to the iGPU. Can't you see that they're so different?
Nvidia drivers don't affect AMD GPUs either (and vice versa), yet they are both dGPUs. Are you trying to say that Nvidia's and AMD's experience in making dGPUs is worthless when it comes to making dGPUs?

I mean, obviously different products are different, but experience can be applied to many things, even unrelated ones. Would you believe me if I told you that one of the algorithms used in modern AI is based on experience with Darwin's evolutionary theory? Look up the term "genetic algorithm".
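For anyone curious, a genetic algorithm really is just selection, crossover, and mutation in a loop. Here's a minimal toy sketch (my own illustration, not code from any production AI system) that evolves 8-bit strings toward the all-ones target:

```python
# Toy genetic algorithm: selection, crossover, mutation.
import random

random.seed(0)
BITS = 8

def fitness(genome):
    return sum(genome)  # more ones = fitter

def evolve(pop_size=20, generations=50):
    pop = [[random.randint(0, 1) for _ in range(BITS)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]           # selection: keep fittest half
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, BITS)        # single-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.1:              # occasional mutation
                i = random.randrange(BITS)
                child[i] ^= 1
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
print(best, fitness(best))
```

The "experience" encoded here is Darwin's insight, not anything GPU-specific, which is the point: lessons transfer across domains that look superficially unrelated.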

With that said, iGPUs are very obviously related to dGPUs. Your claim that Intel's experience in iGPUs was meaningless for designing the Arc remains to be proven, and is quite frankly completely ludicrous.

Second, an iGPU connects to the CPU via a "special line," which makes it possible to utilize system memory with little penalty, but a discrete GPU has to communicate via the PCIe bus, which incurs a heavy access penalty; that's why most dGPUs have local memory to minimize this effect. This results in a totally different driver to develop, with totally different challenges.
Let's assume for a second that you are right (which still remains to be proven): you've only shown that Intel needs to develop new drivers. This means you're aware Intel could just put their existing iGPU design on a separate board with its own dedicated VRAM (and other assorted components), as long as they develop new drivers for it. And Intel is no stranger to developing drivers. They've got drivers for a wide range of products, including network adapters, storage, and many more.

Moore Threads doesn't even have an existing iGPU it could just throw onto a board and "only" develop new drivers for. It probably doesn't even have experience in developing drivers in the first place. And you seriously expect people to treat Intel as if it were on the same level? Why?
 
  • Like
Reactions: bit_user and Upacs

jkflipflop98

Distinguished
In reality Moore Threads has 100x the R&D budget of Intel or Nvidia. They have the backing of the Chinese government and the unlimited funding that comes along with being a priority thereof.
 

samopa

Distinguished
Feb 12, 2015
205
56
18,660
I mean, obviously different products are different, but experience can be applied to many things, even unrelated ones. Would you believe me if I told you that one of the algorithms used in modern AI is based on experience with Darwin's evolutionary theory? Look up the term "genetic algorithm".

My point exactly. Do you believe that MTT does not have any experience whatsoever that can help them build a dGPU?
That's why I said to leave the prejudice aside and let them compete in the field they chose; every player who jumps into the arena knows what they're capable of, and there's no need for someone outside to boo or insult any player.
 

tamalero

Distinguished
Oct 25, 2006
1,231
247
19,670
I don't get it. When Intel released Arc Alchemist, almost everyone bashed it for underperforming, but with this S80 everyone (at least the first two commenters in this thread) is praising it.

I'm not American or Chinese, and I don't have any prejudice or political favor toward the USA or China.
For me, Intel and MTT are both newbies in DirectX 9 (or above) capable discrete GPUs, and both should receive equal treatment.
Because Intel is a behemoth: a leader in both resources and chipmaking, with many prior attempts at the graphics market and years of experience in integrated graphics.
They poached experienced engineers from pretty much every company.
And while the graphics hardware was decent, their drivers were the issue.
Not to mention their ridiculous hype, which ended up vanishing after delay after delay and failed projects such as Larrabee.

This Chinese chip is something brand new, and I wonder whether it copies IP or does things its own way.
If the latter, it's even more impressive for a starting company with nowhere near as much power, money, and resources as Intel.
 
  • Like
Reactions: bit_user

bit_user

Titan
Ambassador
the driver update from Intel significantly improves (by almost 50%) the DirectX 9 performance of Arc (a discrete GPU) but brings no (or insignificant) improvement to the iGPU. Can't you see that they're so different?
There are lots of possible explanations for that:
  • codepaths for iGPU and dGPU have diverged, and their optimizations only apply to the latter.
  • optimizations apply mostly to scheduling and resource utilization, which is a harder problem as you scale and therefore less of an issue for iGPUs.
  • optimizations address memory latency differences between iGPU and dGPU
  • optimizations address dGPU memory and asset management, which is a rather different problem on a dGPU than iGPU.
  • optimizations target overheads that are simply less noticeable with a weaker GPU
  • most likely, some combination of the above.
I've been optimizing code for a long time, and it's not uncommon to find a huge performance bottleneck in one case that doesn't affect other cases which aren't much different. That says nothing about how much existing code & infrastructure they used to bootstrap the software effort for the dGPUs, nor even how much of it is still shared.

Second, an iGPU connects to the CPU via a "special line," which makes it possible to utilize system memory with little penalty, but a discrete GPU has to communicate via the PCIe bus, which incurs a heavy access penalty; that's why most dGPUs have local memory to minimize this effect. This results in a totally different driver to develop, with totally different challenges.
If you've never done medium or large scale software development, I can understand how you might not see ways to abstract and modularize different parts of the software stack to accommodate such variations. It's absolutely possible, though. Just from a software maintenance perspective, it's a lot easier to add some special code paths or modules than to have a completely separate stack that's rewritten from the ground up.
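For illustration, here's roughly what that kind of abstraction looks like. All the names here (MemoryBackend, UnifiedMemory, PcieVram) are hypothetical, invented just to show how one shared stack can swap only the memory path:

```python
# Hypothetical sketch: one driver stack, two memory backends.
# Only the allocation/transfer policy differs between iGPU and dGPU;
# everything else in the stack is shared.
from abc import ABC, abstractmethod

class MemoryBackend(ABC):
    @abstractmethod
    def upload(self, asset: bytes) -> str: ...

class UnifiedMemory(MemoryBackend):
    """iGPU path: map the asset in shared system RAM."""
    def upload(self, asset):
        return f"mapped {len(asset)} bytes in system RAM"

class PcieVram(MemoryBackend):
    """dGPU path: copy the asset over PCIe into local VRAM."""
    def upload(self, asset):
        return f"copied {len(asset)} bytes over PCIe into VRAM"

def draw(backend: MemoryBackend, texture: bytes):
    # API validation, shader compilation, scheduling, etc. would all
    # live here, shared between both products; only the backend varies.
    return backend.upload(texture)

print(draw(UnifiedMemory(), b"\x00" * 16))
print(draw(PcieVram(), b"\x00" * 16))
```

The point is that "totally different memory architecture" shrinks to one swappable module, while the bulk of the code, and all the hard-won experience behind it, carries over.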

But you're also missing quite a lot else. Let's say Intel did exactly what you're saying and rewrote their graphics drivers and tools completely from scratch. They still have a team who's done this before and knows everything that needs to be done and how to do it. You don't think that counts for a lot? Also, they have lots of quality assurance testing assets they should be able to reuse, because both products do essentially the same things and implement the same APIs.
 

bit_user

Titan
Ambassador
I SEE THIS...

When I find myself in times of trouble
Mother Mary comes to me speaking words of wisdom
Larrabee
And in my hour of darkness she is standing right in front of me
Speaking words of wisdom
Larrabee
Larrabee
Larrabee
Larrabee
Larrabee
Whisper words of wisdom
Larrabee
Larrabee was designed by a completely separate team than their iGPU folks. There was an interesting battle between them that I once heard about, from an ex-Intel employee.

It was basically the CPU folks vs. the GPU folks. The former had their chance with Larrabee, and now that Intel has seen the error of that decision, the latter are getting their chance with Xe/ARC. Except, the GPU folks have a lot of catching up to do.
 

bit_user

Titan
Ambassador
"The MTT S60, which is based on Imagination Technologies’ GPU IP, has 2,048 MUSA cores built on "
It's not exactly this GPU, but some articles mentioned the drivers have Imagination components.
Wow, okay. Thanks for citing!

Reading that article, it's really rattling off a laundry list of pretty much everything you could dream of doing on a GPU. I know they're just reporting what was said at the launch event. Sounds exciting, but I wonder just how much of it was real, especially considering that was barely 1.5 years after the company's founding.

Anyway, we should point out that the S60 was claimed to feature 4 different kinds of functional blocks, so there's an open question just how much of their IP is from Imagination. Perhaps just the graphics rendering portion? Maybe also display controllers, interconnect, and even codec?

It's also worth considering that the S80 could have swapped out some of that IP for their own. I'm not saying it's likely, but possible.
 

bit_user

Titan
Ambassador
Remember back in the 90s when new GPU companies were a dime a dozen? These things are so complex now, we will never escape a market duopoly.
Yes, maybe.

It's not as if there aren't other GPUs out there. Let's go through the list:
  • Imagination's IP is already powering a few Chinese dGPUs.
  • I haven't heard much about Qualcomm's Adreno graphics on Windows. They've already been used in HoloLens and a few generations of ARM-based Windows notebooks. It would be an odd time for them to jump into the dGPU market now, but at least they should have their Windows drivers pretty well sorted.
  • ARM's Mali is another possibility. We'll have to see which way Mediatek goes (i.e. Mali or Nvidia), with its Windows-oriented SoC.
  • Think Silicon rounds out the list of Western GPU IP providers. However, they seem focused on the low-power and ultra low-power markets and I don't know if they have Windows drivers for any of their IP, to date.
  • Then, there's definitely a few indigenous and legacy GPU IPs, in China. We just have to wait and see what comes of those efforts.
  • In terms of legacy IP, there's S3 and Vivante.

One of the more recent casualties was Samsung's GPU, which was canceled when they decided to license RDNA from AMD, in 2019.
 