News: Chinese GPU Targets GeForce GTX 1650 Performance


Deleted member 14196

Guest
And before all the political remarks start: I am glad that there will at last be some competition as these get better and better, because I might want to buy a dedicated graphics card again someday. Currently, they are way too expensive for me to purchase.

I just hope they’re more affordable
 

InvalidError

Titan
Moderator
2.5 FP32 TFLOPS slips right between the 1050 Ti's 2.1 and the 1650's 2.9.

Kind of depressing how something much slower than the 1650 Super (4.4 FP32 TFLOPS) still sounds remotely worth being excited about today.

Is the GTX 1650 around the performance of the Radeon 780M iGPU in Phoenix?
The 780M is 8.9 FP32 TFLOPS and completely destroys the 1650 in on-paper compute power. In terms of actual performance, though, the few benchmarks available suggest it is barely even. Could be teething issues or acute memory bandwidth starvation.
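For anyone who wants to sanity-check those numbers, peak FP32 throughput is just shaders × clock × FLOPs issued per clock. A minimal sketch, using shader counts and boost clocks I pulled from public spec sheets (not from the article):

```python
def peak_tflops(shaders, boost_ghz, flops_per_clock=2):
    """Peak FP32 TFLOPS = shaders * boost clock (GHz) * FLOPs per clock.
    Pascal/Turing issue one FMA (2 FLOPs) per shader per clock;
    RDNA3 can dual-issue FP32, hence 4 for the 780M."""
    return shaders * boost_ghz * flops_per_clock / 1000

print(peak_tflops(768, 1.392))      # GTX 1050 Ti    -> ~2.1
print(peak_tflops(896, 1.665))      # GTX 1650       -> ~3.0 (quoted above as 2.9)
print(peak_tflops(1280, 1.725))     # GTX 1650 Super -> ~4.4
print(peak_tflops(768, 2.9, 4))     # Radeon 780M    -> ~8.9
```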
 

PlaneInTheSky

Commendable
BANNED
WOW. I looked up the price of the 1650, a 4-year-old card that launched at $150.

A 4-year-old GTX 1650 now costs $200... wtf.

I don't care who does it, China or Timbuktu, but the faster these rotten companies like Nvidia and AMD get competition, the better.
 

healthy Pro-teen

Prominent
WOW. I looked up the price of the 1650, a 4-year-old card that launched at $150.

A 4-year-old GTX 1650 now costs $200... wtf.

I don't care who does it, China or Timbuktu, but the faster these rotten companies like Nvidia and AMD get competition, the better.
At least AMD's offerings are better; depending on the region, you can get an RX 6600 at around this price that is basically an RTX 2070.
 

InvalidError

Titan
Moderator
A 4-year-old GTX 1650 now costs $200... wtf.
Nvidia hasn't made any useful GPU aimed at the sub-$200 crowd in years; you are paying for whatever crumbs are left.

It sucks but also makes sense: there isn't much profit to be made selling $150 GPUs that are barely ahead of modern IGPs, so nobody wants to be there. At $200, manufacturers can afford the extra 50 mm² of die area needed to double entry-level performance while still making decent (edit: as in much better than they would at $150) margins, which is where I believe the formerly-$150 entry level will settle once the GPU greedflation bubble pops.
 

King_V

Illustrious
Ambassador
WOW. I looked up the price of the 1650, a 4-year-old card that launched at $150.

A 4-year-old GTX 1650 now costs $200... wtf.

I don't care who does it, China or Timbuktu, but the faster these rotten companies like Nvidia and AMD get competition, the better.
Nvidia and AMD, you say?

You mean like how, for a little over $200, you can get an RX 6600 that trades blows with a 1080 Ti?

Let's even say that the Sapphire Pulse RX 6600, at $250, is our reference point. In 2016 dollars, that's about $200.
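The deflation math is trivial; a sketch assuming roughly 25% cumulative US CPI inflation between 2016 and 2023 (my ballpark, not an official figure):

```python
# Deflate a 2023 street price into 2016 dollars.
cumulative_inflation = 1.25   # assumed 2016 -> 2023 CPI factor
price_2023 = 250              # Sapphire Pulse RX 6600
print(f"${price_2023 / cumulative_inflation:.0f} in 2016 dollars")  # $200
```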
 

King_V

Illustrious
Ambassador
If I wanted to spend $250 on a GPU, I'd be really tempted by the A750. Not thrilled with its power-efficiency and not particularly impressed by drivers yet either, though they appear to be improving faster than I expected.
I'll admit that I find the Intel intriguing, mostly because it's the new kid on the block, and I kind of like the idea in general. But, yeah, I'll agree on both points: the power efficiency is, well, awful, but Intel looks like it's making the drivers a very serious priority, and I'm impressed.

Then again, I probably wouldn't pay $250 for a 6600; I'd go for a 6600 XT or 6650 XT around that price (unless the coolers on those particular models were awful/loud).
 

bit_user

Polypheme
Ambassador
The graphics chip targets a performance level to that of Nvidia's GeForce GTX 1650 while promising a higher energy efficiency.
Did they actually say this? Or, is this simply the author's estimate, based on specs comparison?

After the shockingly poor performance of Moore Threads' S80, I think we need to wait for actual benchmarks.



As we've learned, MTT was starting with Imagination's established IP, toolchain, and (presumably) reference drivers. So, if it's true that Zhihui's IDM 929 uses an entirely new ISA, that creates even greater reason to be circumspect about any performance claims or projections.

"the GPU has a pixel fill rate of 19.2 GPixels/s, a texture fill rate of 76.8 GTexel/s and compute performance of 2.5 TFLOPS "​
Haven't seen these types of specs in a while; what does the 4090 do?
  • 443.5 GPixel/s
  • 1,290 GTexel/s
  • 82.58 TFLOPS
Source: https://www.techpowerup.com/gpu-specs/geforce-rtx-4090.c3889

So, the matchup would be pretty absurd, even if their drivers were on par with Nvidia's.
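For scale, here are the ratios (IDM 929 figures from the article, 4090 figures from the TechPowerUp page above):

```python
# (IDM 929, RTX 4090) throughput pairs and the resulting gap.
specs = {
    "pixel fill (GPixel/s)":   (19.2, 443.5),
    "texture fill (GTexel/s)": (76.8, 1290.0),
    "FP32 compute (TFLOPS)":   (2.5, 82.58),
}
for metric, (idm929, rtx4090) in specs.items():
    print(f"{metric}: {rtx4090 / idm929:.0f}x")   # ~23x, ~17x, ~33x
```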
 

bit_user

Polypheme
Ambassador
The 780M is 8.9 FP32 TFLOPS and completely destroys the 1650 in on-paper compute power. In terms of actual performance, though, the few benchmarks available suggest it is barely even. Could be teething issues or acute memory bandwidth starvation.
Even with LPDDR5-5600, a desktop GTX 1050 Ti would still have about 25% more memory bandwidth than the 780M iGPU. So, bandwidth could indeed play a large factor in holding back the 780M.
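The arithmetic behind that ~25%, assuming a 128-bit bus on both sides (standard for Phoenix laptop platforms and for the 1050 Ti):

```python
def peak_bw_gbs(transfer_mts, bus_bits):
    """Peak DRAM bandwidth (GB/s) = transfer rate * bus width in bytes."""
    return transfer_mts * (bus_bits / 8) / 1000

igp  = peak_bw_gbs(5600, 128)   # LPDDR5-5600: ~89.6 GB/s, shared with the CPU
dgpu = peak_bw_gbs(7008, 128)   # GTX 1050 Ti, 7 Gbps GDDR5: ~112 GB/s, dedicated
print(f"1050 Ti advantage: {dgpu / igp - 1:.0%}")   # ~25%
```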

I want to see AMD take a dGPU-like chiplet with Infinity Cache and attach it to the I/O die of a chiplet-based CPU. Infinity Cache should be very effective at mitigating the bottlenecks of a conventional 128-bit LP/DDR laptop/desktop memory interface. Until they go with wider, in-package memory (a la Apple M-series Pro/Max), Infinity Cache will be the key to scaling iGPU performance. Even once they do, it still enables more performance to be extracted from whatever memory bandwidth you've got.
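A rough way to see why a big on-die cache helps: effective bandwidth is a hit-rate-weighted blend of cache and DRAM bandwidth. The hit rate and cache bandwidth below are illustrative assumptions, not AMD figures:

```python
def effective_bw(hit_rate, cache_bw, dram_bw):
    """Blend of on-die cache and DRAM bandwidth, weighted by hit rate."""
    return hit_rate * cache_bw + (1 - hit_rate) * dram_bw

dram_bw  = 89.6     # GB/s, 128-bit LPDDR5-5600 shared with the CPU
cache_bw = 1000.0   # GB/s, assumed on-die Infinity Cache bandwidth
for hit in (0.3, 0.5, 0.7):
    print(f"{hit:.0%} hit rate -> {effective_bw(hit, cache_bw, dram_bw):.0f} GB/s")
```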
 

InvalidError

Titan
Moderator
I want to see AMD take a dGPU-like chiplet with Infinity Cache and attach it to the I/O die of a chiplet-based CPU. Infinity Cache should be very effective at mitigating the bottlenecks of a conventional 128-bit LP/DDR laptop/desktop memory interface.
Why cache? MLID estimates that the V-Cache chip costs ~$25 for 64 MB. A 2-3 GB single HBM-like DRAM die would be far more beneficial; you'd just need to find enough uses to amortize the cost of creating a "1-high" design. 2 GB may not look like much, but it should be enough to take care of many of the most bandwidth-intensive items, such as the Z-buffers.
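Rough sizing backs that up; a sketch with my own illustrative assumptions (4K targets at 4 bytes per pixel):

```python
# One 32-bit 4K render target or Z-buffer.
width, height, bytes_per_pixel = 3840, 2160, 4
target_mib = width * height * bytes_per_pixel / 2**20
print(f"One 4K buffer: {target_mib:.0f} MiB")            # ~32 MiB
print(f"Fit in 2 GB:  {2048 / target_mib:.0f} buffers")  # ~65
```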
 

bit_user

Polypheme
Ambassador
Why cache?
Infinity Cache is what I consider largely responsible for the competitiveness of RDNA2 against the RTX 3000 series, in spite of relatively lower memory bandwidth, and even with Infinity Cache only having about 2x as much.

MLID estimates that the V-Cache chip costs ~$25 for 64 MB.
You don't even need that much. The RX 6500 XT has only 16 MB of Infinity Cache. That's a 16 CU GPU, which is 33% bigger than the 12 CU 780M. Making the iGPU on 6N and foregoing the die-stacking should help with costs.

In fact, what I was really kind of expecting to see was a straight Navi 24 die showing up in the package of a Ryzen 6000-series processor.

A 2-3 GB single HBM-like DRAM die would be far more beneficial,
For a more premium solution, HBM would be the way to go. Even if it's just a dedicated slice for the iGPU.