News Chinese GPU Dev Starts Global Sales of $245 RTX 3060 Ti Rival


bit_user

Polypheme
Ambassador
One of the biggest issues with GPGPU is that it's often not worth moving part of a computation to a discrete GPU, even if that part of the algorithm is well suited to GPU parallelism, because the data-transfer time would exceed the speedup from the computation.

So, faster interfaces would allow more GPU acceleration in a wide variety of programs.
Well said. I do think offloading in-line computations (where this tends to be an issue) is always going to be a challenge, because there's always going to be some communication latency. So, you're generally not going to see really fine-grained compute being offloaded.

I think an interesting metric is to look at bandwidth per FLOPS. For computations that can be asynchronously dispatched, this can reveal potential bottlenecks. Below, I'm just using nominal and peak numbers:

GPU             fp32 TFLOPS   PCIe bandwidth (uni-dir), GB/s   fp32 kFLO per PCIe byte (uni-dir)
RTX 3090        35.6          32                               1.11
RTX 4090        82.6          32                               2.58
RX 6950 XT      47.3          32                               1.48
RX 7900 XTX     61.4          32                               1.92
MTT S80         14.4          64                               0.23
So, the MTT S80 has about 11x the bandwidth per fp32 FLOPS of the RTX 4090. Still, if what you're doing is computationally cheap, it'll be bandwidth-limited below about 230 FLO per byte transferred (or 920 FLO per fp32 value). Then again, if your compute runs close to memcpy speed, there's not much reason to dispatch it to a GPU in the first place.
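
For anyone who wants to sanity-check those ratios, here's a minimal Python sketch of the same arithmetic, using only the nominal/peak numbers from the table above (nothing here is measured):

# Back-of-envelope check of the "bandwidth per FLOPS" figures in the table.
specs = {
    # name: (peak fp32 TFLOPS, unidirectional PCIe bandwidth in GB/s)
    "RTX 3090":    (35.6, 32),
    "RTX 4090":    (82.6, 32),
    "RX 6950 XT":  (47.3, 32),
    "RX 7900 XTX": (61.4, 32),
    "MTT S80":     (14.4, 64),   # 64 GB/s corresponds to a PCIe 5.0 x16 link
}

for name, (tflops, gbps) in specs.items():
    # FLO available per byte moved over PCIe: (FLOP/s) / (bytes/s)
    flo_per_byte = (tflops * 1e12) / (gbps * 1e9)
    print(f"{name:12s} {flo_per_byte / 1e3:5.2f} kFLO per PCIe byte")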
 

bit_user

Polypheme
Ambassador
The 4GB RX 5500, which gains up to 70% more performance on 4.0 x8 vs. 3.0 x8, says hi. 3.0 x16 is borderline, especially at the very low end.
There's a certain irony in this particular exchange. I'm reminded of when Haswell-EP was rumored to have PCIe 4.0 and we were on opposite sides of this debate.

Because it's a recurring point of interest, I've gone back to the TechPowerUp 4090 PCIe Scaling data and picked out the outliers showing the greatest benefit from PCIe 3.0 -> 4.0.

Game                        PCIe 3.0 -> 4.0 improvement @ 1080p
Assassin's Creed Valhalla   6.5%
Divinity Original Sin 2     5.2%
Forza Horizon 5             3.8%
God of War                  5.7%
Hitman 3                    8.5%
Metro Exodus                11.7%
Resident Evil Village       4.3%
The Witcher 3: Wild Hunt    3.5%

That's 8 out of 25 titles with above-average sensitivity to the PCIe speed delta. For gamers looking to get the max FPS out of their setup, I think we can certainly say PCIe 4.0 x16 is warranted for such a high-end GPU, even if the gains aren't earth-shattering.

And while the averages show 4K is less sensitive to PCIe bandwidth, there are certainly some outliers there as well. I didn't apply the same level of rigor, but just eyeballing the individual games, I can see that Metro Exodus is affected even more, with a whopping 14.6% discrepancy. It's definitely not the only significant outlier at 4K, either.
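
For what it's worth, the outlier-picking above is just simple arithmetic on the per-title averages. Here's a rough sketch of that selection in Python, with made-up fps numbers standing in for the actual TechPowerUp results:

# Compute the PCIe 3.0 -> 4.0 gain per title from average-fps pairs,
# then flag anything above the mean gain. The fps values are placeholders.
from statistics import mean

# title: (avg fps @ PCIe 3.0 x16, avg fps @ PCIe 4.0 x16), hypothetical values
fps_1080p = {
    "Game A": (140.0, 149.1),
    "Game B": (200.0, 204.0),
    "Game C": (95.0, 106.1),
}

gains = {t: (f40 / f30 - 1.0) * 100.0 for t, (f30, f40) in fps_1080p.items()}
avg_gain = mean(gains.values())

for title, gain in sorted(gains.items(), key=lambda kv: -kv[1]):
    flag = "  <-- above-average sensitivity" if gain > avg_gain else ""
    print(f"{title:8s} {gain:5.1f}%{flag}")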
 

InvalidError

Titan
Moderator
Because it's a recurring point of interest, I've gone back to the TechPowerUp 4090 PCIe scaling data and picked out the outliers showing the greatest benefit from PCIe 3.0 -> 4.0.
The high end shows almost negligible gains most of the time because the amount of data that needs to go over the bus is driven entirely by frame rate, with most of the shader, geometry, and texture data remaining on the GPU the whole time. On low-VRAM GPUs, bits of just about everything have to go over the bus from system memory most of the time, hence those GPUs' much greater sensitivity to bus bandwidth when pushing the limits of detail vs. fps for the available VRAM.

If AMD makes a true successor to the RX 5500, or a proper desktop successor to the RX 6500 in the form of an RX 7500, a 4GB version would likely benefit quite a bit from having 4.0 x16. Though I'd really hope any such successor would come with 6GB of VRAM on a 96-bit bus, in which case 4.0 x8 may still be good enough.
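
To put some rough numbers on the VRAM spill-over point: if a low-VRAM card has to stream part of its working set from system RAM every frame, the transfer time alone eats a meaningful slice of the frame budget. A quick Python sketch, where the per-frame spill size is an arbitrary assumption for illustration, not a measurement of any particular card:

links_gbps = {            # usable unidirectional bandwidth, GB/s (approx.)
    "PCIe 3.0 x8":  8,
    "PCIe 3.0 x16": 16,
    "PCIe 4.0 x8":  16,
    "PCIe 4.0 x16": 32,
}

spill_per_frame_mb = 100          # assumed data streamed from system RAM per frame
target_fps = 60
frame_budget_ms = 1000 / target_fps

for link, gbps in links_gbps.items():
    transfer_ms = spill_per_frame_mb / 1024 / gbps * 1000
    share = transfer_ms / frame_budget_ms * 100
    print(f"{link:12s} {transfer_ms:5.2f} ms per frame ({share:4.1f}% of a {target_fps} fps budget)")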
 

blacknemesist

Distinguished
Oct 18, 2012
Greedy? They're getting less performance per mm^2 than AMD or Nvidia. I'm not sure they can charge much less for their GPUs.
Exactly. Performance per dollar is extremely low, yet the price is on par with AMD's and Nvidia's in this segment.

As far as this Chinese-designed GPU goes... not a chance I'd put homegrown Chinese hardware in my system. Sorry, but I just can't bring myself to trust a country run by a psycho that's committing genocide as I type this... They can't be trusted (the CCP, not the people), and until they get overthrown and the Taiwanese take their government back, this time on the mainland... nope.

Please don't say that everything is made in China. There is a massive difference between something merely assembled there and something designed and made by a company that has to do whatever the CCP says, with no oversight.

Same goes for that new VR HMD. What happens if Xi decides one day he wants to use it to try to zero-day as many PCs as possible on the same network? ByteDance isn't in charge of anything.
That's so ironic :rolleyes: Also not tech-related, so...
 

pug_s

Distinguished
Mar 26, 2003
I’d stay away from BYD products; their quality control is poor, and they have been accused multiple times of using slave labor in the manufacturing of their products.


At least BYD doesn't have safety problems like Tesla does in China, where the brakes don't work.
 

TheOtherOne

Distinguished
Oct 19, 2013
GPU with backdoor to feed data to Xi Jinping, nice!
Awwww, how cute! Sounds like you feel safe pretending your data has never been and will never be fed to corporations and governments by [insert name of the American/European company] 😍🥰🥳💃🕺

If anyone's concern about these GPUs is not performance and quality but rather "OMG IT'S FROM CHINA" privacy fears, then those folks are exactly the suckers that any government or corporation anywhere in the world would love to have as customers. 😂
 
Jul 7, 2022

At least BYD doesn't have safety problems like Tesla does in China, where the brakes don't work.
So you think one Tesla crash is enough to completely cancel out BYD’s use of slave labor? You’ve got some eff’ed-up morals.
 

thisisaname

Distinguished
Feb 6, 2009
I am somewhat puzzled by the need for "game drivers" rather than just a driver for Windows.
Games under Windows are written for DirectX, so as long as the graphics card has a Windows driver for DirectX, games should work?
Could someone educate me on why this is not so?
 

InvalidError

Titan
Moderator
I am somewhat puzzled by the need for "game drivers" rather than just a driver for Windows.
Games under Windows are written for DirectX, so as long as the graphics card has a Windows driver for DirectX, games should work?
Could someone educate me on why this is not so?
Different driver builds contain optimizations for different things.

In Nvidia's case, the gaming drivers are for people who want support for the newest games as soon as possible while the Studio drivers are for people who prioritize stability over support for new games and features.
 

thisisaname

Distinguished
Feb 6, 2009
Different driver builds contain optimizations for different things.

In Nvidia's case, the gaming drivers are for people who want support for the newest games as soon as possible while the Studio drivers are for people who prioritize stability over support for new games and features.

I see, so the games should still run without an optimized driver, just not as well as they could?
 

InvalidError

Titan
Moderator
I see, so the games should still run without an optimized driver, just not as well as they could?
Not just performance. For Studio drivers, which are intended for production-oriented environments, you likely also get more thorough regression testing to ensure new drivers don't introduce artifacts. Nothing gets hurt by a flickering transparent layered texture in a game, whereas days of work may need to be redone if a new bug gets introduced into a production environment before it's spotted.
 

bit_user

Polypheme
Ambassador
I am somewhat puzzled by the need for "game drivers" rather than just a driver for Windows.
Games under Windows are written for DirectX, so as long as the graphics card has a Windows driver for DirectX, games should work?
Could someone educate me on why this is not so?
"Game-ready" drivers involve AMD engineers profiling new & popular games and optimizing the drivers specifically for them. That could mean optimizing a codepath in the driver that other games don't typically use, but I believe it can even involve effectively rewriting some of a game's shaders.
 

bit_user

Polypheme
Ambassador
Not just performance. For Studio drivers, which are intended for production-oriented environments, you likely also get more thorough regression testing to ensure new drivers don't introduce artifacts. Nothing gets hurt by a flickering transparent layered texture in a game, whereas days of work may need to be redone if a new bug gets introduced into a production environment before it's spotted.
No... they don't use Direct3D for rendering Pixar movies.

As for bugs, I think game-ready drivers contain game-specific bug fixes as well as optimizations. Sometimes the bug is actually in the game, and the driver fix is merely a workaround. This most often happens because the other brand's driver has a bug or quirk that the game depends on.

In the Linux/OpenGL world, AMD spent a long time making their drivers "bug-compatible" with Nvidia's. Sadly, this was needed mostly for professional applications that were developed solely using Nvidia's drivers.
 

InvalidError

Titan
Moderator
No... they don't use Direct3D for rendering Pixar movies.
It doesn't matter what API you use; new bugs are still highly problematic in a production environment, and that is why production machines sometimes don't get touched for the remainder of their service life once set up, to avoid needing a re-tune after updates. The texture layering issue was just a simple example of an obvious bug that mostly doesn't matter for games, one most people have likely seen many times and can easily imagine not being remotely acceptable for any sort of production output.
 

bit_user

Polypheme
Ambassador
The texture layering issue was just a simple example of an obvious bug that mostly doesn't matter for games, one most people have likely seen many times and can easily imagine not being remotely acceptable for any sort of production output.
It's a nonsensical example, because what you mean by "production" lives in a different universe than the software stacks and rendering techniques used for games. Nobody is using hardware-accelerated, textured polys for feature films, or even special effects in television productions.

Sure, at some level there's some device-driver functionality that's common to both gaming and non-gaming code (to the extent anyone even does production rendering on Windows, which I doubt), but that's going to be separate from the D3D driver code that contains the rendering support and the fancy optimizations used by games.

[Image: dx10arch.png — Windows display driver model architecture diagram]

https://learn.microsoft.com/en-us/w...a-and-later-display-driver-model-architecture
 

InvalidError

Titan
Moderator
It's a nonsensical example, because what you mean by "production" lives in a different universe than the software stacks and rendering techniques used for games. Nobody is using hardware-accelerated, textured polys for feature films, or even special effects in television productions.
What people use for hardware-accelerated rendering doesn't matter. The fact that new bugs introduced in production drivers can wreak havoc on a production environment is what matters. As I stated earlier, I used the texture-layering example for illustrative purposes only, as something every gamer is likely familiar with. It was never meant to represent actual production, where far more subtle bugs in OpenCL, CUDA, or whatever else, which may not be immediately obvious, could be far more devastating.
 

bit_user

Polypheme
Ambassador
What people use for hardware-accelerated rendering doesn't matter. The fact that new bugs introduced in production drivers can wreak havoc on a production environment is what matters.
Your point is a non-sequitur for multiple reasons:
  1. The volatile bits of driver code where the game-specific fixes & optimizations occur would be in the user-mode display driver, not touching the parts used by the GPU compute APIs used for production rendering.
  2. Almost nobody is doing production rendering on Windows.
  3. If you're using GPU compute, you basically have to update your driver to get support for the latest userspace runtime of the compute API. There's limited cross-version compatibility between Nvidia's CUDA runtimes and their drivers (a quick way to check the installed driver/runtime versions is sketched below). Of course, nobody is making you upgrade CUDA, but the incentive for doing so is to benefit from optimizations and new features in newer versions of higher-level APIs, like TensorRT and OptiX.
  4. As you upgrade OS kernels, you're also sometimes pushed to upgrade drivers, because there are also limitations on cross-version compatibility. OS kernel upgrades are needed for security, features, and performance.
You probably wouldn't upgrade these components as often as gamers upgrade their drivers, but it's still a case where you'd typically upgrade them several times over the service life of the machine.
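
As a concrete example of the version coupling in point 3, here's a small Python sketch that queries both the CUDA version the installed driver supports and the version of the runtime library itself. It assumes a Linux box with libcudart installed; the library name and the bare-bones error handling are assumptions, not a canonical recipe:

import ctypes

# Load the CUDA runtime library (name/path assumed; on Windows it would be a cudart DLL).
cudart = ctypes.CDLL("libcudart.so")

driver_ver = ctypes.c_int(0)
runtime_ver = ctypes.c_int(0)

# Both functions are part of the CUDA Runtime API; they return 0 (cudaSuccess) on success.
assert cudart.cudaDriverGetVersion(ctypes.byref(driver_ver)) == 0
assert cudart.cudaRuntimeGetVersion(ctypes.byref(runtime_ver)) == 0

def decode(v: int) -> str:
    # CUDA encodes versions as 1000*major + 10*minor, e.g. 11080 -> 11.8
    return f"{v // 1000}.{(v % 1000) // 10}"

print("Max CUDA version supported by the driver:", decode(driver_ver.value))
print("CUDA runtime (libcudart) version:        ", decode(runtime_ver.value))
# If the runtime reports a newer version than the driver supports, compute apps
# built against that runtime generally won't run until the driver is upgraded.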
 

InvalidError

Titan
Moderator
Almost nobody is doing production rendering on Windows.
That doesn't change my point: a production environment may still be severely affected by any new bug anywhere in the GPU/CUDA/OpenCL/whatever other driver components, regardless of whether it is used on Unix, Linux, BSD, or Windows, for a render farm, supercomputing, or anything else.
 

ManDaddio

Reputable
Oct 23, 2019
Bring it on. As with the car industry and the coming rush of solid Chinese EVs, this sounds like good news. RTX 3060 Ti performance should cost about $250 retail with a useful margin. I've owned nothing but Nvidia cards since my Voodoo 2. But I won't miss that company if it disappears in the next few years. History is littered with companies that got to the top, took their customers for granted, and then got eaten by people trying harder. Strange to say in 2022, but I'd rather give my automotive dollars to BYD, and if this GPU mob is serious, I'll give them a go too.
Nvidia isn't going to fade into oblivion. Worst case, it would be bought out by someone else if it ever came to that. That doesn't mean the technology disappears; it just means the company is absorbed. Nvidia is a business that does a lot of research and development. They have to turn a profit for investors. That's how it works when you don't steal money from taxpayers the way governments love to do, and citizens around the world seem to love letting them.

A VAT on top of taxing people 40 to 50% for their free health care and free school. This is a truly insane world we live in today.
 

ManDaddio

Reputable
Oct 23, 2019
I wouldn't count on these guys bringing anything great to the table anytime soon.
Just based on what you wrote in your article, it seems like they are a long way from being what we would consider mainstream.
Drivers would be the biggest deal here.
Even up until recently, AMD was struggling with its drivers.

And I don't even think we need to talk about Intel.

Now we have some Chinese company wanting to jump on the scene. A lot of those companies cheat their way into the mainstream. I won't say anything more about that but I'm sure you know what I mean.
 