News China-Made Moore Threads GPU Can’t Match RTX 3060

Yes, it's using a CPU EPS12V connector rather than an 8-pin PEG (PCI Express Graphics) connector.
Is there a technical reason why GPUs use their own PCIe power cables (6-pin/8-pin) and not EPS power cables? I always found it stupid that GPUs use essentially the same connector (aside from the bit of plastic between the two pins) but different pinouts. A pair of EPS connectors would give you 600W instead of having to do 4x 8-pin PCIe power.
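
To put rough numbers on that claim (the per-connector budgets here are assumptions based on the commonly cited figures: ~300W per EPS12V 8-pin, 150W per PCIe 8-pin under the CEM spec):

Code:
# Connector power budgets (assumed, commonly cited figures):
#   EPS12V 8-pin: ~300 W per connector
#   PCIe 8-pin:   150 W per connector (CEM spec limit)
EPS12V_8PIN_W = 300
PCIE_8PIN_W = 150

target = 600  # watts the card needs from cables
eps_plugs = -(-target // EPS12V_8PIN_W)   # ceiling division -> 2
peg_plugs = -(-target // PCIE_8PIN_W)     # ceiling division -> 4
print(f"{target} W: {eps_plugs}x EPS12V 8-pin or {peg_plugs}x PCIe 8-pin")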
 
Best guess: Intel and others define the ATX specs, including EPS12V, while PCI-SIG is in charge of PCIe-related things like the 6-pin and 8-pin connectors. I'm guessing that back when EPS12V existed and PCIe was defining its 8-pin, the idea of a GPU needing up to 300W seemed absurd, so PCI-SIG opted for an alternative standard where dual 8-pin connectors could be used on a single harness. Note that, combined, that means most PSUs can push 300W through the dual connectors on a single harness.

Perhaps just as important, it also means PCIe can set PSU requirements where only 150W is needed on an 8-pin connector. Imagine if every PSU that supported even one 8-pin PEG connector needed to deliver 300W on that connector: all PSUs would have to be over 700W (one EPS12V for the CPU, one for the GPU, plus the 24-pin and other connectors). Today most decent desktop PCs do have 700W and higher PSUs, but the first PCIe 8-pin connectors came out maybe ten years ago, when 300W under load for an entire PC was pretty common.
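
As a rough illustration of that sizing argument (these line-item budgets are my own illustrative assumptions, not spec values):

Code:
# Hypothetical minimum PSU size if every PEG connector had to
# guarantee 300 W (illustrative budgets, not spec quotes).
budgets = {
    "EPS12V (CPU)": 300,            # one 8-pin for the CPU
    "PEG (GPU)": 300,               # the imagined 300 W-per-connector rule
    "24-pin + drives + fans": 150,  # rough allowance for everything else
}
total = sum(budgets.values())
print(f"Minimum PSU: ~{total} W")  # ~750 W, i.e. "over 700W"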
 
I agree, the whole thing seems pretty dumb. As Jarred mentioned, though, back when the PCIe aux connector was introduced, GPUs were barely over the 75W mark and often used floppy or HDD power connectors for extra power; adapters bridged the gap during the transition to native PCIe cables. The X700 card in my P4 is one of those that used a floppy connector for extra power.
 
I still have an old Thermaltake PSU that proudly advertises the NEW 6-pin PCIe connector. Which I did actually need at the time. GPU was just shy of 150W.

So many scary dual 4-pin to 6-pin PCIe adapters... I have piles of them, some still in their little bags.
 
Thanks, good thoughts there. Compatibility is the reason for a lot of the awkward workarounds we have to deal with. It would have been nice if they had designed it to be compatible with CPU power, though. Extending GPU power would then have been the same as extending CPU power, with no need to keep up two separate standards that are very similar but completely incompatible. The 6-pin GPU connector could have been a 4+2-pin connector that could also be used for the CPU if you needed more CPU power instead of GPU power.

It also could have made PSUs simpler: instead of every PSU brand having its own take on how its connectors work as they try to make them serve either CPU or GPU power, they all could have been 1-to-1 connectors, possibly making things more compatible than the mess that modular PSUs are now.
 

There are several reasons, one of which (possibly the biggest) being backwards compatibility. At the time the P4 connector (the progenitor of the modern 8-pin CPU connector) was introduced, graphics cards used 4-pin disk drive power connectors, either the larger "Molex" or the smaller "Berg" floppy connector. There was a clear and obvious distinction between the CPU and peripheral power connectors. When PCI-E was later introduced, a new power connector was created that could deliver more power, had a positive latching mechanism, and offered more consistent insertion/removal. The existing P4 connector couldn't be used because that was for the CPU and the CPU only; back then, the CPU would often have its own 12V rail on the power supply, and a similarly shaped 4-pin connector would've only led to confusion.

Instead, a new 6-pin connector was made with differently shaped pin housings so it couldn't (easily...) be inserted into anything but a PCI-E device. It had more 12V conductors (three instead of two), so in theory it could handle more power, too. Eventually, CPU and GPU power requirements kept growing: Intel made a new 8-pin connector to replace P4 while offering backwards compatibility, graphics cards started placing multiple 6-pin connectors on cards, and then PCI-SIG released a new 8-pin connector that was also backwards compatible with its 6-pin predecessor.

Long story short, while the two 8-pin connectors might look similar today, they have a different family history that keeps them incompatible.
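
To make the incompatibility concrete, here is a simplified side-by-side of the two 8-pin pinouts (an illustration with assumptions: sense pins are folded into "GND", and exact pin numbering varies by diagram; consult the actual specs for the real assignments). The 12V and ground rows are essentially swapped:

Code:
# Simplified 8-pin pinouts (sense pins treated as GND -- an assumption
# for illustration; see the EPS12V and PCIe CEM specs for exact details).
EPS12V_8PIN = ["GND", "GND", "GND", "GND", "12V", "12V", "12V", "12V"]
PCIE_8PIN   = ["12V", "12V", "12V", "GND", "GND", "GND", "GND", "GND"]

for pin, (eps, peg) in enumerate(zip(EPS12V_8PIN, PCIE_8PIN), start=1):
    clash = "  <-- mismatched" if eps != peg else ""
    print(f"pin {pin}: EPS={eps}  PCIe={peg}{clash}")
# Forcing one plug into the other socket would put 12V where the device
# expects ground -- which is why the keying (the "bit of plastic") differs.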
 
The founder of the company was Jensen's man in China, and two years after founding it they already have a product out! Another example of the Chinese "appropriating" tech secrets from Western companies that operated in China?
 
Were it not for the patent insanity surrounding everything, I bet there would be quite a few companies looking to get into the GPU business, where AMD and Nvidia are getting 60+% gross margins. The fundamental math of 3D rendering isn't rocket science; piecing together something capable of rendering is something even undergrads have done. Getting the performance and compatibility up to market expectations is a whole other ball game, though.

The Chinese GPUs may seem impressive for being put together so quickly, though they also mention that game compatibility is limited to about a dozen games; broad and mostly glitch-free compatibility may take a while, if it is ever achieved.
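
For a sense of how small that "fundamental math" really is, here's a toy sketch (my own illustrative code, nothing to do with Moore Threads): an edge-function triangle rasterizer, the basic coverage test real GPUs implement in hardware, stripped of everything that makes it hard (precision rules, clipping, shading, performance).

Code:
# Toy triangle rasterizer using edge functions.
def edge(ax, ay, bx, by, px, py):
    # Signed area of the parallelogram (A->B, A->P); the sign tells
    # which side of edge AB the point P falls on.
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

def rasterize(v0, v1, v2, width, height):
    pixels = []
    for y in range(height):
        for x in range(width):
            px, py = x + 0.5, y + 0.5  # sample at the pixel center
            w0 = edge(*v1, *v2, px, py)
            w1 = edge(*v2, *v0, px, py)
            w2 = edge(*v0, *v1, px, py)
            # Inside the triangle when all edge functions agree in sign
            if (w0 >= 0 and w1 >= 0 and w2 >= 0) or \
               (w0 <= 0 and w1 <= 0 and w2 <= 0):
                pixels.append((x, y))
    return pixels

covered = rasterize((1, 1), (14, 3), (6, 12), width=16, height=16)
print(f"{len(covered)} of 256 pixels covered")

The hard part, as noted above, is everything around this: drivers, API conformance, and making it fast.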
 
You are right, the software part is much more difficult to optimise for widespread use. My suspicion is that this company was mostly built to produce, in-house, the AI and facial-recognition tech they imported from Nvidia, for big-brother and military use.
 
It still takes great engineers to copy another product. To reverse engineer something, you still need to know engineering. Reverse engineering simply allows you to skip over a lot of the testing phase. Just because some Chinese engineers are great, that doesn't mean they will necessarily make a great product.
 