News: DeepSeek reportedly urged by Chinese authorities to train new model on Huawei hardware; after multiple failures, R2 training to switch back to Nvidia

According to the report, DeepSeek used Nvidia's PTX assembly to make more efficient use of the H20 and to hide communication latencies. While it's lucky for Nvidia that the company invested in significant hardware-specific performance engineering, it's somewhat surprising, given that level of expertise, that the Huawei hardware kept crashing and that those same software engineers couldn't make it perform well either.
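For anyone wondering what "dropping to PTX" and "hiding communication latency" actually look like, here's a minimal CUDA sketch. To be clear: this is not DeepSeek's code (which isn't public), just a toy illustration of the two general techniques the report mentions, with made-up sizes and names. It embeds one PTX instruction inline in a kernel and pipelines host-device copies against compute across two streams.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Toy kernel using inline PTX: issues a fused multiply-add directly,
// the kind of instruction-level control PTX gives beyond plain CUDA C++.
__global__ void fma_ptx(const float* in, float* out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        float r;
        // r = in[i] * 2.0f + 1.0f, round-to-nearest, as a single PTX fma
        asm("fma.rn.f32 %0, %1, %2, %3;"
            : "=f"(r) : "f"(in[i]), "f"(2.0f), "f"(1.0f));
        out[i] = r;
    }
}

int main() {
    const int n = 1 << 22, chunks = 4, chunk = n / chunks;  // illustrative sizes
    float *h_in, *h_out, *d_in, *d_out;
    cudaMallocHost(&h_in, n * sizeof(float));   // pinned memory: required for
    cudaMallocHost(&h_out, n * sizeof(float));  // copies to truly run async
    cudaMalloc(&d_in, n * sizeof(float));
    cudaMalloc(&d_out, n * sizeof(float));
    for (int i = 0; i < n; ++i) h_in[i] = 1.5f;

    // Two streams: while one chunk's kernel runs, the next chunk's transfer
    // proceeds in parallel, hiding copy latency behind computation.
    cudaStream_t s[2];
    cudaStreamCreate(&s[0]);
    cudaStreamCreate(&s[1]);

    for (int c = 0; c < chunks; ++c) {
        cudaStream_t st = s[c % 2];
        size_t off = (size_t)c * chunk;
        cudaMemcpyAsync(d_in + off, h_in + off, chunk * sizeof(float),
                        cudaMemcpyHostToDevice, st);
        fma_ptx<<<(chunk + 255) / 256, 256, 0, st>>>(d_in + off, d_out + off, chunk);
        cudaMemcpyAsync(h_out + off, d_out + off, chunk * sizeof(float),
                        cudaMemcpyDeviceToHost, st);
    }
    cudaDeviceSynchronize();
    printf("h_out[0] = %f (expect 1.5*2.0 + 1.0 = 4.0)\n", h_out[0]);
    return 0;
}
```

At DeepSeek's reported scale the same overlap idea applies to inter-GPU all-to-all traffic rather than host-device copies, which is where the hand-tuned PTX supposedly paid off.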
 
Now let's not forget who we're dealing with here: China, the country that supposedly graduates over seven times (!) as many engineers as the United States. Seven times the number of engineers! That should mean DeepSeek is seven times as fast as ChatGPT. Or their data centers pack seven times the compute power of ours. Or seven times the token acceptance rate. Or seven times as fast at bringing the next LLM to market...
But gee, China is not doing things seven times as well as we are. Hmm. Come to think of it, we're kicking China's ass, and we have been for quite some time now. In fact, it appears that Nvidia's servers run about seven times as efficiently as CloudMatrix, that row of junk computers Huawei has been bragging about lately. OpenAI has about seven times as many LLMs as DeepSeek, and together they offer over seven times the compute power... Plus, doesn't the United States have over seven times as many businesses in the AI race?
One thing I do know is that China has filed about seven times as many AI patents as the USA. Y'ever wonder where they file those Chinese patents? In the bin that sits right on top of the paper shredder. I heard Chinese toilet paper has "patented in China" printed on the wrapper. And no, that's not a brand name you're reading. BWAHAHAHA! 🤣🤣🤣
 