New homegrown China server chips unveiled with impressive specs — Loongson's 3C6000 CPU comes armed with 64 cores, 128 threads, and performance to...

I'm seriously starting to wonder if Tom's has been bought by the Chinese government. Every day there's some new BS story about some phony breakthrough in China. 90% of them are greatly exaggerated or just outright false.

Can you point out when Loongson's spec claims have been wildly false? Did you buy and test them? Can you point us to some reviews online?
 
They will power it with coal power plants. Efficiency and green concerns are not an issue.
Education should be an issue for you, though. 38% of electricity is generated from green energy in China. The carbon intensity of electricity generation in China was 560g of CO2/kWh in 2024, which was at the same level as the U.S. in 2010. Where were you back then, criticizing the environmental impact of iPhones? Or is your greenwashing BS reserved only for China?
 
I'm seriously starting to wonder if Tom's has been bought by the Chinese government. Every day there's some new BS story about some phony breakthrough in China. 90% of them are greatly exaggerated or just outright false.
It's truly sad to see individuals blinded by hate against China becoming anti-tech advocates. Living in fear and spreading hate will not bring anything positive to your life.
 
You are wasting your time; these anti-tech-innovation sore losers don't want to see reality. They prefer to believe lies to feel safer.
This site is about learning and sharing of tech news and information. You're right that some people are unreachable, but the effort isn't wasted on many of us.

To a large degree, most of the critical comments are based on reporting what amounts to PR without substance. If there were substance, that would address such complaints. So, I'd ask that anyone with good information to share about any of these Chinese tech products please do so. Things like: detailed specs or technical information, independent benchmarks, independent reviews, etc.

IMO, you can't really defend against nationalism with more nationalism. That's not going to win over anyone and will do no good in the attempt.
 
So much Western copium. This is a huge breakthrough for a country that is playing catch-up; people seem to gloss over the fact that AMD and Intel were in the same boat just a few years back.
China is a massive market, and these sanctions are killing Intel's and AMD's share of it, while forcing China to divert resources to cover that gap, which in turn promotes innovation.
China is already catching up in a lot of fields; last I heard they had just dipped into 2nm and the results were heading in the right direction.
"Breakthroughs" are magnitudes easier to achieve when you're starting from garbage and have the competition to copy from. The only western tech company that gets praise for copying the competition is AMD. Everyone else has to develop their own breakthroughs to get praise, and most of the time they don't get it then.
 
The only Western tech company that gets praise for copying the competition is AMD. Everyone else has to develop their own breakthroughs to get praise, and most of the time they don't get it even then.
Spoken like a disgruntled Intel shareholder!
:D

There are few truly novel ideas in computing. Most of the time, what really matters is execution. That's what AMD has nailed, and it's probably the main thing that sets the modern AMD apart from the AMD of 10-15 years ago.
 
Spoken like a disgruntled Intel shareholder!
:D

There are few truly novel ideas in computing. Most of the time, what really matters is execution. That's what AMD has nailed, and it's probably the main thing that sets the modern AMD apart from the AMD of 10-15 years ago.
Spoken like someone who thinks they're clever and snarky, but who, as usual, missed the point the poster was making.

Most of the innovations on the CPU side now are in the packaging, something AMD has nothing to do with, since they don't manufacture anything after the failure of their spun-off manufacturing arm. AMD's only real innovation over the last decade is 3D V-Cache. Except for the fact that it wasn't their innovation; it's TSMC's. AMD put a marketing name on TSMC's work and takes all the credit for something they played no role in developing. People constantly ask why Intel doesn't copy AMD. They don't have to; anyone can license the 3D stacking technology from TSMC. Intel has chosen not to, as they don't see a profitable enough market for such a niche product.

Then we move over to the GPU side, and things get really ugly. Real-time ray tracing in modern games, G-Sync, DLSS, Reflex, frame generation, ray reconstruction, GPUDirect Storage: pretty much everything driving the gaming graphics industry forward is coming from Nvidia. The irony being that Nvidia was/is trashed for most of these features when they were released, and then AMD releases their inferior version and suddenly the feature is awesome and the reason you should buy AMD over Nvidia. It's a scenario that continues to play out over and over again. AMD gets credit for innovations that other companies brought to market first. Intel, Nvidia and TSMC never benefit in the same way.
 
Most of the innovations on the CPU side now are in the packaging, something AMD has nothing to do with, since they don't manufacture anything after the failure of their spun-off manufacturing arm. AMD's only real innovation over the last decade is 3D V-Cache. Except for the fact that it wasn't their innovation; it's TSMC's.
If it were as you say, they wouldn't have been way out in front of everyone else in their use of chiplets and 3D cache.

AMD partnered with TSMC to co-develop these things. AMD also had to do all the design work to make chiplet communication efficient and to avoid significant penalties when using 3D cache. From Intel's first experiments with chiplet-based CPUs, especially with their superior EMIB technology, we can now appreciate that it's not exactly trivial to get the design right.

They don't have to; anyone can license the 3D stacking technology from TSMC.
Yeah, now that they had early partners, like AMD, who helped them refine it. Even then, it's not like you just click a checkbox in your chip layout tool.

Intel has chosen not to, as they don't see a profitable enough market for such a niche product.
That's backwards logic: "Intel isn't doing it, so it must not be smart or worthwhile." I think the real answer is that Intel is still trying to master chiplets, as we see from Arrow Lake's underwhelming performance. They'd better figure out how to do that right, before adding any more complexity.

Then we move over to the GPU side, and things get really ugly. Real-time ray tracing in modern games, G-Sync, DLSS, Reflex, frame generation, ray reconstruction,
AMD's GPU situation mirrors where their CPU situation was about 5 years ago. They had fallen behind their competition and were steadily catching up. The big difference is that Nvidia never took its foot off the gas like Intel did (i.e. due to its 10 nm problems).

With that said, AMD used HBM on their GPUs long before Nvidia did. AMD went big with on-die caches in RDNA2, yet it took Nvidia until RTX 4000 to match them on it. AMD went big on FP64 with CDNA, which Nvidia didn't match until Hopper. Also, speaking of CDNA, AMD's Matrix cores supported all data types well before Nvidia's Tensor cores went there.

GPUDirect Storage,
GPUDirect is somewhat of a gimmick, in that it only works in very specific hardware setups. For its part, AMD had SSG back in 2016.

The irony being that Nvidia was/is trashed for most of these features when they were released,
No, pretty much just DLSS. And that was pretty bad, until version 2.0.

Ray tracing on RTX 2000 was pretty much a gimmick, like it was on RDNA2, except even fewer games supported it back then.

G-Sync was good, but Nvidia kept it as a proprietary standard for way too long, and even when it opened up the certification, it still didn't let partners implement the full feature set in monitors that didn't have Nvidia's chip.

AMD gets credit for innovations that other companies brought to market first.
That's not what I see, but maybe we run in different crowds.
 
AMD's only real innovation over the last decade is 3D Vcache.
Which is something they accidentally discovered was meaningful for client parts (bit_user already explained the co-design aspect). I don't think this was anywhere near as innovative as their chiplet strategy. They've shared enterprise and client chiplets, which allows them to gain benefits from each side of the equation (3D V-Cache and AVX-512 may not have ever graced client without this). AMD also streamlined their CPU design much sooner than Intel did, which has allowed them to do revisions faster.
Real-time ray tracing in modern games, G-Sync, DLSS, Reflex, frame generation, ray reconstruction, GPUDirect Storage: pretty much everything driving the gaming graphics industry forward is coming from Nvidia. The irony being that Nvidia was/is trashed for most of these features when they were released, and then AMD releases their inferior version and suddenly the feature is awesome and the reason you should buy AMD over Nvidia. It's a scenario that continues to play out over and over again.
I can't think of a single thing you've listed that people said one should buy AMD over Nvidia to get. AMD has certainly gotten praise for the more open nature of things like VRR, upscaling and frame generation, but none of those have been better than Nvidia's implementations.
 
I don't think this was anywhere near as innovative as their chiplet strategy.
I don't know about that, but it certainly wasn't as fundamental to their successful CPU business strategy. What it did accomplish was letting them put a feather in their cap and winning them a position of prestige among gamers.

On the Intel side of the ledger, I credit their hybrid CPU strategy. However, this is tarnished by their over-reliance on Thread Director and whatever software thing they launched with Raptor Lake Refresh (APO, I think) that would tune thread scheduling for the specific games it was trained on.

Basically, Intel's mistake was to try and solve the hybrid CPU scheduling complexity entirely in the hardware and OS, but that's really backwards. I know they faced the challenge of accelerating today's software, but they shouldn't have given up on the long-term approach of working to update threading APIs, so that apps could do their part.
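
To make it concrete, here's roughly what "apps doing their part" can already look like on Windows. A minimal sketch using the public power-throttling hint (EcoQoS), which is about the only intent signal apps have today, not the richer threading API I'd want:

```c
// Minimal sketch: a thread volunteers itself as background/throughput work.
// On hybrid parts, Windows steers EcoQoS threads toward the E-cores.
#include <windows.h>
#include <stdio.h>

static void mark_current_thread_efficiency_ok(void)
{
    THREAD_POWER_THROTTLING_STATE state = {0};
    state.Version     = THREAD_POWER_THROTTLING_CURRENT_VERSION;
    state.ControlMask = THREAD_POWER_THROTTLING_EXECUTION_SPEED;
    state.StateMask   = THREAD_POWER_THROTTLING_EXECUTION_SPEED; // opt in to throttling

    if (!SetThreadInformation(GetCurrentThread(), ThreadPowerThrottling,
                              &state, sizeof(state)))
        fprintf(stderr, "SetThreadInformation failed: %lu\n", GetLastError());
}

static DWORD WINAPI worker(LPVOID arg)
{
    (void)arg;
    mark_current_thread_efficiency_ok();
    /* ... churn through background work here ... */
    return 0;
}

int main(void)
{
    HANDLE h = CreateThread(NULL, 0, worker, NULL, 0, NULL);
    WaitForSingleObject(h, INFINITE);
    CloseHandle(h);
    return 0;
}
```

Note that this only lets a thread say "deprioritize me"; there's no equally simple way for a latency-sensitive thread to describe what it actually needs, which is the gap I'm talking about.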

AVX-512 may not have ever graced client without this
Who can say, since even the Zen 4 APUs added support for it? Maybe they only did that because their desktop processors supported it, but maybe they'd have done it anyway, since the hard part is the initial implementation and it doesn't really cost that much to leave it in the mobile version of the core.
 
What it did accomplish was letting them put a feather in their cap and winning them a position of prestige among gamers.
That was effectively an unintended side effect of their chiplet strategy. Its primary purpose hasn't seen anywhere near the same success.
Basically, Intel's mistake was to try and solve the hybrid CPU scheduling complexity entirely in the hardware and OS, but that's really backwards.
I think doing it with SMT may have also made this trickier than it otherwise could have been.
 
I think doing it with SMT may have also made this trickier than it otherwise could have been.
I disagree. As Intel has said, their E-cores are faster than pairing the thread with another one on the same P-core. It should therefore be no more difficult to schedule for than SMT, and it even provides a middle step between running a thread exclusively on a P-core vs. having it share a P-core with another. In fact, if Intel had done more to help latency-sensitive apps effectively use SMT, it should've carried over nicely to hybrid.

In Lunar & Arrow Lake, what Intel has instead done is just throw more hardware at the problem, by making the E-cores perform a lot more like P-cores. I've noticed some even call them "medium cores", rather than little ones. Imagine: if they'd done more to help games effectively use hybrid CPUs, maybe they could've kept the E-cores closer to Gracemont or Crestmont and squeezed 24 or 32 of them into Arrow Lake without much increase in die area!
 
I disagree. As Intel has said, their E-cores are faster than pairing the thread with another one on the same P-core. It should therefore be no more difficult to schedule for than SMT, and it even provides a middle step between running a thread exclusively on a P-core vs. having it share a P-core with another.
My train of thought is more about addressing the software side in a simpler fashion, rather than having to optimize scheduling for three potential thread locations.
In Lunar & Arrow Lake, what Intel has instead done is just throw more hardware at the problem, by making the E-cores perform a lot more like P-cores. I've noticed some even call them "medium cores", rather than little ones. Imagine: if they'd done more to help games effectively use hybrid CPUs, maybe they could've kept the E-cores closer to Gracemont or Crestmont and squeezed 24 or 32 of them into Arrow Lake without much increase in die area!
E-cores were never going to stay small with the designs Intel was using in mobile and in enterprise applications.
 
My train of thought is more about addressing the software side in a simpler fashion, rather than having to optimize scheduling for three potential thread locations.
It's not well-optimized for two. The existing core-affinity APIs are not a good solution, and I'm therefore not sure how many apps even use them.
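
For anyone who hasn't touched them, this is about all the existing affinity route amounts to on Linux (a minimal sketch; the assumption that CPU 0 is a P-core is exactly the kind of guess an app is forced to make, since there's no portable way to ask):

```c
// Minimal sketch of today's core-affinity API on Linux. The app must
// hard-code a CPU number and simply assume it maps to a P-core.
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

int main(void)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(0, &set);  // assumption: logical CPU 0 is a P-core on this machine

    int err = pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
    if (err != 0) {
        fprintf(stderr, "pthread_setaffinity_np: error %d\n", err);
        return 1;
    }
    printf("thread pinned to CPU 0\n");
    return 0;
}
```

There's no notion of "prefer a P-core" in there, just hard pinning, which breaks as soon as the topology changes.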

E-cores were never going to stay small with the designs Intel was using in mobile and in enterprise applications.
I know the rise of the E-core has turned more into the rise of the medium core, but PPA would be better with more Gracemont/Crestmont-style cores than with Skymont. I think a lot of the reason why Skymont improved quite so much was to reduce the scheduling hazard of a latency-sensitive thread landing on one. Also, since AMD hasn't gone to a true E-core model, there wasn't really pressure on Intel to stick with that.

One of these days, someone (besides Apple) is going to do a rethink of the concurrency model on modern operating systems and then we might see a revival of true E-cores.
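
For reference, Apple's rethink is that apps declare what kind of work a task is and the kernel owns the placement decision. A minimal libdispatch sketch (this compiles on macOS; the names are the real QoS API, but the E-core placement comment reflects typical Apple silicon behavior, not a guarantee):

```c
// Sketch of the QoS model: the app states the *kind* of work
// (background, utility, user-interactive); the kernel picks the core.
#include <dispatch/dispatch.h>
#include <stdio.h>

static void background_work(void *ctx)
{
    printf("low-urgency work; on Apple silicon this tends to land on E-cores\n");
    dispatch_semaphore_signal((dispatch_semaphore_t)ctx); // tell main we're done
}

int main(void)
{
    dispatch_semaphore_t done = dispatch_semaphore_create(0);

    // No core IDs, no affinity masks: just a QoS class.
    dispatch_queue_t q = dispatch_get_global_queue(QOS_CLASS_BACKGROUND, 0);
    dispatch_async_f(q, done, background_work);

    dispatch_semaphore_wait(done, DISPATCH_TIME_FOREVER);
    return 0;
}
```

That's the inversion I'd like to see on the PC side: the app supplies intent, and the scheduler, which actually knows the topology, decides placement.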
 
I think a lot of the reason why Skymont improved quite so much was to reduce the scheduling hazard of a latency-sensitive thread landing on one.
E-cores seem to be used a lot more for standard operations with ARL; whether that's due to necessity or bad software, I can't say. It is pretty wild to see overclocking the E-cores improve game performance in some titles. I don't recall hearing of that being the case with ADL/RPL, though that may also have been due to less headroom.
 
The company has been at this since 2001. Here's the thing: this is a completely Chinese supply chain, even more so than Huawei's CPUs. Those run on ARM's architecture; this runs on LoongArch, the company's own ISA. No, not x86; it's a RISC ISA. So will Microsoft support it? You think?? (...ahem rude).

It is a great achievement, and a necessary direction for China's autonomy and sovereignty.

Do regular Chinese people buy these to build PCs? No, they buy Intel and AMD, like you and me. It's for big business, government, and their cloud. The scale is large enough to sustain this push for autonomy.
 
The company has been at this since 2001. Here's the thing: this is a completely Chinese supply chain, even more so than Huawei's CPUs. Those run on ARM's architecture; this runs on LoongArch, the company's own ISA. No, not x86; it's a RISC ISA.
Just have to point out that it did borrow from MIPS, at least at the system level. When they first added support for it in the Linux kernel, they copied vast amounts of code straight out of the MIPS tree, to the point where the kernel maintainers wondered why they didn't just add it as another flavor of MIPS.
 
It's equivalent to older chips with fewer cores, and I'm assuming a rather large power draw and weaknesses in other areas that weren't tested or shown. It would be DOA in a market not incentivised to field them, like the West.
 
Just have to point out that it did borrow from MIPS, at least at the system level. When they first added support for it in the Linux kernel, they copied vast amounts of code straight out of the MIPS tree, to the point where the kernel maintainers wondered why they didn't just add it as another flavor of MIPS.
Borrow from? That sure is biased. They were all-in on MIPS; they built MIPS chips like AMD builds x86 or Apple builds ARM, and they made good headway in that area, but MIPS kind of pulled the rug out from under them. So they pivoted away.