News: Intel data center CPU sales hit the lowest point in 13 years

The article said:
Sales of Intel’s data center CPUs in 2024 hit their lowest level in more than a decade due to increased competition from AMD, the transition to higher-core count models amid a drop in the number of CPUs, and a market shift to AI servers that use up to eight GPUs and just two CPUs.
No mention of ARM? Amazon, Microsoft, and Google all have their own ARM-based server CPUs. That's got to be taking a bite out of the x86 server CPU sales volume, especially with more server workloads migrating from on-prem servers to cloud.

Also, nothing about China and their push to become self-reliant for their infrastructure? While their server CPUs don't yet seem very competitive, I'd imagine they're getting to a point where Intel and AMD are starting to see some tapering in Chinese demand for them.

It's these macro factors that are largely out of Intel's control. China isn't coming back to them, no matter what. The same is largely true of everyone who's already jumped on the ARM train. Intel's best chance with them would probably be to start designing its own ARM or RISC-V cores.
 
No mention of ARM? Amazon, Microsoft, and Google all have their own ARM-based server CPUs. That's got to be taking a bite out of the x86 server CPU sales volume, especially with more server workloads migrating from on-prem servers to cloud.

Also, nothing about China and their push to become self-reliant for their infrastructure? While their server CPUs don't yet seem very competitive, I'd imagine they're getting to a point where Intel and AMD are starting to see some tapering in Chinese demand for them.

It's these macro factors that are largely out of Intel's control. China isn't coming back to them, no matter what. The same is largely true of everyone who's already jumped on the ARM train. Intel's best chance with them would probably be to start designing its own ARM or RISC-V cores.
Yep, ARM and China are definitely factors as well; increased core-count CPUs and competition from AMD haven't driven Intel's server CPU sales down 50% in ~5 years by themselves. Cloud computing and AI datacenters have also grown rapidly, so while server racks and blades have been consolidating due to higher core counts, I'm pretty certain there's been a net increase in the number of CPUs required (and therefore sold) to fuel all that growth.
 
No mention of ARM? Amazon, Microsoft, and Google all have their own ARM-based server CPUs. That's got to be taking a bite out of the x86 server CPU sales volume, especially with more server workloads migrating from on-prem servers to cloud.

Also, nothing about China and their push to become self-reliant for their infrastructure? While their server CPUs don't yet seem very competitive, I'd imagine they're getting to a point where Intel and AMD are starting to see some tapering in Chinese demand for them.

It's these macro factors that are largely out of Intel's control. China isn't coming back to them, no matter what. The same is largely true of everyone who's already jumped on the ARM train. Intel's best chance with them would probably be to start designing its own ARM or RISC-V cores.
Can you provide a rundown of why x86 is losing market share to ARM/RISC-V? I'd be interested in your analysis of these architectures and their strengths and weaknesses.
 
No mention of ARM? Amazon, Microsoft, and Google all have their own ARM-based server CPUs. That's got to be taking a bite out of the x86 server CPU sales volume, especially with more server workloads migrating from on-prem servers to cloud.
Yeah, I think Intel really lost major opportunities to ARM, especially in the mobile processor market. Instead of letting ARM become the default ISA there, Intel could have kept investing more R&D into the ultra-low-power market that mobile needed. They did low power successfully before with the Pentium M, which became the basis of the dominant "Core" processors for desktop. But it felt like Atom was as low as they went, and ARM became successful even despite having a different instruction set than the more well-known x86. That was a lot of money they might have gotten over many years, had they been the dominant CPU in mobile.
 
No mention of ARM? Amazon, Microsoft, and Google all have their own ARM-based server CPUs. That's got to be taking a bite out of the x86 server CPU sales volume, especially with more server workloads migrating from on-prem servers to cloud.

Also, nothing about China and their push to become self-reliant for their infrastructure? While their server CPUs don't yet seem very competitive, I'd imagine they're getting to a point where Intel and AMD are starting to see some tapering in Chinese demand for them.

It's these macro factors that are largely out of Intel's control. China isn't coming back to them, no matter what. The same is largely true of everyone who's already jumped on the ARM train. Intel's best chance with them would probably be to start designing its own ARM or RISC-V cores.
Segment operating income (loss), Intel Products ($ millions):
Client Computing Group: $10,920
Data Center and AI: $1,338
Network and Edge: $931

They made roughly eight times as much money from client as they did from datacenter in 2024, and still almost five times as much if you also count edge.
I don't see much incentive for them to try in a market that, as you said yourself, is basically drowning in cheap ARM CPUs.
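Quick sanity check on those ratios (a minimal sketch, just plugging in the figures quoted above and assuming they are Intel's FY2024 segment operating income in millions of dollars):

# Ratios implied by the segment operating income quoted above.
client = 10_920   # Client Computing Group
dcai = 1_338      # Data Center and AI
nex = 931         # Network and Edge

print(client / dcai)          # ~8.2x: client vs. datacenter
print(client / (dcai + nex))  # ~4.8x: client vs. datacenter + edge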

In my opinion, that's why Intel tried to make their fabs appealing to ARM and RISC-V chip designers; they would make a lot more money manufacturing ARM CPUs for others.
 
This is what happens when you get comfortable at the top and charge for services that are already in the chip you pay an arm and a leg for (look out, automotive companies). Not only that, but while EPYC was running laps around Intel in most workloads, Intel's chips were still more expensive. I hope they turn it around for competition's sake, but I'm not hopeful.
 
Can you provide a rundown of why x86 is losing market share to ARM/RISC-V? I'd be interested in your analysis of these architectures and their strengths and weaknesses.
I'll take a swing at it.
The article said:
Cloud service providers increasingly prefer high core-count CPUs, thus reducing the number of processors and servers they deploy.
It turns out there's a distribution of how much processing power a task needs, and it probably follows some power law - zillions of tasks that just need any little processor for a fraction of a second, and a few that need powerful core(s) for minutes, hours, days, months, or years. Hence having a few E-cores can be a win. In fact, for a great many uses, a handful of E-cores is all they ever need. So any old core will do, even with less cache, sharing higher-level caches, etc. And really, today's E-cores can be very powerful compared to anything from ten, twenty, or fifty years ago!

So they might as well be whatever cores are the cheapest, and you can load up a bunch on the chip, because even these weak cores will be idle 95% of the time. So the design of a 100-core chip can take a lot of shortcuts - shortcuts that are a complete fail if all the cores try to be active at the same time.
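To put rough numbers on that claim (purely illustrative assumptions: 100 cores, each independently busy 5% of the time, which real workloads won't strictly follow), here's a quick Python sketch of how rarely many cores would be busy at once:

from math import comb

N_CORES = 100
P_ACTIVE = 0.05  # assumed per-core utilization, for illustration only

def prob_at_least(k):
    # Probability that k or more of the 100 cores are busy at the same
    # instant, treating each core as an independent coin flip.
    return sum(comb(N_CORES, i) * P_ACTIVE**i * (1 - P_ACTIVE)**(N_CORES - i)
               for i in range(k, N_CORES + 1))

print(prob_at_least(10))   # ~3%: already uncommon
print(prob_at_least(20))   # ~1e-7: effectively never
print(prob_at_least(100))  # 0.05**100: all cores busy at once just doesn't happen

Under those assumptions, a shared power or interconnect budget sized for, say, 20 simultaneously active cores would almost never be the bottleneck.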

Meanwhile, Intel has spent the last twenty years or so optimizing the balance of CPU and I/O features so they don't interfere and each core can be as effective as possible. Not to mention, that's how Microsoft charged for Windows Server products - by CORE. So one powerful core was more cost-effective than six or sixteen slower cores.
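A toy example of that per-core licensing math (made-up per-core prices and performance figures, not real Windows Server pricing):

LICENSE_PER_CORE = 100  # assumed per-core license cost, arbitrary units

configs = {
    "4 fast cores":  {"cores": 4,  "perf_per_core": 4.0},
    "16 slow cores": {"cores": 16, "perf_per_core": 1.0},
}

for name, c in configs.items():
    throughput = c["cores"] * c["perf_per_core"]
    license_cost = c["cores"] * LICENSE_PER_CORE
    print(name, throughput, license_cost / throughput)

# Both boxes deliver the same total throughput (16), but the 16-core one
# pays four times as much license cost per unit of throughput.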

Only now is Intel following the demand and loading cores onto chips - although they did offer products like that in previous years and decades, and nobody bought them. Go figure. Whether Microsoft has gone away from core pricing, I don't know; a quick glance at Google suggests they still license by core. Except that Microsoft doesn't have to price them that way for Azure, LOL. And big customers get unlimited licenses, so I presume AWS has the same dispensation.
 
This ship will never be able to steer back in the right direction. It will only get worse from now on.

They are not leaders in any field anymore. When the money to sustain those bribes to OEMs to maintain their mobile dominance is gone, the Intel dam will crumble and nothing will be able to save them.

AMD will take over all of their client business; it is going to be the end.
 
Can you provide a rundown of why x86 is losing market share to ARM/RISC-V? I'd be interested in your analysis of these architectures and their strengths and weaknesses.
ARM CPUs are believed to be more efficient in the cloud, where operating costs are a significant portion of overall costs. Also, ARM will license its IP to cloud operators, who can then order their CPUs made to their particular specifications. Since they're buying these CPUs directly from the fab (TSMC or Samsung, currently), that also reduces their purchase price.

RISC-V is a future threat for x86 and ARM, not a current one. Well, it's made big inroads into the low-power/embedded market, so it's already hurting ARM there.

Intel could design its own ARM-compatible or RISC-V cores, if it wanted to. I think they probably will, but maybe they're waiting until those markets mature just a bit more. If I had to guess, I'd say they'll skip ARM and go straight to RISC-V.
 
This is what happens when you get comfortable at the top, and charge for services that are already in the chip you pay an arm and a leg for (look out automotive companies). Not only that, while EPYC was running laps around Intel in most workloads, these chips were still more expensive. I hope they turn it around for competition sakes, but I’m not hopeful.
You don't need anything from Intel for competition. You forget that Intel acted in a monopolistic way and would do anything to prevent competition in the past.

They need to die; nobody will miss them. Another company is going to take over anyway. Nvidia desperately needs a CPU uarch, and they will bet on ARM, but ARM with Windows and MS is a mess. It will take 10 years for ARM to address compatibility issues across the whole software industry.

x86 is far from over; it runs incredibly well, with similar efficiency in many applications and better compute overall by a large margin.
 
They did low power successfully before with the Pentium M, which became the basis of the dominant "Core" processors for desktop. But it felt like Atom was as low as they went,
Actually, no. Intel tried making x86-powered microcontrollers, called Intel Quark. These were targeted at the IoT market, not phones.

That was a lot of money they might have gotten over many years, had they been the dominant CPU in mobile.
Oh, it wasn't for lack of trying.
 
ARM CPUs are believed to be more efficient in the cloud, where operating costs are a significant portion of overall costs. Also, ARM will license its IP to cloud operators, who can then order their CPUs made to their particular specifications. Since they're buying these CPUs directly from the fab (TSMC or Samsung, currently), that also reduces their purchase price.

RISC-V is a future threat for x86 and ARM, not a current one. Well, it's made big inroads into the low-power/embedded market, so it's already hurting ARM there.

Intel could design its own ARM-compatible or RISC-V cores, if it wanted to. I think they probably will, but maybe they're waiting until those markets mature just a bit more. If I had to guess, I'd say they'll skip ARM and go straight to RISC-V.
The problem with ARM is that there is no standard, and there is a massive number of different architectures from different companies. It is all over the place, and the software industry has zero interest in making the offerings more compatible when there is no need. Besides the big players making custom chips for their custom workloads, there is nothing compelling about ARM at the moment.

It will take a decade for a shift to occur. As long as the whole Microsoft environment is predisposed toward x86, ARM will never take off in PCs or mobile.
 
So they might as well be whatever cores are the cheapest, and you can load up a bunch on the chip, because even these weak cores will be idle 95% of the time. So the design of a 100-core chip can take a lot of shortcuts - shortcuts that are a complete fail if all the cores try to be active at the same time.
No, Sierra Forest (Intel's 144 E-core CPU) is definitely designed to run at 100% occupancy. Phoronix benchmarked it, and the results aren't bad; it competed well against Zen 4 and Zen 4C, especially on the integer workloads.

Unfortunately for Intel, AMD launched Zen 5 and Zen 5C not long after. Those took back the crown, with interest.

Only now is Intel following the demand and loading cores onto chips - although they did offer products like that in previous years and decades, and nobody bought them. Go figure.
Are you talking about Xeon Phi? Those cores were awfully weak at anything except vector workloads. They were basically designed to compete against Nvidia's datacenter GPUs, which they couldn't manage to do very well.