News Intel data center CPU sales hit the lowest point in 13 years

The article said:
Sales of Intel’s data center CPUs in 2024 hit their lowest level in more than a decade due to increased competition from AMD, the transition to higher-core count models amid a drop in the number of CPUs, and a market shift to AI servers that use up to eight GPUs and just two CPUs.
No mention of ARM? Amazon, Microsoft, and Google all have their own ARM-based server CPUs. That's got to be taking a bite out of the x86 server CPU sales volume, especially with more server workloads migrating from on-prem servers to cloud.

Also, nothing about China and their push to become self-reliant for their infrastructure? While their server CPUs don't yet seem very competitive, I'd imagine they're getting to a point where Intel and AMD are starting to see some tapering in Chinese demand for them.

It's these macro factors that are largely out of Intel's control. China isn't coming back to them, no matter what. The same is largely true of everyone who's already jumped on the ARM train. Intel's best chance with them would probably be to start designing its own ARM or RISC-V cores.
 
No mention of ARM? Amazon, Microsoft, and Google all have their own ARM-based server CPUs. That's got to be taking a bite out of the x86 server CPU sales volume, especially with more server workloads migrating from on-prem servers to cloud.

Also, nothing about China and their push to become self-reliant for their infrastructure? While their server CPUs don't yet seem very competitive, I'd imagine they're getting to a point where Intel and AMD are starting to see some tapering in Chinese demand for them.

It's these macro factors that are largely out of Intel's control. China isn't coming back to them, no matter what. The same is largely true of everyone who's already jumped on the ARM train. Intel's best chance with them would probably be to start designing its own ARM or RISC-V cores.
Yep, ARM and China are definitely factors as well; increased core-count CPUs and competition from AMD haven't driven Intel's server CPU sales down 50% in ~5 years by themselves. Cloud computing and AI datacenters have also grown rapidly, so while server racks and blades have been consolidating due to higher core counts, I'm pretty certain that there's been a net increase in the number of CPUs required (and therefore sold) to fuel all that growth.
 
No mention of ARM? Amazon, Microsoft, and Google all have their own ARM-based server CPUs. That's got to be taking a bite out of the x86 server CPU sales volume, especially with more server workloads migrating from on-prem servers to cloud.

Also, nothing about China and their push to become self-reliant for their infrastructure? While their server CPUs don't yet seem very competitive, I'd imagine they're getting to a point where Intel and AMD are starting to see some tapering in Chinese demand for them.

It's these macro factors that are largely out of Intel's control. China isn't coming back to them, no matter what. The same is largely true of everyone who's already jumped on the ARM train. Intel's best chance with them would probably be to start designing its own ARM or RISC-V cores.
Can you provide a rundown of why x86 is losing market share to ARM/RISC-V? I'd be interested in your analysis of these architectures and their strengths and weaknesses.
 
No mention of ARM? Amazon, Microsoft, and Google all have their own ARM-based server CPUs. That's got to be taking a bite out of the x86 server CPU sales volume, especially with more server workloads migrating from on-prem servers to cloud.
Yeah, I think Intel really lost major opportunities to ARM, especially in the mobile processor market. Instead of ARM becoming the default ISA, Intel could have continued investing more R&D into the ultra-low-power market that mobile needed. They did low power successfully before with Pentium M, which became the dominant "Core" processors for desktop. But it felt like Atom was the lowest they went, and ARM became successful, even despite having a different instruction set than the more well-known x86. That was a lot of money they might have gotten over many years, if they were the dominant CPU in mobile.
 
  • Like
Reactions: atomicWAR
No mention of ARM? Amazon, Microsoft, and Google all have their own ARM-based server CPUs. That's got to be taking a bite out of the x86 server CPU sales volume, especially with more server workloads migrating from on-prem servers to cloud.

Also, nothing about China and their push to become self-reliant for their infrastructure? While their server CPUs don't yet seem very competitive, I'd imagine they're getting to a point where Intel and AMD are starting to see some tapering in Chinese demand for them.

It's these macro factors that are largely out of Intel's control. China isn't coming back to them, no matter what. The same is largely true of everyone who's already jumped on the ARM train. Intel's best chance with them would probably be to start designing its own ARM or RISC-V cores.
Segment operating income (loss), in $ millions:
Intel Products:
Client Computing Group $ 10,920
Data Center and AI $ 1,338
Network and Edge $ 931

They made roughly eight times as much money from client as from data center in 2024, and still almost five times as much if you also count edge.
I don't see much incentive for them to try in a market that, as you said yourself, is basically drowning in cheap ARM CPUs.
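Running the quoted segment figures through a quick sanity check (amounts in $M, as reported above) gives the ratios:

```python
# Segment operating income quoted above, in $M (Intel's reported 2024 figures)
client = 10_920   # Client Computing Group
dcai = 1_338      # Data Center and AI
edge = 931        # Network and Edge

print(round(client / dcai, 1))          # client vs. data center -> 8.2
print(round(client / (dcai + edge), 1)) # client vs. data center + edge -> 4.8
```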

In my opinion, that's why Intel tried to make their fabs appealing to ARM and RISC-V makers; they would make a lot more money making ARM CPUs for others.
 
  • Like
Reactions: rluker5
This is what happens when you get comfortable at the top and charge extra for features that are already in the chip you paid an arm and a leg for (look out, automotive companies). Not only that: while EPYC was running laps around Intel in most workloads, Intel's chips were still more expensive. I hope they turn it around for competition's sake, but I'm not hopeful.
 
Can you provide a rundown of why x86 is losing market share to ARM/RISC-V? I'd be interested in your analysis of these architectures and their strengths and weaknesses.
I'll take a swing at it.
The article said:
Cloud service providers increasingly prefer high core-count CPUs, thus reducing the number of processors and servers they deploy.
It turns out there's a distribution of how powerful a task needs to be and it probably follows some power law - zillions that just need any little processor for a fraction of a second, few that need powerful core(s) for minutes, hours, days, months, or years. Hence having a few ecores can be a win. In fact, for a great deal of uses, a handful of ecores is all they ever need. So any old core will do, even with less cache, sharing higher-level caches, etc. And really today's ecores can be very powerful, compared to anything ten, twenty, fifty years ago!
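That power-law intuition is easy to visualize (a toy sketch; the Pareto shape parameter is made up for illustration):

```python
import random
import statistics

random.seed(42)

# Hypothetical task "compute demand" drawn from a heavy-tailed Pareto law:
# zillions of tiny tasks, a rare few enormous ones.
demands = [random.paretovariate(1.2) for _ in range(100_000)]

print(statistics.median(demands))  # the typical task is tiny...
print(max(demands))                # ...the biggest is orders of magnitude larger
```

With a tail like this, a pile of cheap ecores covers the overwhelming majority of tasks, and only a rare big job ever needs a fat core.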

So they might as well be whatever cores are the cheapest, and you can load up a bunch on the chip because even these weak cores will be idle 95% of the time, so the design of a 100-core chip can take a lot of shortcuts - shortcuts that are a complete fail if all the cores try to be active at the same time.

Meanwhile Intel has spent the last twenty years or so optimizing the balance of CPU and IO features so they don't interfere, so each core can be most effective. Not to mention that's how Microsoft charged for Windows server products - by CORE. So one powerful core was more cost-effective than six or sixteen slower cores.
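The per-core licensing math is easy to sketch (a toy example; the price is hypothetical, but the per-core model with a 16-core per-server minimum matches Microsoft's published Windows Server terms):

```python
PRICE_PER_CORE = 100  # hypothetical dollars per core license

def license_cost(cores: int, min_cores: int = 16) -> int:
    """Windows Server style: every core licensed, with a 16-core floor."""
    return max(cores, min_cores) * PRICE_PER_CORE

# For roughly the same total throughput, fewer fast cores cost far less to license:
print(license_cost(16))  # 16 fast cores -> 1600
print(license_cost(64))  # 64 slow cores -> 6400
```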

Only now is Intel following the demand and loading cores onto chips - although they did offer products like that in previous years and decades, and nobody bought them. Go figure. And whether Microsoft has gone away from core pricing, I don't know. A quick glance at Google suggests they still license by core. Except that Microsoft doesn't have to price them that way for Azure, LOL. And big customers get unlimited licenses, so I presume AWS has the same dispensation.
 
  • Like
Reactions: LibertyWell
This ship will never be able to steer back in the right direction. It will only get worse from now on.

They are not leaders in any field anymore. When the money to sustain those bribes to OEMs to maintain their mobile dominance is gone, the Intel dam will crumble and nothing will be able to save them.

AMD will capture all their business in client; it is going to be the end.
 
  • Like
Reactions: Peksha
Can you provide a rundown of why x86 is losing market share to ARM/RISC-V? I'd be interested in your analysis of these architectures and their strengths and weaknesses.
ARM CPUs are believed to be more efficient in the cloud, where operating costs are a significant portion of overall costs. Also, ARM will license its IP to cloud operators, who can then order their CPUs made to their particular specifications. Since they're buying these CPUs directly from the fab (TSMC or Samsung, currently), that also reduces their purchase price.

RISC-V is a future threat for x86 and ARM, not a current one. Well, it's made big inroads into the low-power/embedded market, so it's already hurting ARM there.

Intel could design its own ARM-compatible or RISC-V cores, if it wanted to. I think they probably will, but maybe they're waiting until those markets mature just a bit more. If I had to guess, I'd say they'll skip ARM and go straight to RISC-V.
 
  • Like
Reactions: LibertyWell
This is what happens when you get comfortable at the top and charge extra for features that are already in the chip you paid an arm and a leg for (look out, automotive companies). Not only that: while EPYC was running laps around Intel in most workloads, Intel's chips were still more expensive. I hope they turn it around for competition's sake, but I'm not hopeful.
You don't need anything from Intel for competition. You forget that Intel acted in a monopolistic way and would do anything to prevent competition in the past.

They need to die; nobody will miss them. Another one is going to take over anyway. Nvidia desperately needs a CPU uarch, and they will bet on ARM, but ARM with Windows and MS is a mess. It will take 10 years for ARM to address compatibility issues with the whole software industry.

x86 is far from over, and it runs incredibly well, with similar efficiency in many applications and better compute overall by a large margin.
 
They did low power successfully before with Pentium M, which became the dominant "Core" processors for desktop. But it felt like Atom was the lowest they went,
Actually, no. Intel tried making x86-powered microcontrollers, called Intel Quark. These were targeted at the IoT market, not phones.

That was a lot of money they might have gotten over many years, if they were the dominant CPU in mobile.
Oh, it wasn't for lack of trying.
 
  • Like
Reactions: rluker5
ARM CPUs are believed to be more efficient in the cloud, where operating costs are a significant portion of overall costs. Also, ARM will license its IP to cloud operators, who can then order their CPUs made to their particular specifications. Since they're buying these CPUs directly from the fab (TSMC or Samsung, currently), that also reduces their purchase price.

RISC-V is a future threat for x86 and ARM, not a current one. Well, it's made big inroads into the low-power/embedded market, so it's already hurting ARM there.

Intel could design its own ARM-compatible or RISC-V cores, if it wanted to. I think they probably will, but maybe they're waiting until those markets mature just a bit more. If I had to guess, I'd say they'll skip ARM and go straight to RISC-V.
The problem with ARM is there is no standard and there is a massive number of different architectures from different companies. It is all over the place, and the software industry has zero interest in making the offering more compatible when there is no need. Besides the big players making their custom chips for their custom workloads, there is nothing compelling with ARM atm.

It will take a decade for a shift to occur. As long as the whole Microsoft environment is predisposed to x86, ARM will never take off in PCs or mobiles.
 
  • Like
Reactions: LibertyWell
So they might as well be whatever cores are the cheapest, and you can load up a bunch on the chip because even these weak cores will be idle 95% of the time, so the design of a 100-core chip can take a lot of shortcuts - shortcuts that are a complete fail if all the cores try to be active at the same time.
No, Sierra Forest (Intel's 144 E-core CPU) is definitely designed to run at 100% occupancy. Phoronix benchmarked it, and the results aren't bad. It competed well against Zen 4 and Zen 4C, especially on the integer workloads:

Unfortunately for Intel, AMD launched Zen 5 and Zen 5C not long after. Those took back the crown, with interest.

Only now is Intel following the demand and loading cores onto chips - although they did offer products like that in previous years and decades, and nobody bought them. Go figure.
Are you talking about Xeon Phi? Those cores were awfully weak at anything except vector workloads. They were basically designed to compete against Nvidia's datacenter GPUs, which they couldn't manage to do very well.
 
  • Like
Reactions: thestryker
The problem with ARM is there is no standard and there is a massive number of different architectures from different companies.
I'm not sure what you're talking about, but ARM actually maintains the ISA standard and is very rigorous that all implementations be 100% faithful and not include any nonstandard extensions.
 
These issues are largely due to Intel being late to the core scaling party. There are plenty of reasons for this, but I think hubris is the underlying factor.

Qualcomm's delays getting Centriq off the ground (before it was canceled) likely led to executives dismissing the Arm threat. That time is when they needed to be formulating what the high core count/occupancy part response would be. I also don't think they thought AMD could scale up the cores so quickly, and that it would only take 3 generations to get a solid platform. By the time the writing was on the wall, their process node struggles ensured no timely response.

CWF is Intel's chance at retaining and even capturing more of the cloud market, but it really needs to launch sooner than later. I think whether or not Intel makes a pivot to another architecture for this segment hinges on CWF.

GNR should be good enough for the high performance segment, but a new core would go a long way. I don't think things are anywhere near as dire there now that they have core count parity. Something I don't think a lot of people outside of the industry consider is that the majority of server CPUs are middle of the stack. This was one of those places where AMD really stuck it to Intel with Zen 2 because Intel's top end was 28 cores and AMD's middle was 32.

The high core count systems are the ones replacing racks worth of older servers and I suspect with all of the CXL memory devices launching this year the rate of replacement will increase. It seems like CXL should be one of the biggest data center drivers over the next few years.
 
Actually, no. Intel tried making x86-powered microcontrollers, called Intel Quark. These were targeted at the IoT market, not phones.


Oh, it wasn't for lack of trying.
I blame Microsoft for that. In my closet I have both a Lumia 930 and a Leagoo T5c. I also have a Dell Venue 11 Pro 5130 that I use to stream video while doing dishes and, once a year, also as an e-reader and slow web browser while on vacation.
Back at the peak of the cheap Atom tablet boom, Windows Mobile was also a thing. So was Windows RT on ARM.

Microsoft let Windows Mobile die. It died from lack of apps. My 930 is snappy and fluid with what is left; RT could have brought more compatibility with a relative sea of apps.

Even better, MS could have made a stripped-down version of Windows 8 running in tablet mode and ported over their great-performing basic phone apps. Intel already had stronger CPUs in the Zenfone 2 and T5c than I have in my 5130. And the battery life in those 2 phones while emulating ARM chips was just fine per reviews (although my 8-core Airmont T5c did drain the battery faster than a normal phone doing very light tasks like streaming audio, and now trying to run outdated Android with no compatibility updates on the thing for at least half a decade is an exercise in frustration).

Intel could never beat ARM while emulating it in a low power scenario, but they did have the hardware ready to run desktop Windows or Linux in a phone. Linux isn't popular enough, but MS could have had a lasting mobile presence and chose not to. With x86 Windows phones out there, I'm sure somebody would have been able to customize a Linux build to run on them. The more efficient use of hardware would have been compelling.

Man, I wanted a real Windows phone. The handheld market is an indicator that there is some demand for very mobile systems that do more than Android or iOS stuff.

Edit: the Z3580 in the Zenfone 2 is a little slower than the Z3775 in my 5130, but is fairly comparable.
 
Last edited:
These issues are largely due to Intel being late to the core scaling party.
I note your use of the word "largely", but here's a counterpoint to the notion that even their recent dip is really just about core counts. One matchup I hope you'd agree is fair: Xeon 6980P (Granite Rapids, 128 cores) vs. EPYC 9755 (Turin, 128 cores). SMT and full AVX-512 in each. Even the process nodes are similar, with Intel at an apparent advantage: Intel 3 vs. TSMC N4.


https://www.phoronix.com/review/amd-epyc-9965-9755-benchmarks#page-14

Intel even had a significant memory bandwidth advantage, via their MRDIMM DDR5-8800 memory. As noted in the article, something weird is going on with Intel's dual-CPU scaling, but you can just ignore that and focus on the 1P vs. 1P case, where AMD is still a whopping 18.4% faster. They average similar power, with Intel at 331.8 W vs. 324.1 W for the AMD CPU.
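Folding those two numbers together (a back-of-the-envelope sketch using only the figures cited above):

```python
# 1P geomean: EPYC 9755 is 18.4% faster; average power as reported by Phoronix
amd_perf_rel = 1.184   # relative throughput (Xeon 6980P = 1.0)
intel_watts = 331.8
amd_watts = 324.1

gain = (amd_perf_rel / amd_watts) / (1.0 / intel_watts)
print(f"{(gain - 1) * 100:.1f}%")  # ~21% perf-per-watt advantage for AMD
```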

That said, we can see some prior generations where Intel is faster on a per-core basis. However, the article notes that Intel's unit sales still decreased in the Xeon 6 era, even though it was competing against the older Zen 4-based EPYCs for part of the last quarter. The article blames this on customers simply not needing as many CPUs as before, due to their higher core counts (to paraphrase: "they're just too good!" LOL).

"unit sales of Intel server CPUs declined slightly once again as customers switched to high-core-count models."

The article then notes that ASPs have increased, but the latest news on that front is Intel slashing prices. So, we'll see how ASPs hold up and whether they can drive enough unit sales to make up for it.

There are plenty of reasons for this, but I think hubris is the underlying factor.
For Ice Lake, Intel probably planned it way back when they thought it'd be competing against 32-core EPYC. So, I can give them a pass on that one.

Sapphire Rapids marked an attempt to shift strategy so they're no longer competing on raw core count, but instead have special-purpose accelerators to give them an edge. It didn't seem to work out very well overall, though I'm sure it was enough to maintain a lead in some specific workloads. Ultimately, Sapphire Rapids was a victim of its own ambition, with Intel going slightly nuts on the whole "tile" thing.

Emerald Rapids was really what Sapphire Rapids should've been, but it came too late and couldn't keep up against AMD's 96-core Zen 4 Genoa or its 128-core Zen 4C Bergamo.

Qualcomm's delays getting Centriq off the ground (before it was canceled) likely led to executives dismissing the Arm threat. That time is when they needed to be formulating what the high core count/occupancy part response would be.
I think it's precisely around that time when they must've started planning the 144-core Sierra Forest. If we go back and look at their roadmaps, Sierra Forest probably appeared too soon to be a response to Ampere Altra.

I also don't think they thought AMD could scale up the cores so quickly, and that it would only take 3 generations to get a solid platform.
I agree with this up to 2019's launch of the Zen 2-based Rome EPYC. After this, it should've been clear that AMD could easily scale further. The CCD chiplets were relatively small, so even if they just increased them to 12 cores, such a CPU would've been pretty viable way back then. Probably the main thing holding it back was just DRAM capacity.

Intel's hubris of being able to out-scale AMD took the form of making their cores much bigger, which really hurt them in the core-scaling race. Once they settled on their P-core vs. E-core strategy, I think they lost any remaining reservations and just went nuts on core size (e.g. AMX). However, their E-core Xeons were too far behind to properly support this strategy.

CWF is Intel's chance at retaining and even capturing more of the cloud market, but it really needs to launch sooner than later. I think whether or not Intel makes a pivot to another architecture for this segment hinges on CWF.
It's going to launch later:


Also, I disagree it's the final make-or-break for Intel. They could still switch ISA. Even if they skip ARM, due to licensing costs & other concerns, RISC-V should be a viable option and they could take an early lead in that segment. It'd be fascinating if AMD jumps on the ARM bandwagon, while Intel joins the wave of RISC-V upstarts trying to rain on their parade.

Something I don't think a lot of people outside of the industry consider is that the majority of server CPUs are middle of the stack. This was one of those places where AMD really stuck it to Intel with Zen 2 because Intel's top end was 28 cores and AMD's middle was 32.
Yeah, it's worth looking at how their respective value scales down. Also, AMD and Intel have both responded to this need with a smaller socket to help reduce platform costs. Somehow, I'm doubtful it's going to look much different.

The high core count systems are the ones replacing racks worth of older servers and I suspect with all of the CXL memory devices launching this year the rate of replacement will increase. It seems like CXL should be one of the biggest data center drivers over the next few years.
It'll be interesting to watch.
 
  • Like
Reactions: thestryker
Intel already had stronger CPUs in the Zenfone 2 and T5c than I have in my 5130. And the battery life in those 2 phones while emulating ARM chips was just fine per reviews (although my 8-core Airmont T5c did drain the battery faster than a normal phone doing very light tasks like streaming audio, and now trying to run outdated Android with no compatibility updates on the thing for at least half a decade is an exercise in frustration).

Intel could never beat ARM while emulating it in a low power scenario
I don't understand this focus on ARM emulation, at all. Phone apps are basically never natively-compiled. They use bytecode that's JIT-compiled for the native CPU when the app gets installed on the phone. There are a handful of exceptions (mostly games), which can bundle some native code. However, the Android NDK had full support for multi-targeting native code at x86, AArch32, and AArch64. TBH, I'm not even sure if Android will emulate native code, at all!

The OS and all of the core runtime libraries would be natively compiled. For the vast majority of apps out there, I don't see any case where emulation is happening. I'm sure the situation for Windows was virtually the same.
 
Last edited:
Not unless there's a paradigm shift at Intel. Last year, Gelsinger boasted that Lunar Lake was going to be so good that Intel did not need to do ARM designs, as their new x86 was better in every way.
A good sales person sells what they have today. If Intel launched an ARM CPU tomorrow, I'm sure they'd just spin those prior remarks (if people even remember them) as applying to their competitors and would just tell you why their new ARM CPUs are so special that it doesn't apply to them.

IMO, Intel's biggest problem with Lunar Lake is cost. Making everything at TSMC has got to be trashing its margins. I also think they went too far in the number of SKUs they tried to make from a 4+4 core die. That hints at the other big issue it faces, which is that their core config limits how much of the laptop market it can serve. We'll see how well Arrow Lake can pick up the rest, but it faces similar issues with its margins (i.e. in the models that aren't just Meteor Lake respins).

Not only do we have Qualcomm, and maybe by late this year Nvidia/Mediatek will have their ARM SoC available, but next year AMD is also releasing ARM-based CPUs called Soundwave and is pouring good resources into the project. Intel will be the only ones with their heads in the sand.
If Windows/ARM can break into corporate laptop market, it might be enough to pivot the laptop market away from x86.
 
Last edited:
  • Like
Reactions: thestryker
I think it's precisely around that time when they must've started planning the 144-core Sierra Forest. If we go back and look at their roadmaps, Sierra Forest probably appeared too soon to be a response to Ampere Altra.
Qualcomm announced Centriq in 2014 and by 2016 had already faced delays. I'd be surprised if SRF had anything to do with Qualcomm. I'd bet it was more of a response to Arm's Neoverse than anything else.
It's going to launch later:
Yeah I forgot to put the 1H26 back in after rewriting that paragraph.
Also, I disagree it's the final make-or-break for Intel. They could still switch ISA. Even if they skip ARM, due to licensing costs & other concerns, RISC-V should be a viable option and they could take an early lead in that segment. It'd be fascinating if AMD jumps on the ARM bandwagon, while Intel joins the wave of RISC-V upstarts trying to rain on their parade.
I just don't see this happening unless it was already in the works, and I can't imagine that it was. There doesn't appear to be any real advantage to them designing a general-purpose core on another ISA. I could see plenty of advantages for integrating cores into other products, but just not a standalone one.

I'm still waiting to see what Intel does with edge since the latest Xeon D are still ICL based and there haven't been any big networking Atoms based on any of the current architectures. This is definitely a place I could see the potential for a processor based on non-x86.
I agree with this up to 2019's launch of the Zen 2-based Rome EPYC.
That's what I was thinking when I said 3 generations, but I forgot there never was a Zen+ EPYC, only Threadripper. I don't think anyone saw the 64-core Zen 2 coming before it was too late.
 
  • Like
Reactions: bit_user