AMD Ryzen 5 9600X and Ryzen 7 9700X Review: Zen 5 brings stellar gaming performance

This Zen 5 launch has been quite polarising. Tom's Hardware and TechPowerUp seem to love Zen 5, while TechSpot/Hardware Unboxed and Gamers Nexus have panned it.
Quite frankly it's the difference between tech journalism and techtubers. HUB/GN are primarily focused on gaming and consumer value, whereas the other two take a broad view and focus on the parts themselves. Tom's also has a much dimmer view of the 9700X than the 9600X, but since the reviews were combined for initial coverage I don't think that comes through very well. TPU was also more down on the 9700X than the 9600X.
 
In the corporate world there is a saying when it comes to computers: "no one has been fired for buying Intel." Previously it was "no one has been fired for buying IBM." The main thing is that Intel has a HUGE amount of mind share with corporations, and that keeps them buying Intel products even when AMD might have the superior product at the time. The current 13th/14th Gen issues are killing that mind share, though. Intel has also used its size to make it such that AMD's CPUs aren't supported for production (PRD) instances of some software (SAP HANA is an example of this). I do wonder how much longer that will keep going, as the EPYC CPUs have been faster than Xeon for a while now.
With all the issues and lawsuits telling the big bosses that Intel could F up big, I bet it won't be long before the two actually trade blows and have to fight a price/performance war. Then we could all benefit, as in the Athlon/P3 era, when crazy budget-friendly offerings had huge headroom if you dared to tinker, without the CPU and mobo costing an arm and a leg.
+1 to both of you. It's true, the Intel marketing statement that "Nobody gets fired for buying Intel" has been very pervasive and that's because Intel used to be the undisputed x86 king. Lawsuits aside, the issues that the bosses will undoubtedly be experiencing will have them telling their purchasers that they WILL be fired if they buy Intel. This forces the (often lazy) purchasers to look at other options.

Intel Xeon has never been competitive with AMD EPYC and it has only gotten worse for Xeon over the years. These days, EPYC absolutely rules the data centre space by a staggering margin in the following categories:
  1. Overall Performance
  2. Multi-threaded Performance
  3. Performance-per-Watt
  4. Acquisition Cost
Now more and more purchasers will be looking at EPYC. Once they adopt EPYC servers, Xeon (and therefore Intel) will be up Schitt's Creek without a paddle and headed for a waterfall.
 
Can't wait to get my hands on the 9700X & fiddle with the new Curve Optimizer settings AMD has in store! It'll be nice going from 12 threads to a 16-thread energy-efficient chip, especially in a case with an RX 7900 XTX & the power that consumes.
 
Quite frankly it's the difference between tech journalism and techtubers. HUB/GN are primarily focused on gaming and consumer value, whereas the other two take a broad view and focus on the parts themselves. Tom's also has a much dimmer view of the 9700X than the 9600X, but since the reviews were combined for initial coverage I don't think that comes through very well. TPU was also more down on the 9700X than the 9600X.
And I think part of the negative impression comes from the high expectations placed on Zen 5. For the money, people generally expect a higher-TDP but quicker SKU, or one with more cores; these chips are the better kid that everyone demands more from. Maybe the last-minute QC recall that made the lower-end, low-power parts debut first is the main issue here. If the 9900X and 9950X had debuted first, or at the same time, the verdict would likely change, given how efficient these parts are.
 
You said it causes regressions in gaming results. Please link (if video, please specify time).


Single-threaded, multi-threaded, PBO on/off? How'd they measure power consumption - full system, package-only; electrical or self-reporting?

TechPowerUp found a 60.2% efficiency gain in Cinebench MT:
[TechPowerUp chart: multi-threaded efficiency]

As I mentioned above, that was package-only & measured electrically.
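In case it helps, here's a minimal sketch of what an "efficiency gain" figure like that actually represents, a points-per-watt comparison with invented score and package-power numbers purely for illustration (not TPU's actual data):

```python
# Minimal sketch of a points-per-watt efficiency comparison.
# All scores and package-power figures below are invented, NOT TechPowerUp's data.

def efficiency(score, avg_package_watts):
    """Benchmark points per watt of package power: higher is better."""
    return score / avg_package_watts

new_chip = efficiency(score=19500, avg_package_watts=88)    # hypothetical 65W-TDP part
old_chip = efficiency(score=19000, avg_package_watts=142)   # hypothetical 105W-TDP part

gain_pct = (new_chip / old_chip - 1) * 100
print(f"{new_chip:.0f} vs {old_chip:.0f} pts/W -> {gain_pct:.1f}% efficiency gain")
```

The takeaway is that a big efficiency gain can come mostly from the power side of the ratio even when raw scores barely move, which is exactly why the single-threaded/multi-threaded and measurement-method questions above matter.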

The information is out there if you REALLY want to know and are not just wasting my time.
 
13600K performance for 14700K money.

Because the only thing we care about is server CPU architecture.

Take it or leave it, fans of computer gaming.

(some will try to disprove this fact by arguing that AMD are in fact incompetent and don't know what they're doing)
 
Can't wait to get my hands on the 9700X & fiddle with the new Curve Optimizer settings AMD has in store! It'll be nice going from 12 threads to a 16-thread energy-efficient chip, especially in a case with an RX 7900 XTX & the power that consumes.

What do you use your computer for?

Because a 7800X3D costs less, uses a similar amount of power and is much better for gaming. On the other hand, if you are looking for a productivity CPU, the 13600K can be had for much less, while offering slightly higher multicore performance...
 
13600K performance for 14700K money.

Because the only thing we care about is server CPU architecture.

Take it or leave it, fans of computer gaming.

(some will try to disprove this fact by arguing that AMD are in fact incompetent and don't know what they're doing)
The 9700X and 9600X don't look bad in gaming, but they lose to the 13600K in heavily multithreaded workloads, eat a ton less power and use the same AM5 socket, so those that slot into the market will slot in. LGA1700 is practically an EOL socket now, so if you go for a 13600K/14700K you are out of luck for future upgrades, and the RPL issues are still to be solved. At this point, anyone buying who remotely watches reviews should know that if you buy an AM5 CPU for gaming you should get a 7800X3D; if you need high MT performance, wait for the 9900X or 9950X or get the 7000-series equivalent. Either way, the same motherboards will see at least one more generation of refresh before going to AM6 or so. They didn't replace the 7800X3D at this point, so why bother.
 
+1 to both of you. It's true, the Intel marketing statement that "Nobody gets fired for buying Intel" has been very pervasive and that's because Intel used to be the undisputed x86 king. Lawsuits aside, the issues that the bosses will undoubtedly be experiencing will have them telling their purchasers that they WILL be fired if they buy Intel. This forces the (often lazy) purchasers to look at other options.

Intel Xeon has never been competitive with AMD EPYC and it has only gotten worse for Xeon over the years. These days, EPYC absolutely rules the data centre space by a staggering margin in the following categories:
  1. Overall Performance
  2. Multi-threaded Performance
  3. Performance-per-Watt
  4. Acquisition Cost
Now more and more purchasers will be looking at EPYC. Once they adopt EPYC servers, Xeon (and therefore Intel) will be up Schitt's Creek without a paddle and headed for a waterfall.
I'm the primary influencer at my job for new hardware purchases. A year after I started there we did a small hardware upgrade and in 2018 we went with Naples based servers. Since then, we have added Rome servers and might replace everything with Bergamo.
 
First, I think you're reading too much into the launch pricing. As the price of these CPUs settles, they'll represent a better value. As for comparisons vs. those X3D models, the 9700X is certainly faster at the many things which derive little or no benefit from the 3D VCache.


Oh, but look closely and you'll see the 7600X performing 5.2% worse:
[TechPowerUp chart: relative gaming performance, 1280x720]

So, basically, you just discovered that the 7600X and 9600X are both CPU-bottlenecked, in that test setup. The faster CPU runs faster, but not enough that it can reap some power savings.


Gaming isn't really all that intensive on CPU power, in the first place. Their test setup used an RTX 4090 for the GPU. If you've got a GPU like that, a swing of such a small amount from your CPU is hardly a blip on the radar. If you've got a lower-end GPU, then it'll be more of a bottleneck and your CPU will burn even less power. In other words, try not to twist yourself into so many knots reaching for a point to make.


Fixed that, for you.

Basically, I think their point was that it's a well-rounded CPU - certainly better than the 5800X3D - that still has some serious gaming potential. It's definitely not a purebred gaming CPU, nor does the article characterize it as such.
Reviews and commentary within the review are based on the current price, not what it may drop to in the future. It's irrelevant that these CPUs will be $10 in 2035. You basically missed the entire point that was being made, or intentionally ignored it to try and make up some sort of counterargument.

All of the power usage and efficiency charts for TechPowerUp reviews are on one page, and you couldn't even be bothered to read through that one page. Power usage is calculated at 1080p, not 720p, so the chart you posted is useless. I even gave you the performance difference, and you still managed to post the wrong chart. This is the performance chart you're supposed to be looking at.

[TechPowerUp chart: relative gaming performance, 1920x1080]

3.5% faster than a 7600X.
They were even kind enough to post the efficiency comparisons for you, but I guess you didn't make it that far down the page.
[TechPowerUp chart: gaming efficiency]


That's not a 40% efficiency gain. That's almost 3%. Even better, the stock 9700X had worse gaming efficiency in their testing than the 7700X.



This was a really lazy and uninformed attempt at a counterargument.
 
There's really no excuse for them holding the 9700X back with a 65W PPT like they chose to, forcing their customers to use PBO to get the performance it really should be delivering. For the 9600X it is far more reasonable, but it seems to me there was an option between 65W and 105W they could have chosen for the 9700X.
It's a strategic move by AMD to push more customers to the more expensive X3D models. Just wait and see: they aren't going to performance-limit the X3D chips with TDP, so those will have higher clock speeds than the vanilla Zen 5 models, unlike the two previous generations. That will make the X3D chips faster across the board, so users will no longer have to choose between better gaming performance and better performance in everything else, and it will expand the potential market for the more expensive models. Meanwhile, the regular Zen models with their lower PPT will compete better head to head with Intel's non-K SKUs for office desktop users and other lower-demand markets.
 
You can't really compare X3D to non-X3D to get a meaningful view of what has happened with Zen 5. Rather than showing that, yes, there are games where a 5800X3D can still beat the 9600X/9700X, it's far better to look at like-for-like data. Zen 4 X-class chips often lose to the 5800X3D as well, but the 5800X3D really falls off in non-gaming performance (except maybe in 7-zip where the added cache helps).

The 9600X beats the 7600X by 11.9% in overall gaming performance, and the 9700X beats the 7700X by 12.1%. That is based purely on architectural improvements. Naturally, a gaming-focused X3D chip will show different results in either case, but to see what has happened in that arena, we'll need to wait for the 9000 series X3D processors to arrive. I would expect relatively similar margins when we get 9800X3D and compare it with 7800X3D, but we'll have to wait for the hardware to actually say how those perform.

Ignoring for a moment the X3D results, we have AMD's 7600X tying the competing 14600K, and the 7700X trailing the competing 14700K by 3%. That has changed to AMD holding a 12% and 9% lead with the new architecture. We'll need to see what Arrow Lake CPUs bring to the table to get the generational comparisons, but a potential ~10% boost in gaming performance from a CPU architectural upgrade is nothing to scoff at.
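For anyone wondering how an "overall gaming performance" delta like 11.9% is typically derived, here's a minimal sketch assuming a geometric mean of per-game FPS ratios; the FPS values are invented for illustration, not our test data:

```python
# Minimal sketch: overall gaming uplift as a geometric mean of per-game FPS ratios.
# All FPS values are invented placeholders, not actual review data.
from math import prod

fps_new = [188, 142, 256, 97, 310]   # hypothetical new-gen results per game
fps_old = [165, 131, 224, 90, 272]   # hypothetical prior-gen results per game

ratios = [new / old for new, old in zip(fps_new, fps_old)]
uplift = prod(ratios) ** (1 / len(ratios)) - 1   # geometric mean of the ratios
print(f"Overall gaming uplift: {uplift * 100:.1f}%")
```

The geometric mean keeps one outlier title from dominating the average, which is why modest per-game swings can still add up to a double-digit overall delta.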

I'm hopeful that the new X3D parts will further improve on what we've seen in prior generations (meaning, boost clocks closer to the regular X parts), but stacking a die on top of the CPU cores will inevitably limit cooling to some degree. At the same time, by dropping power use up to 40% in some cases, that could seriously improve how the Zen 5 chips behave in stacked X3D designs. Check back in 2~3 months and we'll hopefully have the rest of the story.
Two points here. One, your 9000-series results differ from the other reviews. Even the most pro-AMD sites are dumping on these CPUs.

[Screenshot: Hardware Unboxed video title]

The complete sub-heading on their YouTube page is "Zen 5 Sucks for gaming." It's jarring to see a site like AMD Unboxed declare Zen 5 sucks for gaming, and then come here and see the headline that Zen 5 is stellar for gaming. It's not like the reviews are all over the place, either. All the major sites except this one had the 9000 series barely faster than the 7000 series. I'm not going to speculate on why, but just bring up the point.

Second point: these CPUs aren't released in a vacuum, and you can't pretend that better options don't exist if they do. As you pointed out, older X3D chips don't do well outside of gaming vs vanilla Zen options. So why isn't the focus for these vanilla Zen CPUs on the non-gaming performance AMD designed them for? If you're going to focus on the gaming performance, you can't then ignore the performance of the X3D CPUs AMD is actually targeting gamers with because "it isn't a fair comparison."

If you're a gamer, you should not be buying vanilla 9000 series CPUs. That fact alone makes the headline of this site's review extremely disingenuous. By AMD's own prelaunch admission, Zen 5 is not going to beat a 7800X3D in gaming. As I said in an above post, it looks like AMD is holding these CPUs back and the 9000X3D is likely to put a beating on vanilla 9000 in gaming. AMD seems intent on giving X3D a clock speed advantage it hasn't had in previous generations, in an attempt to make X3D not just a gamer's CPU but an enthusiast CPU that's faster in all aspects of performance.

I'd like to see the gaming tests you ran that showed a 40% reduction in power usage vs 7000 series. I did not see a single review that showed those results. The 9700X had worse efficiency than the 7700X in the gaming tests (14 games) TechPowerUp ran. There's going to be some variance between test results from different sites, but not that extreme.
 
Reviews and commentary within the review are based on the current price, not what it may drop to in the future.
Certainly not. Reviews are almost always published at launch, yet they're frequently referred back to long after the initial product launch. This decouples them somewhat from the exact launch price of the product. Therefore, they shouldn't hinge too heavily on the launch price, although they do need to acknowledge it.

Power usage is calculated at 1080p, not 720p, so the chart you posted is useless. I even gave you the performance difference, and you still managed to post the wrong chart. This is the performance chart you're supposed to be looking at.
[TechPowerUp chart: relative gaming performance, 1920x1080]
3.5% faster than a 7600X.
Whether it's 3.5% or 5.2% doesn't change the big picture, which you're missing for the trees.

The key point is that these CPUs aren't using much power, compared to an RTX 4090. Pair them with an even slower GPU (which is realistically what someone with such CPUs would have) and the GPU will become more of a bottleneck, thereby reducing the CPU power consumption even further.

This was a really lazy and uninformed attempt at a counterargument.
Actually, you didn't even acknowledge my real counterargument, probably because you realized I'm right. And whether I cited the 720p or 1080p performance data is irrelevant to it, which makes me think you're just harping on that as a distraction.

Let me say it as clearly as possible: CPU energy efficiency is not a big issue, for most games. That is why Toms didn't focus on it. You're barking up the wrong tree.

P.S. in case you didn't notice, TechPowerUp also touted efficiency as one of the 9600X's benefits, with no caveats for gaming efficiency:
 
The complete sub-heading on their YouTube page is "Zen 5 Sucks for gaming."
Since I still have the last page of TechPowerUp's 9600X review open, let's review what they said about its gaming attributes.

Pro: "Fastest sub-$300 gaming CPU available"
Con: "Slower than 7800X3D and 13700K in gaming"

"Ryzen 9600X delivers outstanding gaming performance that's better than any previous AMD processor, including the 7950X3D—only the 7800X3D is faster. This is even more impressive coming from the lowest SKU in the stack!"

"In gaming the 9600X is slightly more energy efficient than Intel's 14600K (66 W vs 76 W), but the differences are not as night-and-day as on the higher-end models. ... there is no doubt that 65 W is the optimal setting for the 9600X. Energy efficiency is very good overall, and well-balanced, too."

"The PBO Maximum run ... achieved really nice gains in both apps and games. If you play with Curve Optimizer you can unlock even more performance."

"If you want something that's still great for gaming, yet more affordable, then the 9600X should be on the top of your list. It's the fastest sub-$300 gaming CPU. On the Intel side there's not a single gaming-focused alternative to the 9600X."

I'm sure you're familiar with the extensive testing TechPowerUp did, and yet they still came to a decidedly different conclusion than Hardware Unboxed.

It's jarring to see a site like AMD Unboxed declare Zen 5 sucks for gaming, and then come here and see the headline that Zen 5 is stellar for gaming.
I suspect what happened with Zen 5 is that the internet rumor mill spun wildly unrealistic expectations of what it would deliver. Disappointed gamers want to see some blood, and some of these reviewers are willing to give it to them - especially if they were complicit in over-hyping it, in the first place.

it looks like AMD is holding these CPUs back
Why would they do that? In a world also soon to be inhabited by Arrow Lake, AMD can't afford to do such things.

I'd like to see the gaming tests you ran that showed a 40% reduction in power usage vs 7000 series.
I'm pretty sure nobody ever said this. Such efficiency gains were just for things like rendering. You're either conflating unrelated things or you're confused.
 
Those are the stock settings so it isn't wonky at all.
The issue is that no one is running at stock settings, and comparing performance at a frequency lower than what most users will use is misleading, especially when the CPUs scale differently with RAM frequency and different generations get varying frequencies.
Every AM5 CPU is able to do 2x16GB at 6000MT/s, and that's what most people will be using. Testing at 5200MT/s for the 7000 series and 5600MT/s for the 9000 series, when the kit used would work at 6000MT/s for both, is misleading, and it hands an otherwise nonexistent advantage to the 9000-series chips just because they're advertised to support a faster JEDEC profile. Don't even get me started on enabling EXPO only for the PBO results, further inflating the numbers and making the "+X% performance from PBO" claim false.
 
The issue is that no one is running at stock settings, and comparing performance at a frequency lower than what most users will use is misleading, especially when the CPUs scale differently with RAM frequency and different generations get varying frequencies.
Every AM5 CPU is able to do 2x16GB at 6000MT/s, and that's what most people will be using. Testing at 5200MT/s for the 7000 series and 5600MT/s for the 9000 series, when the kit used would work at 6000MT/s for both, is misleading, and it hands an otherwise nonexistent advantage to the 9000-series chips just because they're advertised to support a faster JEDEC profile. Don't even get me started on enabling EXPO only for the PBO results, further inflating the numbers and making the "+X% performance from PBO" claim false.
Unfortunately*, you're wrong there.

OEMs/SIs very often ship products with single-channel (one DIMM) configurations and/or with XMP/EXPO not enabled. This has been demonstrated several times in blind tests of many of them by different outlets.

In fact, testing JEDEC and out of the box settings for the CPU and motherboard is the right way to do it, but neither AMD nor Intel like it because it doesn't show them in the best possible light.

So, I actually appreciate Tom's for being one of the few outlets that shows how a person would, more than likely, experience an OEM product when received. In fact, even then, as I said above, they should also test with single modules instead, as a lot of them ship with just one, which is an abomination!

Regards.
 
Every am5 cpu is able to do 2x16gb at 6000MT/s
That is entirely false. There is a reason that the OFFICIAL specification for RAM on the Ryzen 7000 series was 5200MT/s: that was the highest speed AMD could guarantee EVERY chip would hit. Just look at Zen 1. The 1800X was able to do 2666MT/s for dual-channel single-rank but only 2400MT/s for dual-channel dual-rank RAM. BIOS updates helped achieve higher stable speeds, but officially all AMD could guarantee was 2666/2400MT/s. Zen+ took it to 2933MT/s and Zen 2 to 3200MT/s. The same thing is happening with Zen 4 > Zen 5. Remember, going beyond the official specs means winning the silicon lottery. 99.999% of Zen 4 chips can probably hit 6000MT/s; however, you cannot say that all will, and you might have that one chip that doesn't. Therefore testing at STOCK settings is the best way to get consistent and reliable data. It also gives you a floor for what you can more or less expect to see.
 
In fact, testing JEDEC and out of the box settings for the CPU and motherboard is the right way to do it, but neither AMD nor Intel like it because it doesn't show them in the best possible light.
Anandtech has been testing this way for many years. They always put this disclaimer at the top of their Test Bed page.

"As per our processor testing policy, we take a premium category motherboard suitable for the socket, and equip the system with a suitable amount of memory running at the manufacturer's highest officially-supported frequency. This is also typically run at JEDEC subtimings where possible. It is noted that some users are not keen on this policy, stating that sometimes the highest official frequency is quite low, or faster memory is available at a similar price, or that the JEDEC speeds can be prohibitive for performance.


While these comments make sense, ultimately very few users apply memory profiles (either XMP or other) as they require interaction with the BIOS, and most users will fall back on JEDEC-supported speeds - this includes home users as well as industry who might want to shave off a cent or two from the cost or stay within the margins set by the manufacturer. Where possible, we will extend out testing to include faster memory modules either at the same time as the review or a later date."
 
Two points here. One, your 9000-series results differ from the other reviews. Even the most pro-AMD sites are dumping on these CPUs.

[Screenshot: Hardware Unboxed video title]

The complete sub-heading on their YouTube page is "Zen 5 Sucks for gaming." It's jarring to see a site like AMD Unboxed declare Zen 5 sucks for gaming, and then come here and see the headline that Zen 5 is stellar for gaming. It's not like the reviews are all over the place, either. All the major sites except this one had the 9000 series barely faster than the 7000 series. I'm not going to speculate on why, but just bring up the point.

Second point: these CPUs aren't released in a vacuum, and you can't pretend that better options don't exist if they do. As you pointed out, older X3D chips don't do well outside of gaming vs vanilla Zen options. So why isn't the focus for these vanilla Zen CPUs on the non-gaming performance AMD designed them for? If you're going to focus on the gaming performance, you can't then ignore the performance of the X3D CPUs AMD is actually targeting gamers with because "it isn't a fair comparison."

If you're a gamer, you should not be buying vanilla 9000 series CPUs. That fact alone makes the headline of this site's review extremely disingenuous. By AMD's own prelaunch admission, Zen 5 is not going to beat a 7800X3D in gaming. As I said in an above post, it looks like AMD is holding these CPUs back and the 9000X3D is likely to put a beating on vanilla 9000 in gaming. AMD seems intent on giving X3D a clock speed advantage it hasn't had in previous generations, in an attempt to make X3D not just a gamer's CPU but an enthusiast CPU that's faster in all aspects of performance.

I'd like to see the gaming tests you ran that showed a 40% reduction in power usage vs 7000 series. I did not see a single review that showed those results. The 9700X had worse efficiency than the 7700X in the gaming tests (14 games) TechPowerUp ran. There's going to be some variance between test results from different sites, but not that extreme.
To be clear here, I have not personally used/tested Zen 5 yet. My understanding from what Paul has said is that if you just drop the chip into a socket AM5 motherboard after updating the BIOS, and load into Windows and start benchmarking, performance and perhaps even stability can be off. He mentioned uninstalling and reinstalling AMD chipset drivers between every CPU swap as an example, and that could have real ramifications for performance.

He also mentioned something about tweaking / disabling certain things in the BIOS that aren't needed. I'm not sure exactly what he was referring to, but it sounds like he took a "this is a pre-release CPU on an existing platform, so bugs and oddities can happen, so let's turn off unnecessary stuff for testing" approach. So, that's how our testing was done, on the hardware Paul has listed in the article. Any site that uses XMP/EXPO profiles as a default, with more aggressive settings and such, could have a very different experience.

CPU power use while gaming will not be the best-case efficiency for virtually any CPU. Core i9 13th/14th Gen processors rarely come anywhere near saturating the CPU cores with work, to the point where they run at much lower temps and power (outside of the initial shader compilation). Single-threaded workloads likewise aren't the primary concern for overall CPU efficiency. So the "40%" number is from workloads where the CPU is loaded up and running full tilt and comparing it against other CPUs, not while gaming. I suspect (but don't have good monitoring tools to truly measure) that my 13900K CPU rarely uses more than 125W while gaming, and probably often less than that, but that doesn't make it a more efficient CPU in general.

Using FPS/W to discuss efficiency for the CPU gets really messy because the GPU is still often a major contributing factor. Testing at 720p doesn't fully fix this, as CPU work changes with resolution as well based on what I've seen, just by virtue of what will fit within the various caches, LOD scaling, texture resolution stuff, etc. — and low/medium/high/ultra can and certainly does change the amount of CPU work that needs to be done. It's certainly possible to use the metric, but I'd be very cautious about applying it to general statements. If we had an infinitely fast GPU so that we could be 100% CPU limited in all games while running at least 1080p high settings, it would be a more useful metric, but even the 4090 will still limit performance in some cases (ray tracing games are a good example) and thus the overall FPS/W metric can still get skewed.
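To illustrate that caveat with purely invented numbers, here's a minimal sketch of how a partial GPU limit compresses the FPS/W gap between two hypothetical CPUs:

```python
# Invented numbers only: how a GPU limit compresses the FPS/W gap between two CPUs.

def fps_per_watt(fps, cpu_watts):
    return fps / cpu_watts

# Fully CPU-limited (hypothetical): the efficient CPU's advantage is obvious.
efficient_cpu = fps_per_watt(fps=180, cpu_watts=60)
hungry_cpu    = fps_per_watt(fps=170, cpu_watts=85)

# GPU-limited at ~150 FPS (hypothetical): both deliver the same frame rate,
# and the hungrier CPU also idles more, so it draws less than before.
efficient_capped = fps_per_watt(fps=150, cpu_watts=48)
hungry_capped    = fps_per_watt(fps=150, cpu_watts=62)

print(f"CPU-limited:  {efficient_cpu:.2f} vs {hungry_cpu:.2f} FPS/W")
print(f"GPU-limited:  {efficient_capped:.2f} vs {hungry_capped:.2f} FPS/W")
```

Shift the GPU cap, the settings, or the idle behavior and the apparent gap moves around, which is why I'd be cautious about turning gaming FPS/W into general efficiency statements.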

Ultimately, AMD has created a problem for itself with the X3D chips, because it keeps refusing to launch new processor architectures with the full complement of chips. Every time there's a new architecture now, we end up with the initial salvo of non-X3D processors followed a few months later by the X3D variants. I well and truly hate this, because it leads to the exact problems we're now seeing. Now there will apparently be N3 variants of Zen 5 at some point, while these initial parts are N4P, which means we may end up with a further muddying of the waters.

Architecturally, Zen 5 appears to have a lot going for it. It offers higher IPC, better thermals and power, various modified buffer sizes and branch prediction, etc. We tend to think about those things more than some other places, and perhaps that also skews the view of the Ryzen 9000 chips. I mean, 5~10 percent faster used to be about the most we could expect from a new CPU architecture in the early 00s, so getting 15~20 percent feels quite impressive to me. BIOS and firmware stuff probably needs more tuning and updates to reach full maturity and performance, but that's often the case at launch for any new architecture. X3D will as usual show some real benefits for gaming, on top of already existing improvements in the core architecture, but Zen 5 looks strong to me. Now I just need to wait for 9800X3D or 9950X3D to come out...
 
Ultimately, AMD has created a problem for itself with the X3D chips, because it keeps refusing to launch new processor architectures with the full complement of chips. Every time there's a new architecture now, we end up with the initial salvo of non-X3D processors followed a few months later by the X3D variants.
The way X3D CPUs are constructed naturally leads to this sequencing. They first have to get the base die production-ready and properly supported. Only then can they really worry about all the tuning and optimization needed for the extra X3D die. So, it's almost guaranteed that the base die products will be ready for launch ahead of the X3D version. In such a cut-throat business, I'm not sure they can afford to sit on the non-X3D models until the X3D versions are ready for launch.

Now there will apparently be N3 variants of Zen 5 at some point, while these initial parts are N4P, which means we may end up with a further muddying of the waters.
Are we certain those are going to be full Zen 5 cores? During Computex, I think I read that it was the Zen 5C chiplets that would be made on N3. This makes a lot more sense to me, since the Mike Clark interview mentioned doing both N4P and N3 at the same time. If they were truly concurrent and both versions of Zen 5 chiplets, then why the heck would they follow through on the N4P version and not just launch the N3?

5~10 percent faster used to be about the most we could expect from a new CPU architecture in the early 00s, so getting 15~20 percent feels quite impressive to me.
Between Sandy Bridge and Skylake, gen-on-gen IPC increases were always single digits. Zen 5 still beats this, but it's partially masked by the fact that they took a step back on frequencies. It's looking like Arrow Lake's Lion Cove might do the same.
 
Just want to note that this one is no longer true as AMD is commanding a higher percentage of revenue when compared to volume than Intel.
I probably should've worded it better. I meant that the cost of an EPYC CPU is lower than a Xeon CPU in the same class. I saw a listing for an EPYC CPU on Newegg once and, out of curiosity, looked at the prices of Xeons. I was shocked that the Xeon cost more than the EPYC despite having WAY fewer cores.

Lemme see if I can still find it (or something similar)... Ok, this should work. I wanted to use a single-socket EPYC as my example (because comparing dual-socket arrangements becomes way too complicated for a simple forum post) and the top one I could find on Newegg was the old Rome-based EPYC 7702P from Q3'2019. I'll compare that with a Xeon Platinum 8458P from Q1'2023. I know that there's more than a three-year difference between them but it doesn't really matter:

Intel Xeon 8458P:

44 Cores, 88 Threads
Base Frequency: 2.7GHz
Max Boost Frequency: 3.8GHz
RAM Type: DDR5-4800
Max RAM Supported: 4TB
PCI-Express Generation: 5
Max PCIe Lanes: 80
TDP: 350W
Price: $8000

AMD EPYC 7702P:

64 Cores, 128 Threads
Base Frequency: 2GHz
Max Boost Frequency: 3.35GHz
RAM Type: DDR4-3200
Max RAM: 4TB
PCI-Express Generation: 4
Max PCIe Lanes: 128
TDP: 200W
Price: $4800

Now, I don't know about you, but if I'm building a single-socket server, I am NOT going to pay an extra 67% for PCIe5, DDR5, higher clock speeds, 40 fewer threads and 150W more power use. Servers are all about throughput, and that's why extra threads >>>>>>>>>>> clock speeds in servers. PCIe5 looks better on paper with a per-lane data transfer rate of ~32Gbps, but I would rather have 128 PCIe4 lanes than 80 PCIe5 lanes, because servers are all about data transfer over a network and PCIe4 lanes still manage ~16Gbps per lane.

With the extra cores and PCIe lanes, more users can use the server at the same time without introducing any latency penalties. After all, the fastest Ethernet ports on Newegg's most expensive AMD SP3 and Intel LGA4677 motherboards are "only" 10Gbps, so the motherboard's LAN port would be a bottleneck that erases any advantage that PCIe5 could have.
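To put rough numbers behind that, here's a quick back-of-the-envelope sketch using the figures listed above plus standard per-lane PCIe link rates (approximate, ignoring encoding and protocol overhead):

```python
# Back-of-the-envelope numbers for the comparison above (approximate, overhead ignored).

pcie4_lane_gbs = 16 / 8   # ~2 GB/s per PCIe 4.0 lane
pcie5_lane_gbs = 32 / 8   # ~4 GB/s per PCIe 5.0 lane
ten_gbe_gbs    = 10 / 8   # ~1.25 GB/s for a 10Gbps Ethernet port

print(f"PCIe 4.0 lane: {pcie4_lane_gbs:.2f} GB/s | "
      f"PCIe 5.0 lane: {pcie5_lane_gbs:.2f} GB/s | 10GbE: {ten_gbe_gbs:.2f} GB/s")
# Even a single PCIe 4.0 lane already outruns the 10GbE port, so the NIC is
# the ceiling for network-bound work long before the PCIe generation matters.

# Price and thread-per-watt deltas straight from the listed specs:
print(f"Xeon 8458P price premium over EPYC 7702P: {(8000 / 4800 - 1) * 100:.0f}%")
print(f"Threads per TDP watt: EPYC {128 / 200:.2f} vs Xeon {88 / 350:.2f}")
```

None of this captures per-core performance, of course, but it shows why lane count, price and power dominate the calculus for a network-facing box.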

Please note that I am only referring to a data centre server's use-case, not that of a number-crunching supercomputer. I would note, however, that Frontier, currently the world's most potent supercomputer, is also built on EPYC CPUs (custom Zen 3 parts, one generation newer than the 7702P's cores), so clearly EPYC is very suitable for a number-cruncher, especially when paired with a Radeon Instinct CDNA GPU.

Lastly, and for many, most importantly, the EPYC 7702P is far more efficient as its 64 cores use 150W less than the Xeon's 44 cores. For many server operators, that is the most important statistic because all server components are high-performance (fast enough) so greater efficiency = greater profitability.

Things get remarkably worse for Intel if we start talking about multi-socket solutions, because the core-count, PCIe-lane and efficiency advantages that AMD already enjoys literally double.

Let's again remember that this EPYC CPU is more than three years older than the Xeon and still kicks it all over the place despite costing $3200 less.

I don't really know about the revenue percentages because I'm not really into the server side of things. I based what I said on the pricing that I saw at Newegg for the server CPUs that they sell.

One thing I saw that I thought was hilarious, both because I never thought you'd still be able to get one of these and because of the prices whoever is selling them is trying to charge:
AMD Opterons at Newegg in 2024?!

They must be a collectors' item or something. Like, the Opteron 6278 (Interlagos) was the first 16-core x86 CPU ever produced.
 
I'm the primary influencer at my job for new hardware purchases. A year after I started there we did a small hardware upgrade and in 2018 we went with Naples based servers. Since then, we have added Rome servers and might replace everything with Bergamo.
Exactly. To be honest, I don't know all that much about the server side but it just seemed like the most logical and realistic chain of events. Thanks for the confirmation. 😊👍
 