AMD Ryzen 5 9600X and Ryzen 7 9700X Review: Zen 5 brings stellar gaming performance

Page 12 of the Tom's Hardware community discussion.
In regards to the 7700, 7700X and 9700X testing: the 9700X looks way less spectacular when comparing it to the 7700 (non-X) for two main reasons I have to admit I missed completely:
1- Bundled cooler. I mean, come on, AMD. Why can't you bundle the cooler with the 65W parts like in past generations? That IS scummy. Same with the 9600X and 7600 (non-X); no bundled cooler for a purely budget part in the lineup.
2- The MSRP of the corresponding previous-gen 65W parts is actually lower WITH A BUNDLED COOLER than the new 65W parts. That actually hits me where it hurts and, I'm not gonna lie, I am salty on this one. If you compare street price vs. street price now... Oof. The value goes off a cliff for sure.

Those two points, for me at least, are big ones. AMD better drop the prices of these two new CPUs soon.

Also, some new information:

AMD did something to Zen5's SMT implementation and made it worse, so disabling it actually claws back some performance, which is stupid, but here we are.

Regards.
 
I'm assuming you're getting fleeced similarly on Epyc CPUs as that's pretty common behavior.

I meant anyone who buys directly from Intel, period, but yes, on the end-user side those would be the cases.
True. This wasn't meant as a slam against Intel CPUs, but just to point out that the ones who get better-than-street pricing are mainly the folks buying directly from Intel. The same should go for AMD. In both cases, customer negotiating power has everything to do with market conditions. During the peak of pandemic-era supply crunch, it was reported hard enough even to get CPUs from these companies that there wasn't much wiggle room on pricing.

This doesn't change the fact that what you quoted is 100% accurate.
Yes, I just wanted to call out that 4P/8P is niche market and not a common deployment practice for hyperscalers.

I don't know that AMD has had any 4P/8P at all since switching to Bulldozer which was well over a decade ago.
Correct. Everything I've read suggests they're firmly limited to 2P scalability.

I expected you to be above this sort of pettiness but apparently not.
Sorry, that came out worse than I intended. I was mainly concerned that by answering questions without more context, it could leave the questioner with a false impression of how PCIe scaling actually works in practice. I went back and edited this to be less pointed, as I don't think you were intentionally trying to be misleading.

I think one big mistake of the original question was to pick an old-model Xeon as the point of comparison. You addressed it in passing, though I wonder if it didn't deserve more attention.
 
In other words, you made a claim, but rather than backing your claim, you want @bit_user to prove your assertion for you.

You make the claim, you provide the evidence. Unless, of course, you have no evidence.

"That which can be asserted without evidence can be dismissed without evidence."

No.

In other words, I don't care what he thinks or wants. He's behaving like people here are his towel boys or something.

I also don't care what you think.

I know that Zen 5 sucks for desktop, because the evidence of it was plentiful even before its release, and it has since not been debunked. It has rather been very much confirmed. I have stated my thoughts on why it ended up being so.

And what you said about evidence and proof may apply in a court of law, but if you look around, I'm sure you will notice that we are not in one. This is an Internet forum, sir.
 
The information is out there if you REALLY want to know and are not just wasting my time.
If you're going to make a claim, you should be prepared to back it up. If you're not willing to stand behind it even to the extent of being able to provide a source, then I guess that shows what your word is worth.

BTW, the reason many of us ask for a source isn't just to be annoying. It's because, too many times, when I've checked people's sources it turns out they're not actually saying what was claimed. Other times, there's some issue with the source or their methodology.

Not only that, but your statement was incredibly vague, to the point that even if we believed you, it's still hard to know what to make of your claim.
 
What do you use your computer for?

Because a 7800X3D costs less, uses a similar amount of power and is much better for gaming. On the other hand, if you are looking for a productivity CPU, the 13600K can be had for much less, while offering slightly higher multicore performance...

Stock 9700X is 14.4% faster than i5-13600K at Cinebench R24 ST.
[chart: cinebench-single.png]

It does this while being 17.2% more efficient, single-threaded:
[chart: efficiency-singlethread.png]

and 55.4% more efficient, multi-threaded:
[chart: efficiency-multithread.png]

It also uses 12% less power, on single-threaded:
[chart: power-singlethread.png]

And 42.4% less power, multi-threaded:
[chart: power-multithread.png]

That translates into being cheaper and easier to cool.
 
Not that they'll refuse an RMA based on it, but using PBO technically voids your warranty. There's really no excuse for them holding the 9700X back with a 65W PPT like they chose to, forcing their customers to use PBO to get the performance it really should be delivering.
Here are a few reasons why they probably did it:
  1. PBO barely affects single-threaded or gaming performance, hence a higher TDP wouldn't be relevant to many of their customers.
    [chart: cinebench-single.png]
    [chart: relative-performance-games-1280-720.png]
  2. Feedback from OEMs might've suggested the 7700X was too expensive to cool for its value proposition. Or, maybe AMD was just looking to make the 9700X a more economical option for them.
  3. Variance in die quality could mean they can't guarantee all CPUs will overclock as well as the review samples have.

I think you really only need to see the data supporting point #1 and consider that more TDP isn't exactly "free" (see point #2), in order to understand why it doesn't ship with a higher TDP. Point #3 was speculative, but since we're speculating...

This is believable and still a scummy move.
Yeah, let's imagine AMD did us wrong and then get upset at them for it.
🙄
 
In regards to the 7700, 7700X and 9700X testing: the 9700X looks way less spectacular when comparing it to the 7700 (non-X) for two main reasons I have to admit I missed completely:
1- Bundled cooler. I mean, come on, AMD. Why can't you bundle the cooler with the 65W parts like in past generations? That IS scummy. Same with the 9600X and 7600 (non-X); no bundled cooler for a purely budget part in the lineup.
2- The MSRP of the corresponding previous-gen 65W parts is actually lower WITH A BUNDLED COOLER than the new 65W parts. That actually hits me where it hurts and, I'm not gonna lie, I am salty on this one. If you compare street price vs. street price now... Oof. The value goes off a cliff for sure.

Those two points, for me at least, are big ones. AMD better drop the prices of these two new CPUs soon.

Also, some new information:

AMD did something to Zen5's SMT implementation and made it worse, so disabling it actually claws back some performance, which is stupid, but here we are.

Regards.
But what about the effectiveness of the memory controller in Zen 5 compared to Zen 4?
Folks who purchase a 9700X will more than likely be enthusiast gamers (who don't already have a 7800X3D) and/or overclockers looking to get every single bit of performance out of their AM5 platform.
If they follow guides such as what SkatterBencher published for his adventure into the 9700X, they will find newfound areas of performance not seen in an 8c/16t chip, with relatively low power consumption compared to Intel. Although he uses an ASUS ROG Crosshair X670E Hero in his venture, much if not 99% of the settings will be applicable to good-quality mid-to-high-end AM5 boards anyway.
Paying for the "X" in an AMD chip more than likely assures the end user will have better-quality silicon to indulge in OC affairs.
 
In other words, I don't care what he thinks or wants. He's behaving like people here are his towel boys or something.

...

I know that Zen 5 sucks for desktop, because the evidence of it was plentiful even before its release, and it has since not been debunked. It has rather been very much confirmed. I have stated my thoughts on why it ended up being so.

And what you said about evidence and proof may apply in a court of law, but if you look around, I'm sure you will notice that we are not in one. This is an Internet forum, sir.
I think I've more than adequately addressed your first point, in post #284.

Regarding your second point, it's a mistake to heed pre-release leaks, because:
  1. Many are using engineering sample CPUs, which don't perform like retail.
  2. By definition, they're using pre-release BIOS & drivers, which might not be fully tuned and could also be hampered by debug instrumentation.
  3. Leaked benchmarks are easily faked.

So, we really need to base our judgements on the official launch reviews. The fact that you might be harking back to pre-launch leaks for your claims further underscores the importance of checking your sources. Either check them yourself, or post them and we'll check them. Either way, it's no less than I'd expect of someone with the username Reality_checker ...unless you were being ironic.

On your last point, this is a moderated forum where we like to have fact-based discussions. If that's not what you're interested in (contrary to your username), then there are indeed many other places on the internet where you might find yourself more at home.
 
Stock 9700X is 14.4% faster than i5-13600K at Cinebench R24 ST.
[chart: cinebench-single.png]

It does this while being 17.2% more efficient, single-threaded:
[chart: efficiency-singlethread.png]

and 55.4% more efficient, multi-threaded:
[chart: efficiency-multithread.png]

It also uses 12% less power, on single-threaded:
[chart: power-singlethread.png]

And 42.4% less power, multi-threaded:
[chart: power-multithread.png]

That translates into being cheaper and easier to cool.
Bruh, come on now. It's not more efficient than the i5 in MT. It's just slower. You can absolutely tell by the fact that it still remains slower with PBO while drawing more power. Let's not play stupid just to defend a company. Nobody is getting fooled by this.
 
In regards to the 7700, 7700X and 9700X testing: the 9700X looks way less spectacular when comparing it to the 7700 (non-X) for two main reasons I have to admit I missed completely:
  1. Bundled cooler. I mean, come on, AMD. Why can't you bundle the cooler with the 65W parts like in past generations? That IS scummy. Same with the 9600X and 7600 (non-X); no bundled cooler for a purely budget part in the lineup.
  2. The MSRP of the corresponding previous-gen 65W parts is actually lower WITH A BUNDLED COOLER than the new 65W parts. That actually hits me where it hurts and, I'm not gonna lie, I am salty on this one. If you compare street price vs. street price now... Oof. The value goes off a cliff for sure.
But the 9700X is faster! Its stock CB R24 ST score is 16.4% higher than the 7700's. In the MT case, the difference is 9.2%. In gaming, it's 5.1% faster. Is that worth nothing?

I think you're falling into the trap of thinking too much about the TDP and confusing higher TDP with higher value. As I mentioned in the previous post, PBO does virtually nothing for lightly-threaded or gaming performance. So, for those use cases, the CPU is at no real deficit from its lower TDP.

There's an additional wrinkle that an included cooler would pose: how big to make the cooler? Consider that if someone is buying this CPU with the intent of using PBO on multithreaded workloads, they'll potentially need a rather substantial cooler. For others, just an 88 W cooler is all they'll need. If you include a cooler, it'll either be overkill for some or insufficient for others.

Finally, it's not as if a bundled cooler costs nothing. By not including it, AMD is leaving themselves more room for future price drops. Then, you'll be able to take the savings and put it towards the cooler you really want. The dilemma of not knowing how big to make it would mean wasting money for one group of users or the other, if one were included.

Also, some new information:

AMD did something to Zen5's SMT implementation and made it worse, so disabling it actually claws back some performance, which is stupid, but here we are.
I need to spend some more time digging into the meat of that article, because all of the benchmarks I looked at (CB ST, CB MT, MP3 (ST), Gaming, Apps) are in line with what I would've expected. So, I'm not quite sure what they're on about.
 
Bruh, come on now. It's not more efficient than the i5 in MT.
The data is right there, in my post. 15.7 points per Watt vs. 10.1 for the i5-13600K. That's 55.4% more efficient, by my math.
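That percentage is easy to sanity-check. Here's a quick sketch using the points-per-watt figures quoted above (the helper name is just for illustration):

```python
# Sanity-check of the "55.4% more efficient" claim, using the
# Cinebench R24 MT points-per-watt figures cited in this thread.
def pct_more_efficient(a, b):
    """How much more efficient A is than B, as a percentage."""
    return (a / b - 1) * 100

r9700x_ppw = 15.7    # 9700X, stock
i13600k_ppw = 10.1   # i5-13600K, stock

print(f"{pct_more_efficient(r9700x_ppw, i13600k_ppw):.1f}%")  # prints 55.4%
```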

It's just slower.
Yes, it's 8.7% slower on CB24 MT, but that's not enough to put them in different performance tiers - especially when you consider the other areas where it's faster (ST, apps, gaming).

You can absolutely tell by the fact that it still remains slower with PBO while drawing more power.
In point of fact, PBO Max is 0.5% faster than the i5-13600K (stock) on CB24 MT. So, not slower.
[chart: cinebench-multi.png]

Yes, PBO Max kills its efficiency. If you really want MT performance, it's not the best option. Nobody said otherwise. However, people usually don't have a single consideration that completely dominates all others. For many, its MT performance is surely more than adequate. As a developer, I care about MT performance, but the biggest CPU I have at home is still just 6 cores - it doesn't dominate all other considerations for me.

Let's not play stupid just to defend a company. Nobody is getting fooled by this.
The facts are very straightforward. In this post, you resorted to directly contradicting them, which shows you're getting pretty desperate.

I think the one who's defending a company isn't me. We've deconstructed all of the fallacies you've tried to gin up. Time to move on.
 
The data is right there, in my post. 15.7 points per Watt vs. 10.1 for the i5-13600K. That's 55.4% more efficient, by my math.


Yes, it's 8.7% slower on CB24 MT, but that's not enough to put them in different performance tiers - especially when you consider the other areas where it's faster (ST, apps, gaming).


In point of fact, PBO Max is 0.5% faster than the i5-13600K (stock) on CB24 MT. So, not slower.
[chart: cinebench-multi.png]

Yes, PBO Max kills its efficiency. If you really want MT performance, it's not the best option. Nobody said otherwise. However, people usually don't have a single consideration that completely dominates all others. For many, its MT performance is surely more than adequate. As a developer, I care about MT performance, but the biggest CPU I have at home is still just 6 cores - it doesn't dominate all other considerations for me.


The facts are very straightforward. In this post, you resorted to directly contradicting them, which shows you're getting pretty desperate.

I think the one who's defending a company isn't me. We've deconstructed all of the fallacies you've tried to gin up. Time to move on.
So it's faster (by 0.5%, lol) than the 13600K and slower than the 14600K while consuming a lot more power, when in reality it has to compete against the 13700K / 14700K. Nobody measures any other device like that; they do it normalized. Have you ever seen a fan review doing only 100% speed tests and measuring dBA / performance? No, because that's stupid. They normalize the dBA and then compare temperatures. When you do that, the 9700X is kinda awful.

TL;DR: if you care about MT efficiency, you avoid anything AMD in the midrange like the plague. You buy any Intel chip at that price point (literally anything), limit it to 88W, and you have better efficiency than what AMD will ever achieve at that price point on the AM5 platform. Ever.
 
So it's faster (by 0.5%, lol) than the 13600k and slower than the 14600k while consuming a lot more power
See, the only way you can make it look bad is by focusing on one corner case. This is tantamount to admitting you've lost.

when in reality it has to compete against the 13700 / 14700k.
There are now deals pricing the i7-13700K below it, but the i7-14700K is still more expensive. However, the really inconvenient fact, for you, is that the stock 9700X CB24 ST score is 8% faster than the i7-13700K and 4.7% faster than the i7-14700K. So, you really can't pretend the 9700X brings nothing significant to the table.

if you care about MT efficiency you avoid anything amd on the midrange like the plague.
This has been shown false again and again. The 9700X smashes the i7-13700K on MT efficiency by a whopping 84.7% and the i7-14700K by 67.0%!

[chart: efficiency-multithread.png]


Even if someone is silly enough to enable PBO Max, they still get just a 1.1% efficiency deficit vs. the stock i7-14700K!

You buy any Intel chip at that price point (literally anything) limit it to 88w and you have better efficiency than what amd will ever achieve at that price point on the am5 platform. Ever.
Sure. Throw double the cores at the problem, keeping the same power budget, and of course they'll operate more efficiently.

You can do that now. However, I expect that as the 9700X's price settles, we'll see it slide below the i7-13700K's price. Once we're talking about matching it against an i5, your assertion will no longer hold. That's obvious, given how close the MT performance of the 9700X is to the i5, at its stock power.
 
See, the only way you can make it look bad is by focusing on one corner case. This is tantamount to admitting you've lost.
What's the corner case?

There are now deals pricing the i7-13700K below it, but the i7-14700K is still more expensive. However, the really inconvenient fact, for you, is that the stock 9700X CB24 ST score is 8% faster than the i7-13700K and 4.7% faster than the i7-14700K. So, you really can't pretend the 9700X brings nothing significant to the table.
How is it inconvenient for me? What are you talking about? I already mentioned the good ST performance.

This has been shown false again and again. The 9700X smashes the i7-13700K on MT efficiency by a whopping 84.7% and the i7-14700K by 67.0%!
No, it doesn't. Or - well - it does, the same way it smashes the 7950X in efficiency, lol. Bud, put all of those CPUs at the same 88W and the 9700X will be lagging behind basically everything at its price point. Why are you pretending otherwise? I don't get it. What do you have to gain out of this?

Sure. Throw double the cores at the problem, keeping the same power budget, and of course they'll operate more efficiently.

You can do that now. However, I expect that as the 9700X's price settles, we'll see it slide below the i7-13700K's price. Once we're talking about matching it against an i5, your assertion will no longer hold. That's obvious, given how close the MT performance of the 9700X is to the i5, at its stock power.
And why is throwing double the cores at the problem an issue? I'd say throw 5 times the cores, no objections. If the price is the same, I don't care about how many cores someone throws at the problem.

Even if the price drops down to the 13600K level, the 13600K at 140W has the same performance as the 9700X at 170W. I don't see how limiting them both to 88W will have the 9700X as the winner, but the sad thing is it doesn't even matter. The 13600K is a two-year-old i5; the 9700X is a brand-new "i7" tier. If you don't find that pathetic, well, whatever. The way things are going, AMD's midrange CPUs will be losing to chadmonts; P-cores won't be required, lol.
 
See, the only way you can make it look bad is by focusing on one corner case. This is tantamount to admitting you've lost.


There are now deals pricing the i7-13700K below it, but the i7-14700K is still more expensive. However, the really inconvenient fact, for you, is that the stock 9700X CB24 ST score is 8% faster than the i7-13700K and 4.7% faster than the i7-14700K. So, you really can't pretend the 9700X brings nothing significant to the table.


This has been shown false again and again. The 9700X smashes the i7-13700K on MT efficiency by a whopping 84.7% and the i7-14700K by 67.0%!
[chart: efficiency-multithread.png]

Even if someone is silly enough to enable PBO Max, they still get just a 1.1% efficiency deficit vs. the stock i7-14700K!


Sure. Throw double the cores at the problem, keeping the same power budget, and of course they'll operate more efficiently.

You can do that now. However, I expect that as the 9700X's price settles, we'll see it slide below the i7-13700K's price. Once we're talking about matching it against an i5, your assertion will no longer hold. That's obvious, given how close the MT performance of the 9700X is to the i5, at its stock power.
Man, time to stop arguing with the genius. We've seen so many corner-case and goalpost-moving circular arguments, and yet, even with AMD and Zen 5 supposedly so bad and unreliable and X3D exploding, he will still get a Zen 5 X3D day one. So when the troll radar is on, I don't bother arguing with him. Let him think whatever he wants; it's better that the rest of us discuss logically among ourselves.
 
What's the corner case?
MT efficiency under PBO.

well - it does, the same way it smashes the 7950x in efficiency, lol.
Yes, stock vs. stock.

put all of those CPUs at the same 88w, the 9700x will be lagging behind basically everything at its price point. Why are you pretending otherwise?
I'm not pretending anything. You made that claim of the i7-13700K and I accepted it. As long as they have any kind of price parity, you'd be right on that point. That said, it's not what most people who buy a K-series i7 do. So, it's of limited relevance to the real world, but certainly someone could.

I don't get it. What do you have to gain out of this?
I just hate spin and misleading narratives.

And why is throwing double the cores at the problem an issue?
I didn't say it was. I was just pointing out what that entails and explaining why it works. Understanding that should also help people see when other models might not be more efficient.

Even if the price drops down to the 13600k level, the 13600k at 140w has the same performance as the 9700x at 170w.
...and back we are to that corner case I was talking about. So, you're now officially just rehashing the same point.

Like I said: time to move on.
 
To be clear here, I have not personally used/tested Zen 5 yet. My understanding from what Paul has said is that if you just drop the chip into a socket AM5 motherboard after updating the BIOS, and load into Windows and start benchmarking, performance and perhaps even stability can be off. He mentioned uninstalling and reinstalling AMD chipset drivers between every CPU swap as an example, and that could have real ramifications for performance.

He also mentioned something about tweaking / disabling certain things in the BIOS that aren't needed. I'm not sure exactly what he was referring to, but it sounds like he used "this is a pre-release CPU on an existing platform and so bugs and oddities can happen, so let's use the best-case turn off unnecessary stuff settings for testing." So, that's how our testing was done, on the hardware Paul has listed in the article. Any site that uses XMP/EXPO profiles as a default, with more aggressive settings and such, could have a very different experience.
Steve from Hardware Unboxed has posted a follow-up on Twitter

View: https://twitter.com/HardwareUnboxed/status/1822024131501814094


Steve from GN also chimed in on this post:

View: https://twitter.com/GamersNexus/status/1822026387911537092


Now your results don't even agree with AMD's own internal testing. Your 12% improvement is more than twice the 5% AMD claims there should be. Whatever "tweaking" Paul did, if that is the source of the discrepancy, skewed the results enough that they aren't representative of what a user's out-of-box experience will be. I understand you didn't run these tests, but Paul should be disclosing what settings he changed to produce the results he posted in this review.
CPU power use while gaming will not be the best-case efficiency for virtually any CPU. Core i9 13th/14th Gen processors rarely come anywhere near saturating the CPU cores with work, to the point where they run at much lower temps and power (outside of the initial shader compilation). Single-threaded workloads likewise aren't the primary concern for overall CPU efficiency. So the "40%" number is from workloads where the CPU is loaded up and running full tilt and comparing it against other CPUs, not while gaming. I suspect (but don't have good monitoring tools to truly measure) that my 13900K CPU rarely uses more than 125W while gaming, and probably often less than that, but that doesn't make it a more efficient CPU in general.
I know CPU power isn't maxed out in gaming, and you know this, but most of the people reading your reviews don't, and I'm disappointed you're pretending like you don't know this. You're on these boards enough to know: if you had a nickel for every time someone here posted that Intel CPUs use 250W+ for gaming, there would be a national shortage of nickels. This is a post from another site I visit:
+3.5% in gaming while more than halving the power and leaving room to overclock for more performance. For some magic reason, performance gains attainable by Intel chips by enabling ludicrous amounts of watts and disabling every sane protection are good and are suggested as "normal stuff" in reviews and benches;
People are reading reviews like the one on this site and thinking Zen 5 uses half the power of Zen 4 while gaming. No, it doesn't, not even remotely close. As you stated, the 13900K is typically under 125W while gaming, which doesn't qualify as "ludicrous amounts of watts." This site's review touts the gaming performance of Zen 5, which nobody including AMD is seeing, and a huge drop in power usage; however, nowhere in the review does it state that these two things don't happen at the same time, which leads to all the garbage we see on message boards with people posting inaccurate information.
Using FPS/W to discuss efficiency for the CPU gets really messy because the GPU is still often a major contributing factor. Testing at 720p doesn't fully fix this, as CPU work changes with resolution as well based on what I've seen, just by virtue of what will fit within the various caches, LOD scaling, texture resolution stuff, etc. — and low/medium/high/ultra can and certainly does change the amount of CPU work that needs to be done. It's certainly possible to use the metric, but I'd be very cautious about applying it to general statements. If we had an infinitely fast GPU so that we could be 100% CPU limited in all games while running at least 1080p high settings, it would be a more useful metric, but even the 4090 will still limit performance in some cases (ray tracing games are a good example) and thus the overall FPS/W metric can still get skewed.
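A toy model makes that skew concrete. Assuming delivered FPS is simply the minimum of what the CPU and GPU can each sustain (all figures below are invented purely for illustration):

```python
# Illustration of why FPS/W gets skewed once the GPU bottlenecks:
# the faster CPU stops gaining FPS but still draws power.
# All numbers below are made up for the sake of the example.
def fps_per_watt(cpu_fps, gpu_fps, cpu_watts):
    delivered = min(cpu_fps, gpu_fps)  # the slower side sets the pace
    return delivered / cpu_watts

# CPU-limited (very fast GPU): the CPU difference is visible.
fast_cpu = fps_per_watt(cpu_fps=200, gpu_fps=500, cpu_watts=80)  # 2.5
slow_cpu = fps_per_watt(cpu_fps=160, gpu_fps=500, cpu_watts=60)  # ~2.67

# GPU-limited (e.g. heavy ray tracing): both CPUs deliver 120 FPS,
# so the "efficiency" ratio now mostly measures partial-load power.
fast_cpu_rt = fps_per_watt(cpu_fps=200, gpu_fps=120, cpu_watts=55)  # ~2.18
slow_cpu_rt = fps_per_watt(cpu_fps=160, gpu_fps=120, cpu_watts=50)  # 2.4
print(fast_cpu, slow_cpu, fast_cpu_rt, slow_cpu_rt)
```

In the GPU-limited case the FPS/W ranking flips even though neither CPU got any slower, which is exactly the hazard with reading FPS/W as a general CPU-efficiency metric.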
I've never cared for synthetic benchmarks or performance measures that don't mirror any sort of real-world usage patterns. Don't do CPU gaming benchmarks at 4K with an RTX 2060. I get that, but no one is buying enthusiast hardware and gaming at 720p, so I don't care what results are produced at those settings either. Any benchmark that has to be specifically set up to isolate the hardware being tested is probably not reflecting any sort of real-world scenario and is useful only for academic purposes and arguing on the internet.
I mean, 5~10 percent faster used to be about the most we could expect from a new CPU architecture in the early 00s, so getting 15~20 percent feels quite impressive to me.
In the early 00's, CPU releases weren't on a two-year cycle. Often we would see multiple releases of higher-clocked models in the same year. Intel is still releasing on a yearly cycle today; even though the RPL refresh was a joke and shouldn't have been released, it's not normally that bad. Pick any two releases two years apart from the early 00's, and Zen 5's improvement isn't going to look too great. I can't remember any two-year release window, even including Intel's 4-core eternity, that only saw a 5% increase in gaming performance.
 
I've never cared for synthetic benchmarks or performance measures that don't mirror any sort of real-world usage patterns. Don't do CPU gaming benchmarks at 4K with an RTX 2060. I get that, but no one is buying enthusiast hardware and gaming at 720p, so I don't care what results are produced at those settings either. Any benchmark that has to be specifically set up to isolate the hardware being tested is probably not reflecting any sort of real-world scenario and is useful only for academic purposes and arguing on the internet.
I realize your comments are addressed to Jarred, but I wanted to speak in defense of low-res gaming benchmarks.

First of all, they test at multiple resolutions specifically so you have data that's more relevant to your own usage patterns. Given that, the argument that it's functionally irrelevant seems somewhat deflated.

The reason for synthetics is to find weaknesses in a product, because doing so provides predictive power for workloads that weren't benchmarked. This is important, since there are many thousands of different programs and usage models readers have for these CPUs, and if they understand them to the extent that they know which benchmarks their usage correlates with, they can get some idea of how it will behave on the hardware under test.

Another argument for pushing benchmark metaparameters to an extreme is that it aids in system optimization. If we didn't have data on how CPU-bottlenecked a game was, then debugging & optimizing performance problems would involve a lot more guesswork. Here's another big benefit of having multi-resolution data, in which clear trends can be spotted.

Intel is still releasing on a yearly cycle today; even though the RPL refresh was a joke and shouldn't have been released, it's not normally that bad.
I think the annual refreshes are expected by Intel's partners, as well as helping drive their own revenue. Raptor Refresh wasn't supposed to happen, but Meteor Lake-S got cancelled too late for Intel to make a decent replacement. One can debate whether Raptor Refresh was the appropriate response to that situation, but we should at least understand that releasing it wasn't Intel's original plan for that release.

There's also precedent for Raptor Refresh, if you remember what happened with Broadwell. The desktop version of that CPU was also effectively cancelled (they did eventually release a couple socketed CPUs, but these were late and not intended as a proper generation successor to Haswell).

BTW, Kaby Lake also seems to fit this mold, but I never heard similar rumors giving a rationale for it.

Pick any two releases two years apart from the early 00's, and Zen 5's improvement isn't going to look too great.
Why are you trying to compare it to the era before Dennard Scaling collapsed? This is either an ignorant or a disingenuous argument.

There are at least two other significant trends that've broken down since then, which might've escaped your attention:


I can't remember any two year release window, even including Intel's 4 core eternity, that only saw a 5% increase in gaming performance.
Really? It looks to me like both Haswell and Skylake were examples of such:
 
Man, time to stop arguing with the genius. We've seen so many corner-case and goalpost-moving circular arguments, and yet, even with AMD and Zen 5 supposedly so bad and unreliable and X3D exploding, he will still get a Zen 5 X3D day one. So when the troll radar is on, I don't bother arguing with him. Let him think whatever he wants; it's better that the rest of us discuss logically among ourselves.
No corner cases or goalpost-moving here. My arguments have stayed the same. MT iso-power is what matters when it comes to efficiency. Period.

I never said Zen 5 is so bad or unreliable, or that the X3D chips are exploding; you are just making stuff up (it's called a strawman) because you can't actually argue against what I'm saying. Common tactic. I used to do that as well; then I turned 12 and stopped.
 
No corner cases or goal moving. My arguments have stayed the same. MT iso power is what matters when it comes to efficiency. Period.
And it's fantastic, there! As I've shown, repeatedly.

The only way you get it to look bad (and here's the corner case), is by looking at the PBO Max data for fully-MT workloads. You're so fixated on the 9700X on MT at PBO Max, yet you keep talking about how efficient the i7's are when limited to 88 W. If someone is going to run the 9700X at PBO Max, the same person would likely OC their i7 - certainly not run it below stock TDP! So, we see that it's really a disingenuous argument, rather than reflective of a legitimate scenario.
 
The benchmarks you linked are completely GPU-bound, though.
They were using the high-end GPU configuration, from those reviews. As such, I reasoned they should be minimally GPU-bound, compared to the other configurations tested.

If you have better gaming benchmarks, comparing the i7-2600K vs. the i7-4770K and the i7-4770K vs. the i7-6700K, please feel free to share.
 
And it's fantastic, there! As I've shown, repeatedly.

The only way you get it to look bad (and here's the corner case), is by looking at the PBO Max data for fully-MT workloads. You're so fixated on the 9700X on MT at PBO Max, yet you keep talking about how efficient the i7's are when limited to 88 W. If someone is going to run the 9700X at PBO Max, the same person would likely OC their i7 - certainly not run it below stock TDP! So, we see that it's really a disingenuous argument, rather than reflective of a legitimate scenario.
How is it fantastic there, "as you've shown repeatedly"? I'm not fixated on PBO Max either; I don't know where you got that idea from. I'm saying that at any wattage, whether it's at the 170W PBO Max or at the stock 88W, any Intel chip beats it. By a lot. The 9700X is a bit better than a 12700K and loses to everything else.


Man, I honestly feel like you are trolling me. It scores 1200 points in CB R24 at 88 watts. Anything at or even below its price point scores a lot more at the same watts. It's literally one of the most inefficient CPUs for the price in MT performance. Do you realize that a 13700K at 88W scores higher than the 9700X at 170 watts? How can you keep telling me it's efficient? I don't get it. It's good in ST performance and efficiency; it's horrible in MT performance and efficiency. That's about it.