News Two AMD Zen 5 CPUs may receive a significant performance uplift — Ryzen 7 9700X and Ryzen 5 9600X rumored for 105W TDP option with next AGESA update

Status
Not open for further replies.
Raise the TDP and you allow the processor to draw more power. This has only a small effect on single-threaded performance, but it lets multithreaded workloads perform better.
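To see why, here's a toy model of how a fixed package-power budget caps all-core clocks while leaving a single boosted core largely unaffected. The 1.35 PPT-to-TDP ratio matches AMD's usual AM5 convention, but the base frequency and per-core power figures below are made-up illustrative assumptions, not measured values:

```python
# Toy model (illustrative numbers only): why a higher package power limit
# helps multi-threaded workloads far more than single-threaded ones.
# Assumes dynamic power scales roughly with f^3 (via V^2 * f).

def allcore_freq(package_w, cores, base_freq_ghz=3.8, base_core_w=8.0):
    """Highest per-core frequency where cores * power(freq) fits the budget."""
    budget_per_core = package_w / cores
    # power(f) = base_core_w * (f / base_freq_ghz)**3  ->  solve for f
    return base_freq_ghz * (budget_per_core / base_core_w) ** (1 / 3)

for tdp in (65, 105):
    f = allcore_freq(package_w=tdp * 1.35, cores=8)  # PPT ~= 1.35 * TDP on AM5
    print(f"{tdp}W TDP: ~{f:.2f} GHz all-core")

# A single-threaded workload loads one core, which fits well under either
# budget, so its boost clock (and score) barely moves when TDP is raised.
```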
For completeness, here's the corresponding single-threaded composite performance graph:

[Image: single-threaded composite performance graph]

Source: https://www.tomshardware.com/pc-components/cpus/amd-ryzen-5-9600x-cpu-review/3
We see that PBO speeds up the 9700X by just 2.8% and the 9600X by 3.3%. That's more than TechPowerUp found in CB R24 (ST), though Paul's testing also showed greater benefits there (similar to these composite scores).


I guess let's look at 1080p gaming:
Here, we see PBO deliver a 7.6% improvement for the 9700X, while the 9600X gets a 7.4% improvement. Again, this is better than what TechPowerUp found.
 
What about NON-GAMING workloads? Based on the HU video, the uplift in common apps like Adobe, Blender, etc. is minimal, almost none. Unless it's decently cheaper than the 7000 series, it's a waste of sand.
Go check out Phoronix's reviews along with Wendell's reviews.

Just because it's not great for Gaming doesn't mean there aren't uses for it.

It just wasn't designed to appeal to "Gamers".
 
Go check out Phoronix's reviews along with Wendell's reviews.

Just because it's not great for Gaming doesn't mean there aren't uses for it.

It just wasn't designed to appeal to "Gamers".
I don't believe its main issues with gaming actually go as deep as hardware design. Furthermore, I think they are potentially addressable via software & microcode, but we'll see what they can actually manage to do, there.

As for non-gaming uses, they certainly did beef up its vector/floating-point performance quite a lot.
 
I don't believe its main issues with gaming actually go as deep as hardware design. Furthermore, I think they are potentially addressable via software & microcode, but we'll see what they can actually manage to do, there.

As for non-gaming uses, they certainly did beef up its vector/floating-point performance quite a lot.
I know they have a Cross-CCD Latency Penalty / Regression bug to work out.

AnandTech's cross-core latency test shows MAJOR regressions. I'm assuming it's the rapid, "too aggressive" drive to put the cores to sleep that's causing them.

But that's just one of many issues. According to MLiD's internal leaks, there have been major internal problems with the development of Zen 5.
 
Looks like AMD CPUs may soon get a small performance uplift in Windows, perhaps across the board, due to a Windows bug that cuts 3-4% of performance. Combined with higher power limits, that's a significant uplift, though at the expense of power consumption.



 
But that's just one of many issues. According to MLiD's internal leaks, there have been major internal problems with the development of Zen 5.
Leaker channels, MLiD, Coreteks etc. hype releases up too much pre launch. I guess it gets views and buys them beer. They then go on to say, after launch.. “didn’t meet expectations and the fault is”…cue another leak.. more views, more beer.

There is no mileage in a review that says "this widget is in line with realistic expectations", so the banner is pure clickbait. It's either exceptional or exceptionally bad.

Intel was milking the same basic design, improved iteratively, up to the 9900K; then a change in design philosophy was implemented: P-cores and E-cores. This saw fruition with the 12th gen.
AMD have been milking Zen and improving it iteratively, but the core design must be well within the area of diminishing returns regarding improvements.
I'm not criticising either company there; the big gains are made early in the lifetime of a particular concept, and as the low-hanging fruit is knocked off, gains become harder and more obscure… then comes revolution and a new concept.

Hypothesising… whatever was there in the 3000- to 7000-series Zen chips regarding latency was good; perhaps with some new instructions or configuration, the inter-CCD latency was seen to cause an instability? Increase latency to mitigate it? I don't know, I'm just throwing it out there. If something is working well, you don't fix it unless there is a reason.
 
Leaker channels, MLiD, Coreteks etc. hype releases up too much pre launch. I guess it gets views and buys them beer. They then go on to say, after launch.. “didn’t meet expectations and the fault is”…cue another leak.. more views, more beer.
Not only that, but they almost need to find someone to "blame" for it falling short of expectations, so that people forget it was them who helped hype up those expectations.

I wish there were some kind of blowback for these folks, but they're operating in a largely consequence-free zone, with the incentives heavily weighted towards being sensationalist. Perhaps the best we can hope for is that people eventually learn to take them with a grain of salt and stop paying so much attention to them.

Intel was milking the same basic design, improved iteratively, up to the 9900K; then a change in design philosophy was implemented: P-cores and E-cores. This saw fruition with the 12th gen.
AMD have been milking Zen and improving it iteratively, but the core design must be well within the area of diminishing returns regarding improvements.
I'm not criticising either company there; the big gains are made early in the lifetime of a particular concept, and as the low-hanging fruit is knocked off, gains become harder and more obscure… then comes revolution and a new concept.
Wow, that's a thoughtful perspective.

What I see happening is that AMD has been dipping its toe into the hybrid realm with its C-cores. They're not as ambitious as Intel's approach, but given how Zen is already more biased towards area-efficiency, I think they don't need to be as extreme as Intel's E-cores. Indeed, Strix Point looks quite good compared to Meteor Lake, both having similar thread counts and process nodes.

Meanwhile, Intel is trying to paper over some scheduling hazards of their hybrid architecture by trying to narrow the gap between P-cores and E-cores, taking them in a direction closer to AMD's C-cores (performance-wise, not architecturally). If this actually hurts perf/area and perf/W, as I expect, then I think this situation is unfortunate.

The real problem is software, but that's such a slow-moving behemoth that I don't see either company even trying to alter its trajectory. To be clear, what I mean is that software uses somewhat antiquated constructs to employ multiple cores. We need some real investment in OS and API technologies, here. Threads need to stop being the basic building block for multi-core scaling.
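For illustration, here's what that task-oriented style looks like using Python's standard library: the program expresses independent units of work and a runtime-managed pool decides how to map them onto cores. The function names here are my own for the sketch, not from any particular API proposal:

```python
# Sketch of the task-based style: work is expressed as small independent
# tasks handed to a runtime-managed pool, rather than hand-placed threads.
from concurrent.futures import ProcessPoolExecutor

def score(chunk):
    """One independent unit of work: sum of squares over a slice."""
    return sum(x * x for x in chunk)

def parallel_sum_squares(data, chunks=8):
    step = max(1, len(data) // chunks)
    parts = [data[i:i + step] for i in range(0, len(data), step)]
    with ProcessPoolExecutor() as pool:  # the runtime decides core placement
        return sum(pool.map(score, parts))

if __name__ == "__main__":
    print(parallel_sum_squares(list(range(1000))))  # 332833500
```

The point of the style is that the code above never creates or joins a thread itself; it only declares work, which lets the runtime (not the application) handle scheduling, core counts, and hybrid-core placement.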

what was there with the 3000 to 7000 Zen chips regarding latency was good, perhaps with some new instructions or configuration the inter CCD latency was seen to cause an instability? Increase latency to mitigate ?
I've heard inter-CCD latency increased due to more aggressive core parking. As I think about it, I'm not sure that theory holds water, given how I believe those benchmarks work. Indeed, ChipsAndCheese doesn't give that argument any play; in fact, they don't even speculate why.


Perhaps we're overthinking this and it was really just something like a chip bug they had to mitigate via relaxed timing.

FWIW, I also think the effect of core-to-core latency seems to be overplayed. It'd be interesting to look for scaling discontinuities in multithreaded benchmarks on single- vs. dual-CCD CPUs, to see how often and how severely they really occur. Perhaps people seizing on the inter-CCD latency problem are just grasping at straws.
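For reference, here's a structural sketch of how a core-to-core "ping-pong" latency benchmark (the kind behind those cross-core latency heatmaps) typically works: two threads pinned to different cores bounce a token through shared memory and the round trips are timed. This is only a sketch under big assumptions: CPython's interpreter and GIL overhead swamp the actual cache-line transfer time, so the absolute numbers are meaningless (real tools implement the loop in C or assembly), the pinning is Linux-only, and the core numbers are placeholders:

```python
# Structural sketch of a core-to-core ping-pong latency benchmark.
# NOT a real measurement: Python overhead dominates; only the shape matters.
import os
import threading
import time

ROUNDS = 2_000
token = [0]  # shared state both threads poll and flip

def player(my_value, next_value, core):
    # Pin this thread to a specific core (Linux only; pid 0 = calling thread).
    if hasattr(os, "sched_setaffinity") and core in os.sched_getaffinity(0):
        os.sched_setaffinity(0, {core})
    for _ in range(ROUNDS):
        while token[0] != my_value:
            time.sleep(0)  # yield; a real benchmark busy-spins on the line
        token[0] = next_value

def round_trip_ns(core_a=0, core_b=1):
    """Average time for one token bounce between the two pinned threads."""
    token[0] = 0
    t = threading.Thread(target=player, args=(1, 0, core_b))
    t.start()
    start = time.perf_counter_ns()
    player(0, 1, core_a)
    t.join()
    return (time.perf_counter_ns() - start) / ROUNDS

if __name__ == "__main__":
    print(f"~{round_trip_ns():.0f} ns per round trip (interpreter-dominated)")
```

Run across every (core_a, core_b) pair, this loop produces the familiar latency matrix; pairs crossing a CCD boundary (or hitting a parked core) show up as the hot cells.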
 