News: AMD announces Zen 5 Ryzen 9000 processors launching in July — four new Ryzen 9, 7, and 5 processors with a 16% IPC improvement

It's curious how the TDP dropped so much since Zen 4, except for the 16-core CPU. Does it mean the current chip is actually constrained by its power limit, and the new model can work faster also because it is more efficient? I'm curious now.
I think there is something about Zen 5 preventing them from reliably operating at the higher frequencies the (assumed) N4P node could permit over N5, so they get to fully benefit from the decreased power consumption instead. So it is trivial to drop the TDPs on the 12-core and 8-core without sacrificing performance.

As for the lower base clocks, if you have sufficient cooling and a demanding workload, you'll hit higher clocks instead. They have lowered them to flex on Intel in power efficiency charts. This is just a theory. It could be that Zen 5 has major problems, or the press release had incorrect information.
 
Given RPL & RPR's demonstrated superiority in productivity benchmarks against Zen 4, my expectation is that Zen 5 will achieve rough parity with RPR at the midrange, but with better power efficiency.
No way and no way. There is not a chance the R5 will be on par with RPL i5 and the R7 with the RPL i7. They won't even be close, in raw performance or efficiency.
 
Many things can make a 4090 work.
Depending on what you plan on using the computer for, the cpu may not even matter.
You can never remove all bottlenecks.
A system without bottlenecks would have infinite performance 😛

For gaming, I tend to turn g-sync/v-sync on and as long as my game's fps is running at the max refresh rate of my monitor then any supposed bottlenecks don't matter.
Depends on the game along with stuff like Ray Tracing but none of the current CPUs truly push it...or at least consistently push it.
 
While I've never cared about socket longevity in the past, with AMD effectively confirming at least Zen 6 support (2027+) on AM5, it's on my radar.

I wouldn't be so sure of that; they did not confirm Zen 6 support, only that AM5 will be supported with new processor releases until 2027+. However, as you can see with AM4, they keep releasing variants of processors based on Zen 3+, not processors based on Zen 4 or 5. So while AM4 is technically still supported, it's really been done since the 5800X3D launched.
 
Well, this sucks. A 16% gain to me isn't worth the cost, work, and potential risk of something going wrong in the upgrade. I can't believe how sensitive my PC is to updating and/or changing stuff; ridiculous, really.
I can't argue on the cost/benefit case for you.

But as to the risk, I'm surprised; that hasn't been my experience for quite a few years now, very unlike the "good old days" of Windows NT or Unix, when hardware changes were much more trouble.

I've just done another round of upgrades, where three systems got their CPUs swapped around, a chunk of 128GB RAM got split in two, and finally one 5950X got replaced by a 7950X3D with new DDR5-5600. The others were one Ivy Bridge i7 and one Kaby Lake i7 with 32GB DDR3-2400 each, both replaced by 5800X3Ds with DDR4-3200, while currently retaining a GTX 1080 Ti and an RTX 3070 on a THD and a 3K display respectively. The big system runs an RTX 4090 on a 42" 4K panel with up to 144 Hz variable refresh via a dual-DP KVM.

That meant all mainboards, CPUs and RAM got swapped, dropped, added or somehow moved; all GPUs remained; some storage transformed, e.g. from SATA RAIDs to NVMe, but generally stayed in place for an illusion of "personality" permanence.

Most were Windows 10, some also dual-boot to Windows 11, and others run one or two extra Linux variants, e.g. Proxmox, RHV/oVirt, or just RHEL 8, Fedora 40 or Debian 12.

Most of my systems really have gone from Windows 7 via 10 to 11 with every piece of hardware replaced several times without problems, but I keep those mostly lean and mean. The two smaller systems in this swap belong to my kids, who run games with lots of complex patches and really do not appreciate a fresh install.

Whether I swap the hardware or upgrade the Windows OS, as long as I don't do both at the same time, it's hardly ever been more than one or two reboots, plus extra drivers for RAID, extra sensors or power management. LAN and WiFi tend to require driver updates, but I always keep a really generic USB NIC around to download drivers I forgot to copy into a local folder beforehand.

The nicest surprise is really things like Intel software-RAID boot drives surviving even on AMD boards, or easily being transplanted onto NVMe drives. A 10-year-old Paragon disk manager license still just gets the job done, while a TrueImage from 2016 tends to complain about hardware changes but currently seems to accept them again (that was broken earlier this year). I use both to create backups on some big chunk of spinning rust, because that is a lesson I learned early, more than 40 years ago now.

The biggest bother with Windows is licenses. While my OS installs use MAK keys, Office and various other applications would like to charge extra, and it can be a bit of a bother to reactivate them.

With Linux I usually have to fiddle with the network setup, since it's no longer just /dev/ethN. Sometimes blacklists need updates for GPU passthrough, etc. It's generally a bit more involved, but manageable as long as GPUs aren't changed.

I don't do storage transplants quite as often on Linux, because it's typically self-contained on swappable SATA or USB SSDs, but these days even ZPOOLs tend to re-attach or migrate, as long as you take care to manage versions and features correctly.

Was it worth doing?

Well, for the first time ever, Microsoft's flight simulator no longer sucks on an RTX 4090 with an HP Reverb VR headset.

Since gaming isn't the primary mission, that system used a 5950X before.

In case you don't use VR for FS: all your plane windows basically become screens, and the outside world view is projected onto them. With that you have two rather decoupled update streams. Your head movement means the plane's interior needs to reflect your motion, and that's been fast enough for a long time. But the outside scenery viewed through the windows needs to update, too, and in the past that's often been really stuttery, perhaps even below 20 Hz, completely killing the illusion of flight, even if the head perspective is updated at 90 Hz or better.

Instead the illusion is that of sitting in a nice big commercial airplane simulator that just happens to be broken...

Evidently all that requires more horsepower than just driving a 4k display and I couldn't be sure if that was a CPU or a GPU bottleneck.

To me a flight simulator without VR makes no sense at all, so essentially it was a dud for years.

Swapping the 5950X for a 7950X3D did the trick for me; frame rates aren't always perfectly smooth for the outside world view, but good enough to create the illusion I was aiming for.

I can't say how much of that improvement is due to 3D V-cache or the higher speed/IPC, the RTX 4090 and storage remained the same.

Of course, now I just notice all the more how badly Microsoft generates the "ground truth": unless you're in an area with hand-digitized buildings, the generated stuff is completely wrong in building styles, and the generated ground activity, like cars moving around, is totally nonsensical: cars going in circles, driving into buildings or rivers, appearing and disappearing in the oddest manners, etc.

Can't really beat the uncanny valley, only move it.

Every other game I've tried also runs pretty smoothly now. ARK ASCEND is the next most demanding title in my library, and there, with DLSS, frame rates generally fall somewhere between the 60 and 144 FPS my 4K monitor is capable of.

Would a 5800X3D have been good enough?

For gaming, quite possibly. But again, that's not the use case which pays for the machine. And for that stuff the extra cores were just more important.

I took a bet on the 9950X3D not coming out until next year and jumped on the 7950X3D instead. So far I have no regrets, and an upgrade should be doable if I need those extra few %.
 
No mention about NPU?
Just posted the same thing. I was thinking this would be the most compelling reason to wait. Without it, it's just a slightly faster 7000 series, and given that a 7950X is now just over $500, you're getting no added value, assuming the 9950X will have a similar MSRP to the 7950X and a street price to match, in the range of $799.
 
I am running my first AMD processor in decades, the 7950X3D, and am very satisfied with it. I wonder if the next generations of X3D will overcome some of the downsides of the current generation. It would be nice if they could either allow overclocking or put the stack on both dies, or better yet, do both.
I had already run a 5800X, 5950X and 5800X3D before recently adding the 7950X3D.

And I remember also being initially disappointed not to have a variant with dual V-cache CCDs as an option to purchase.

But with the testing that I've done across all of them, I begin to think that AMD really went for the genius option on the 7950X3D, and we're likely to see a repeat with Zen 5.

Stacking chips will always cost top clocks; that can't be avoided. So there is no having both.
And overclocking really is no longer an extra; it's part of normal operation.

Some workloads will always profit more from faster clocks while others will profit from bigger caches.

As long as neither type of workload exceeds 8 cores, both can be best served via this hybrid setup: you effectively get a 7800X and a 7800X3D in one package and can switch between them on the fly.

And that really covers a lot of territory.
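That on-the-fly switching is normally handled by AMD's chipset driver and the OS scheduler, but you can make the same choice by hand by pinning a workload to one CCD. A minimal Linux sketch, assuming the commonly reported (but board-dependent, so verify with `lscpu -e`) mapping of logical CPUs 0-7 to the V-cache CCD on a 7950X3D:

```python
import os

# Assumed layout: logical CPUs 0-7 = the V-cache CCD (check with `lscpu -e`).
VCACHE_CCD = set(range(8))

def pin_to_ccd(pid: int = 0, ccd: set = VCACHE_CCD) -> set:
    """Restrict a process (0 = the current one) to one CCD's cores."""
    target = ccd & os.sched_getaffinity(pid)  # keep only cores that actually exist
    os.sched_setaffinity(pid, target)
    return os.sched_getaffinity(pid)

print(sorted(pin_to_ccd()))  # now scheduled only on the "7800X3D half"
```

The same effect per launch is `taskset -c 0-7 ./game`; pinning to the other CCD instead gives you the frequency-optimized "7800X half".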

Once you stray into real workstation workloads that load all 16 cores, the lower top clocks on the V-cache CCD won't matter, because the energy budget won't allow you to reach them anyway: watts per clock beyond the knee of the CMOS curve grow exponentially, so while a single core may reach top clocks at 25 Watts, 16 × 25 Watts is long past melt-down, and cores have to stay below 5 Watts each, way below even the point where V-cache cores get too hot.
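The arithmetic behind that argument is easy to sketch. The 25 W single-core figure comes from the paragraph above; the 230 W socket limit and 40 W uncore share are round-number assumptions for illustration, not AMD specs:

```python
PER_CORE_BOOST_W = 25   # roughly what one core draws at its top boost clock
CORES = 16

# If every core tried to boost the way a lone core does:
all_core_demand = CORES * PER_CORE_BOOST_W
print(all_core_demand)                  # 400 W of core power alone: melt-down

# Under an assumed 230 W socket limit, with ~40 W reserved for IOD/fabric,
# the per-core allowance in an all-core load is a fraction of the boost figure:
per_core_allowance = (230 - 40) / CORES
print(f"{per_core_allowance:.1f}")      # well below the 25 W boost territory
```

Whatever the exact budget, dividing it by 16 always lands far under what a single boosting core may draw, which is why all-core clocks converge regardless of V-cache.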

On all of these workloads, e.g. large compile jobs or loads like Cinebench where all 16 cores clock the same, there is no V-cache speed penalty. I haven't tried getting a 7950X (no 3D) to see if its on-paper 170 Watt TDP actually allows slightly higher clocks. Depending on just how vector-heavy the code is, my 7950X3D will actually pull 150 Watts, but still not clock the non-V-cache cores higher than the V-cache cores.

I can well imagine that a similar set of extensive tests led AMD toward the Xc architecture, their variant of "compact" cores. Because by the time workloads become parallel enough to fill core counts beyond the "big" ones, TDP-related constraints reduce the clocks to the point where the compact design doesn't run into cooling limits, allowing for much less dark silicon for cooling and additional cores instead. Again, the type of genius AMD gets much too little credit for, IMHO.

I don't know if we'll see V-caches that can go into some type of "passive power save mode", to enable slightly higher core clocks, but I'm pretty sure that too has been modelled at one point.

The only thing that seems to grow nearly without limits is the distinct number of compromises possible as numbers and styles of cores multiply.
 
I feel like when AMD said they would extend AM5 socket support from 2025 to beyond 2027, they really meant to switch over to AM6(?) by 2025 while keeping general AM5 support alive: newer CPUs released in 2026 or after will require AM6(?), but they'll keep releasing very minor improved 10700XT/XT2 parts beyond 2025, etc. So it doesn't really matter, as all the newest features like PCIe 6.0, NVMe 3.0, USB4 Gen 2, etc. will require AM6(?). It's not like a Ryzen 9 11900X will be supported on AM5 or anything, so I'll curtail my expectations.
Not saying it isn't a good thing that they try to keep at least minimal backward support for so many years; just saying people should be cautious with their optimism.
 
My take on Zen 5 is this:

You can criticise them for not going all out on every part of the architecture (CCD, IOD and chipset) to achieve the best the current state of the art could possibly deliver.

Or you can appreciate how they were able to extract the maximum value from the smallest change, altering nothing but the CCD dies and a couple of numbers on the marketing materials.

I'd focus on the latter and score another one for AMD doing something very smart that Intel just can't match in terms of value per effort, and that might critically sap Intel's ability to compete in a field where new players are entering with plenty of hunger.
 
Just posted the same thing. I was thinking this would be the most compelling reason to wait. Without it, it's just a slightly faster 7000 series, and given that a 7950X is now just over $500, you're getting no added value, assuming the 9950X will have a similar MSRP to the 7950X and a street price to match, in the range of $799.
Ars Technica has much better coverage. In short, these have no NPU; only the new AI 300 laptop chips do.


Also this, for those it may apply to:

https://www.tomshardware.com/tech-i...gh-performance-gpus-for-ai-workloads-and-more
 
Given years of hype about Zen 5 being a more sweeping uarch redesign, I'm a bit surprised at how incremental this improvement is. Oh well, it looks like a solid enough upgrade for my 8700K at any rate.
 
Are you saying that since the TDP is lower then the overall performance will only be 5% higher? If not then I have no idea what you are trying to get at.
I'm saying that if the difference in PPT is the same then how are they going to be any faster?
If TDP is reduced by 30-40% and PPT is reduced by the same amount then the new CPUs will run with 30-40% less maximum power, but the ~15% IPC increase is only there if you run at the same power as the previous gen.

So for me this means that TDP will be lowered (if this report is true in the first place) but PPT will remain the same as now so that these CPUs can show some improvement in performance.
 
The 65W caught my eye... :) it's the TDP I currently run at.
Well, let's see how the new chips behave.
And what Intel will have for us.
I'm running a 5700X; the reason I didn't upgrade to AM5 when I was looking to move up from my B450/Ryzen 7 2700 was the TDP of first-gen AM5.

I now know what my next upgrade will be when the 11000 series pops out in 18 - 24 months from now.
 
I'm saying that if the difference in PPT is the same then how are they going to be any faster?
If TDP is reduced by 30-40% and PPT is reduced by the same amount then the new CPUs will run with 30-40% less maximum power, but the ~15% IPC increase is only there if you run at the same power as the previous gen.

So for me this means that TDP will be lowered (if this report is true in the first place) but PPT will remain the same as now so that these CPUs can show some improvement in performance.
AFAIK PPT cannot be changed independently; it's a fixed function of the CPU's TDP. However, that doesn't mean a CPU can't be faster while drawing less power. Remember, this is Zen 5, not Zen 4. While the 7600X could hit up to 125W draw, that doesn't mean the 9600X will require the same amount of power for the same performance; the efficiency of the core comes into play here. Unlike the last few generations of Intel Core, the Ryzen CPUs actually are quite efficient. Even though the 7950X draws more power than the 5950X, it is at worst as efficient as the 5950X in benchmarks. This also shows in that at a 65W TDP the 7950X is usually faster than the 13900K until the latter gets a 125W TDP. Then at 105W the 13900K needs to run power-unlimited to be faster, yet the 7950X only gains about 7% more performance going from a 105W TDP to a 170W TDP. That means the efficiency band for Zen 4 was in the 65-105W range, and the higher TDPs were just to eke out absolute top performance. This is why AMD can lower the TDP on Zen 5 AND get better performance.
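For reference, that fixed TDP-to-PPT relationship is commonly reported for AM4/AM5 Ryzen as PPT ≈ 1.35 × TDP; a rule of thumb derived from published limits, not an official formula guaranteed for every SKU:

```python
def ppt_from_tdp(tdp_watts: float) -> int:
    """Socket power limit (PPT) from rated TDP, per the ~1.35x rule of thumb."""
    return round(tdp_watts * 1.35)

for tdp in (65, 105, 170):
    print(f"{tdp} W TDP -> {ppt_from_tdp(tdp)} W PPT")
# 65 -> 88, 105 -> 142, 170 -> 230, matching the commonly cited Ryzen limits
```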
 
More surprising (but not really) is that it's pegged against 14900K and not KS.
KS is a specialty part. That provides some justification not to compare themselves against it, and using the regular K just makes them look better. So, why would they?

Of course, the showdown everyone is waiting for will be against ARL, but with its Q4 release, we'll have to wait some more months for that to happen.
Some leaked details about Arrow Lake suggested the P-cores are only a couple % faster on single-threaded workloads. If true, then this could be a more even match up than the R9 7950X vs. i9-14900K.

The other aspect I'm a bit surprised about is that desktop parts get no NPU love.
I'm not. In an interview with one of their VPs, last year, AMD said their NPUs were mainly about perf/W and thus aimed at laptops. If you compare their TOPS to desktop dGPUs, it's no contest. So, it makes sense for desktops to get NPUs primarily if they're using the iGPU (in which case you probably have an APU and not one of the chiplet-based CPUs).

I'm wondering if that's because XDNA2 (AMD's NPU) is tied to the RDNA3.5 iGPU? And since desktop only gets wimpy iGPU, then XDNA2 is a nonstarter?
No, almost certainly not. The XDNA NPUs are a fundamentally different architecture and a distinct block. I don't see that changing with XDNA2.

there are plenty of business desktop PCs w/o discrete GPUs.
And those will increasingly use APUs. AMD already released Phoenix for AM5, in case you didn't hear.
 