News: AMD announces Zen 5 Ryzen 9000 processors launching in July — four new Ryzen 9, 7, and 5 processors with a 16% IPC improvement

Personally speaking, I hope they never waste space on it, but if the cost is worth it to them, it'll happen.
I still don't really see why an efficiency gap exists between NPUs and GPUs at this level. I'd much rather see a beefed-up GPU with tensor cores, so you get more performance on either workload, as opposed to the kind of specialization among blocks that seems to be a common theme in phone SoCs.
 
XDNA: Xilinx
RDNA: Radeon

If XDNA is similar to what Snapdragon did with their NPU, it likely doesn't occupy as much space as a GPU would for the same amount of TOPS performance.

Which is why Jim Keller said CUDA is mud.
 
  • Like
Reactions: bit_user
It's not the right way to go, AMD; get the clock speeds higher.
Apple has done a good job of showing how much performance can be delivered even at far lower clock speeds.

Meanwhile, due to f/V scaling, higher clock speeds will pretty much always be less efficient. Given that current AMD CPUs are somewhat thermally limited, it makes a lot of sense to me that they wouldn't push too hard on higher clocks (though I'm a bit surprised to see them not increase clocks at all).
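
To illustrate the f/V scaling point, a toy model (all coefficients made up; real V-f curves are part-specific):

```python
# Toy model, made-up coefficients: dynamic power goes roughly as C * V^2 * f,
# and reaching higher f requires higher V, so perf/W drops as clocks climb.
C = 1.0                        # lumped switched capacitance (arbitrary units)
V0, f0 = 1.0, 4.0              # baseline voltage (V) and clock (GHz)

for f in (4.0, 4.5, 5.0, 5.5, 6.0):
    V = V0 * (f / f0) ** 0.8   # assumption: voltage must rise with frequency
    P = C * V**2 * f
    print(f"{f:.1f} GHz: power {P:5.2f}, perf/W {f / P:.2f}")
```

Perf/W in this sketch falls by roughly half going from 4 to 6 GHz, which is the general shape of why chasing clocks is the expensive way to get performance.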

It could turn out that they're holding something back, and water cooling + PBO will be a killer combination on these CPUs. However, they'll have had to sort out their IHS bottlenecks for that to be possible.
 
As for the lower base clocks, if you have sufficient cooling and a demanding workload, you'll hit higher clocks instead.
Base clocks are about power limits, not thermal. If you have a sufficiently intensive workload, the only way to beat base clocks is via PBO or some other overclocking method. And yes, you then must also take care to have adequate cooling so you don't hit thermal limits.
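
For what it's worth, the stock power limit that base clocks are guaranteed against follows from the rated TDP; a quick sketch, assuming the standard AM4/AM5 1.35x PPT factor:

```python
# AMD's stock rule of thumb on AM4/AM5: the package power tracking (PPT)
# limit is ~1.35x the rated TDP. Base clock is what's guaranteed under a
# heavy all-core load at that limit.
for tdp in (65, 105, 120, 170):
    print(f"TDP {tdp:3d} W -> PPT ~{round(tdp * 1.35)} W")
```

So a 170W part can actually pull ~230W at stock before PBO enters the picture.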

They have lowered them to flex on Intel in power efficiency charts.
I wonder if it might have something to do with full width AVX-512. That had a huge power impact for Intel, compared with 256-bit AVX. Even then, if you're not running a heavy AVX-512 workload, I think your Zen 5 should stay well above base clocks.
 
  • Like
Reactions: Thunder64
Still waiting for a Ryzen 3 7300 / 8300. When will budget DDR5 come!?!?

Not sure that is viable any longer. The Ryzen 8500G is pretty cheap for a current-generation chip. Yields just aren't there for a quad core. And the 8400F is more expensive right now, so that's about it.

Intel, on the other hand, still has great quad cores, and you can have your DDR5 too.

Intel build with 12100F / 13100F + 2x16GB DDR5 5600 CL36 + Motherboard = $270-$285

8500G + 2x16GB DDR5 5600 CL36 + Motherboard = $350

You can save like $20 by getting a 2x8GB 4800 CL40 kit, but it's not really worth it.
 
I can't argue on the cost/benefit case for you.

But as to the risk, I'm surprised: it hasn't been my experience for quite a few years now, very unlike the "good old days" of Windows NT or Unix, when hardware changes were much more trouble.

I've just done another round of upgrades, where three systems got their CPUs swapped around, a chunk of 128GB RAM got split into two, and finally one 5950X was replaced by a 7950X3D with new DDR5-5600. The others were an Ivy Bridge i7 and a Kaby Lake i7 with 32GB DDR3-2400 each, both replaced by 5800X3Ds with DDR4-3200, while currently retaining a GTX 1080 Ti and an RTX 3070 on a THD and a 3k display respectively. The big system runs an RTX 4090 on a 42" 4k panel with up to 144 Hz variable refresh via a dual-DP KVM.

That meant all mainboards, CPUs, and RAM got swapped, dropped, added, or somehow moved; all GPUs remained; and some storage transformed, e.g. from SATA RAIDs to NVMe, but generally stayed in place for an illusion of "personality" permanence.

Most were Windows 10, some also dual-boot to Windows 11, and others have one or two extra Linux variants they run, e.g. Proxmox, RHV/oVirt, or just RHEL 8, Fedora 40, or Debian 12.

Most of my systems really have gone from Windows 7 via 10 to 11 with every piece of hardware replaced several times without problems, but I keep those mostly lean and mean. The two smaller systems in this swap were my kids', who run games with lots of complex patches, and they really do not appreciate a fresh install.

Whether I swap the hardware or upgrade the Windows OS, as long as I don't do both at the same time, it's hardly ever been more than one or two reboots, plus extra drivers for RAID, extra sensors, or power management. LAN and WiFi tend to require driver updates, but I always keep a really generic USB NIC around to download the drivers I forgot to copy into a local folder beforehand.

The nicest surprise is really things like Intel software RAID boot drives surviving even on AMD boards, or being easily transplanted onto NVMe drives. A 10-year-old Paragon disk manager license still just gets the job done, while a TrueImage from 2016 tends to complain about hardware changes but currently seems to accept them again (that was broken earlier this year). I use both to create backups on some big chunk of spinning rust, because that is a lesson I learned early, more than 40 years ago now.

The biggest bother with Windows is licenses. While my OS installs use MAK keys, Office and various other applications would like to charge extra, and it can be a bit of a bother to reactivate them.

With Linux I usually have to fiddle with the network setup, since it's no longer just /dev/ethN. Sometimes blacklists need updates for GPU passthrough, etc. It's generally a bit more involved, but manageable as long as GPUs aren't changed.
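
For anyone who hasn't hit this: a minimal Python sketch (Linux-only, assumes sysfs is mounted) to list the new "predictable" interface names and their MACs after a board swap, so you can fix up configs:

```python
# List network interfaces the way the kernel names them now
# (e.g. enp3s0 / wlp4s0 instead of eth0 / wlan0), with MAC addresses.
import os

for name in sorted(os.listdir("/sys/class/net")):
    with open(f"/sys/class/net/{name}/address") as f:
        mac = f.read().strip()
    print(f"{name}: {mac}")
```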

I don't do storage transplants quite as often on Linux, because it's typically self-contained on swappable SATA or USB SSDs, but these days even ZPOOLs tend to re-attach or migrate, as long as you take care to manage versions and features correctly.

Was it worth doing?

Well, for the first time ever, Microsoft's flight simulator no longer sucks on an RTX 4090 using an HP Reverb VR headset.

Since gaming isn't the primary mission, that system used a 5950X before.

In case you don't use VR for FS: all your plane windows basically become screens, and the outside world view is projected onto them. With that, you have two rather decoupled update streams. Your head movement means the plane's interior needs to reflect your movement, and that's been fast enough for a long time. But the outside scenery viewed through the windows needs to update, too, and in the past that's often been really stuttery, perhaps even below 20 Hz, completely killing the illusion of flight, even if the head perspective is updated at 90 Hz or better.

Instead, the illusion is that of sitting in a nice big commercial airplane simulator that just happens to be broken...
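
To make the decoupling concrete, a toy loop with assumed rates; the cockpit re-renders every headset frame, but the view through the windows only advances at the scenery rate:

```python
# Toy illustration, rates assumed: long runs of the same scenery frame
# show up as visible stutter outside the windows, even though head
# tracking stays perfectly smooth.
headset_hz = 90     # head-pose / cockpit update rate
scenery_hz = 18     # assumed outside-world update rate
for i in range(10):
    t = i / headset_hz
    print(f"headset frame {i:2d}: scenery frame {int(t * scenery_hz)}")
```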

Evidently all that requires more horsepower than just driving a 4k display, and I couldn't be sure whether it was a CPU or a GPU bottleneck.

To me a flight simulator without VR makes no sense at all, so essentially it was a dud for years.

Swapping the 5950X for a 7950X3D did the trick for me; frame rates aren't always perfectly smooth for the outside world view, but good enough to create the illusion I was aiming for.

I can't say how much of that improvement is due to 3D V-Cache versus the higher speed/IPC; the RTX 4090 and storage remained the same.

Of course, now I just notice all the more how badly Microsoft generates the "ground truth": unless you're in an area with hand-digitized buildings, the generated stuff is completely wrong in building styles, and the generated ground activity is totally nonsensical, with cars going around in circles, driving into buildings or rivers, appearing and disappearing in the oddest manners, etc.

Can't really beat the uncanny valley, only move it.

Every other game I've tried also runs pretty smoothly now. ARK ASCEND is the next most demanding title in my library, and there, with DLSS, frame rates generally fall somewhere between the 60-144 FPS my 4k monitor is capable of.

Would a 5800X3D have been good enough?

For gaming, quite possibly. But again, that's not the use case which pays for the machine. And for that stuff the extra cores were just more important.

I took a bet on the 9950X3D not coming out until next year and jumped on the 7950X3D instead. So far I have no regrets, and an upgrade should be doable if I need those extra few %.
I love your contributions, truly! However, even for me, many of your posts are just too long and meandering. I think it would both save you time and get more people to actually read your posts, if you could keep them more focused and to-the-point.

That's just my opinion. Feel free to disregard.
 
Apple has done a good job of showing how much performance can be delivered even at far lower clock speeds.

Meanwhile, due to f/V scaling, higher clock speeds will pretty much always be less efficient. Given that current AMD CPUs are somewhat thermally limited, it makes a lot of sense to me that they wouldn't push too hard on higher clocks (though I'm a bit surprised to see them not increase clocks at all).

It could turn out that they're holding something back, and water cooling + PBO will be a killer combination on these CPUs. However, they'll have had to sort out their IHS bottlenecks for that to be possible.
I agree; an unending march for higher clock speeds no matter the cost has already pushed AMD and Intel to uncomfortable levels this generation, especially for Intel. We're also talking about base clocks and not anything under load; turbo is boosted in most cases, and regardless, the CPU will jump to whatever clock speeds are needed to accomplish the workload. And does anyone remember the Intel NetBurst (Pentium 4) debacle? Clock speeds flew through the roof, yet AMD chips clocked 1+ GHz slower were outperforming their Intel counterparts.

I'm impressed that we're looking at ~16% IPC on top of those efficiency gains. It also leaves room for AMD to introduce "fresh" models later that are clocked higher and have higher TDPs. They're obviously now using the 'XT' suffix on CPUs, so there's one option.
 
Given years of hype about Zen 5 being a more sweeping uarch redesign, I'm a bit surprised at how incremental this improvement is.
A 16% IPC increase is way bigger than any generational improvement Intel delivered in the entire decade from Sandy Bridge to Alder Lake, or since! Plus, given their stated efficiency gains, if it does a better job of boosting, then we should see a greater net improvement than just the IPC uplift.
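
As a back-of-envelope (the boost figure below is purely hypothetical), the gains stack multiplicatively:

```python
# Net speedup = IPC uplift x clock uplift.
ipc_uplift = 1.16      # AMD's claimed average IPC gain for Zen 5
boost_uplift = 1.03    # assumed 3% better sustained clocks from efficiency
print(f"net speedup: {ipc_uplift * boost_uplift:.2f}x")   # ~1.19x
```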

Geez... people are so spoiled, these days.
 
>Ars Technica has much better coverage. In short, these have no NPU, only the new AI 300 laptop chips do.

Ars and THW have a different focus. Ars tech pieces are geared toward "all-around" techies, while THW is intended more for chipheads, delving deep into chip architecture details that are basically esoterica for non-chipheads. This is readily apparent in their pieces covering AMD's Ryzen AI 300 announcement.

https://arstechnica.com/gadgets/202...laptop-cpus-are-its-first-for-new-copilot-pcs

https://tomshardware.com/pc-compone...sity-cores-come-to-ryzen-9-for-the-first-time

The Ars piece is more succinct, covering the main points of interest to normal users (and techies). It also ignores the hype and reads between the lines better. The THW piece, meanwhile, is all about chip architecture. There's so much chip esoterica that it leaves the general reader in the dust, while leaving out aspects that matter to them.

So, yes, the Ars piece is "better" for regular people.

Speaking of AMD's mobile vs. desktop announcements, I'm struck by how many more advancements go into the mobile parts than the desktop parts. Ryzen 9000 can best be summed up by "incremental perf bump" and "more power efficiency," the latter of which arguably doesn't get much traction on desktop. In a word: underwhelming.

This is what competition brings. As with desktop GPUs, desktop CPUs are now stuck in the backwater of development. It's an afterthought. Hopefully Arrow Lake won't be as staid.

I'm more excited to see AMD's new mobile parts in action. They should go head-to-head against Lunar Lake. These parts will be in the premium segment, so not directly pertinent to me personally, but they will give a good outlook on what we can expect for mainstream mobile come 2025.
 
  • Like
Reactions: bit_user
XDNA: Xilinx
RDNA: Radeon

If XDNA is similar to what Snapdragon did with their NPU, it likely doesn't occupy as much space as a GPU would for the same amount of TOPS performance.
The XDNA block in Phoenix was tiny, but also not fast. They're promising a 3x increase, over and above the ~50% bump they got from clocking it higher. At that point, I doubt you can argue that its silicon footprint won't be significant, if not substantial, next to the iGPU. Below, you can see it's going to be like 2/3rds the size of the iGPU!

[Image: annotated die shot showing the XDNA NPU block at roughly two-thirds the size of the iGPU]


The best I can guess about why XDNA is more efficient goes like this: GPUs rely very heavily on SMT and rapid context-switching for latency hiding, because rendering involves tons of random memory accesses and long, unpredictable latencies. With NPUs, data access patterns are a lot more regular, making latencies more predictable, so the software can better accommodate them. This saves you from having to burn so much die space on thread contexts (i.e. register sets for all the threads in flight) and the supporting infrastructure.

If it's not that, then I'm truly flummoxed.
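
A rough back-of-envelope of that latency-hiding argument; every number here is invented purely to show the shape of it:

```python
# Context storage needed to hide DRAM latency with massive multithreading
# (GPU-style) vs. a statically scheduled NPU that prefetches on a known
# schedule and only needs a few buffered contexts' worth of state.
dram_latency_cycles = 400                 # assumed miss latency
issue_width = 4                           # ops started per cycle per block
threads_in_flight = dram_latency_cycles * issue_width

regs_per_thread = 64                      # assumed registers per thread
bytes_per_reg = 4
gpu_context_bytes = threads_in_flight * regs_per_thread * bytes_per_reg

npu_contexts = 3                          # e.g. triple-buffered DMA
npu_context_bytes = npu_contexts * regs_per_thread * bytes_per_reg

print(f"GPU-style context storage: {gpu_context_bytes / 1024:.0f} KiB")  # ~400 KiB
print(f"NPU-style context storage: {npu_context_bytes / 1024:.1f} KiB")  # ~0.8 KiB
```

Per compute block, that's orders of magnitude less register-file area when the data movement is predictable enough to schedule in software.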
 
I've come to hate when manufacturers start citing IPC because they always obfuscate what they're even talking about and their benchmarking is suspect.

For example, at least a few of the games were titles where AMD is already as good as or better than Intel with non-X3D Zen 4 parts. They also tested with MSI's "Intel Default" setting, which has a 125W PL1, and with VBS on (unsure how this impacts AMD vs. Intel, but it's generally disabled for performance).
I wonder if it might have something to do with full width AVX-512. That had a huge power impact for Intel, compared with 256-bit AVX. Even then, if you're not running a heavy AVX-512 workload, I think your Zen 5 should stay well above base clocks.
This is the only thing I can think of, since the 9950X's base clock is lower than the 7950X's despite being on a refined process node and having the same TDP. The only other thing that would potentially be using more power is the memory controller, but that's such a small amount it would have no real impact.

The main thing I wonder about with the TDP reductions is where this leaves the single-CCD non-X SKUs.
 
  • Like
Reactions: bit_user
I've come to hate when manufacturers start citing IPC because they always obfuscate what they're even talking about and their benchmarking is suspect.

For example, at least a few of the games were titles where AMD is already as good as or better than Intel with non-X3D Zen 4 parts. They also tested with MSI's "Intel Default" setting, which has a 125W PL1, and with VBS on (unsure how this impacts AMD vs. Intel, but it's generally disabled for performance).
The second paragraph isn't an example of the first, I think. I've only seen AMD talk about IPC increases vs. its prior generation.

However, since you mention it, I do wonder whether Toms will benchmark Ryzen 9000 CPUs against Intel 14th gen only at Intel's recommended Extreme settings, or put those up against AMD with PBO, while also testing each CPU at the latest default settings?
 
AFAIK PPT cannot be changed and is a fixed function of the CPU's TDP. However, that doesn't mean a CPU cannot be faster while drawing less power. Remember, this is Zen 5, not Zen 4. While the 7600X could hit up to 125W draw, that doesn't mean the 9600X will require the same amount of power for the same performance. The efficiency of the core comes into play here. Unlike the last few generations of Intel Core, the Ryzen CPUs actually are quite efficient. Even though the 7950X draws more power than the 5950X, it is at worst as efficient as the 5950X in benchmarks.

This is also seen in that, at a 65W TDP, the 7950X is usually faster than the 13900K until the latter gets a 125W limit. Then at a 105W TDP the 13900K needs to be unlimited to be faster; however, the 7950X only gains about 7% more performance going from a 105W TDP to a 170W TDP. That means the efficiency band for Zen 4 was in the 65-105W range, and the higher TDPs were just to squeak out absolute top performance. This is why AMD can lower the TDP on Zen 5 AND get better performance.
Where are you getting these numbers from? Nothing you said is remotely true.

The 7950x is getting smoked by the 5950x in MT efficiency at stock.

Also, the 7950X at 65W is nowhere near a 13900K at 125W; the 13900K is literally 50% faster in something like CB R23. And an unlimited 13900K is faster than a 105W 7950X; it's not even a contest.

I get it, we all like AMD, but let's just stop making up stuff, yes?
 
>Ars Technica has much better coverage. In short, these have no NPU, only the new AI 300 laptop chips do.

Ars and THW have a different focus. Ars tech pieces are geared toward "all-around" techies, while THW is intended more for chipheads, delving deep into chip architecture details that are basically esoterica for non-chipheads. This is readily apparent in their pieces covering AMD's Ryzen AI 300 announcement.

https://arstechnica.com/gadgets/202...laptop-cpus-are-its-first-for-new-copilot-pcs

https://tomshardware.com/pc-compone...sity-cores-come-to-ryzen-9-for-the-first-time

The Ars piece is more succinct, covering the main points of interest to normal users (and techies). It also ignores the hype and reads between the lines better. The THW piece, meanwhile, is all about chip architecture. There's so much chip esoterica that it leaves the general reader in the dust, while leaving out aspects that matter to them.

So, yes, the Ars piece is "better" for regular people.

Speaking of AMD's mobile vs. desktop announcements, I'm struck by how many more advancements go into the mobile parts than the desktop parts. Ryzen 9000 can best be summed up by "incremental perf bump" and "more power efficiency," the latter of which arguably doesn't get much traction on desktop. In a word: underwhelming.

This is what competition brings. As with desktop GPUs, desktop CPUs are now stuck in the backwater of development. It's an afterthought. Hopefully Arrow Lake won't be as staid.

I'm more excited to see AMD's new mobile parts in action. They should go head-to-head against Lunar Lake. These parts will be in the premium segment, so not directly pertinent to me personally, but they will give a good outlook on what we can expect for mainstream mobile come 2025.
I missed the article about the laptop processors on Toms, which bit_user helpfully linked.
 
  • Like
Reactions: bit_user
The second paragraph isn't an example of the first, I think. I've only seen AMD talk about IPC increases vs. its prior generation.
Look at their "IPC" slide and the weird way they're showing it, along with testing the 9950X vs. the 7700X. Their claim of "~16%" might end up being accurate, but there's nothing really believable in the way they're presenting it.
However, since you mention it, I do wonder whether Toms will benchmark Ryzen 9000 CPUs against Intel 14th gen only at Intel's recommended Extreme settings, or put those up against AMD with PBO, while also testing each CPU at the latest default settings?
I'm assuming AMD won't power-limit their X SKUs, and Intel currently has nebulous power settings, so it'll be a mess no matter what.
 
Where are you getting these numbers from?
Professional reviews like Tom's Hardware and AnandTech.

The 7950x is getting smoked by the 5950x in MT efficiency at stock.
Not true at all.

Also, the 7950X at 65W is nowhere near a 13900K at 125W; the 13900K is literally 50% faster in something like CB R23. And an unlimited 13900K is faster than a 105W 7950X; it's not even a contest.
Again, not true at all. Read ALL the benchmarks and you will see that the 7950X at a 65W TDP is, more often than not, faster than the 13900K at 105W. Every benchmark has the 105W TDP 7950X being faster than the 13900K at all TDPs EXCEPT stock (253W).

I get it, we all like AMD, but let's just stop making up stuff, yes?
The thing is, I'm not making anything up. This is what professional reviews show. I trust them over some random forum poster who says they "have all of them and have tested them all and know better than everyone."
 
>I missed the article about the laptop processors on Toms, which bit_user helpfully linked.

That's another issue with the THW layout: "main" news pieces are quickly pushed off the front page by filler pieces, even on the same day. The reader has to hunt for what scrolled off, which few people bother to do.

Ironically, Ars has the opposite problem. Its content volume has dropped drastically. I check on it about once a week.
 
  • Like
Reactions: Co BIY and bit_user
Ironically, Ars has the opposite problem. Its content volume has dropped drastically. I check on it about once a week.
Same with AnandTech. Their layout allows more to show on the front page, but the volume is so low that you could go a month between visits with very little risk of missing anything. They do still cover some things Toms doesn't, but I'm now down to weekly visits.
 
  • Like
Reactions: TheSecondPower