News AMD updates Zen 5 Ryzen 9000 benchmark comparisons to Intel chips — details Admin mode boosts, chipset driver fix

Mar 10, 2020
420
385
5,070
Here is a slightly different perspective and take on these gaming benchmarks:
Back in the day, having a Pentium 60 or a Pentium 120 could make the difference between "completely unplayable" and "sufficiently playable" gaming.
I remember those days: 486s scaling from 20MHz to 120MHz+ from AMD, 100MHz from Intel… the Pentium coming in at 60/66MHz and scaling to 233MHz… the Pentium Pro/II/III starting at 200MHz and scaling to 1GHz. New, exciting instructions; games even saw a bump! Huge overclocks that you could use daily without frying your chip.

The race was tense: who was going to get the crown? Clock bumps were regular, innovation was rapid... on-motherboard (L2), on-chip (L1), on-package (L2) and then on-chip (L2) caches, parallelised instruction sets. MMX, then SSE, the now-discontinued 3DNow!, die shrinks from 800nm... the incredibly imaginative Transmeta Crusoe chip, running x86 on an unrelated instruction set via a translation layer, an instruction decoder if you wish.

There was real competition, 15 designers/manufacturers. All but one competitor is gone now.
 
Mar 10, 2020
420
385
5,070
The "Windows sux" narrative is popular with the hoi polloi, and spin doctors always reach for it. It's low-browed scapegoating.
To some extent Windows has sucked, 2 instances stand out as good (my own opinion) .
Windows 2000, relatively light, a well polished version. It was reliable, quick, an easy to use follow on from NT4/Win 9x. Good compatibility, Athlon and Pentium had few problems. It was all round good, not flashy, simply good.

Windows 7 was Vista done right. It was architected differently from NT4 and 2000; some third-party kernel-mode drivers were now in user space, so graphics driver crashes no longer brought down the whole system (a throwback to Windows NT 3.51, though back then the overhead was too large for the 486 processors of the time). Windows 7 just worked, and worked well enough for people, me included, not to bother with Windows 8. Only its going out of support pushed me to Windows 10.

The scheduler problem, if that is what it is, NEEDS AMD, Intel and Microsoft to communicate: communicate to get the software environment set up so the hardware can run at its best. That the operation of both Intel and AMD products is to some extent compromised means that customers do not get the full value of their purchases.
 

yankeeDDL

Distinguished
Feb 22, 2006
100
17
18,685
Meh, to me Zen 5 is still a bust. I'll be upgrading soon, just waiting for Arrow Lake and the 9000X3D to see where I can get the most gaming performance at a good price. If it all stinks I'll just get a 7800X3D and be happy.
Watch out for power consumption too.
If even a few FPS make a difference to you, by all means. But having a "furnace" next to you while playing may not be ideal.
The 14900K reaches 260W vs 68W for the 7800X3D (https://www.tomshardware.com/news/intel-core-i9-14900k-cpu-review); dissipating those extra ~200W could mean noise, heat, or just a lot of money in a watercooling system instead of the Wraith cooler, which is whisper quiet.
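To put rough numbers on that, here is a quick back-of-the-envelope sketch; the 260W/68W figures are from the linked review, while the gaming hours and electricity price are assumptions of mine:

```python
# Rough cost of the extra ~200 W of package power. The hours per day and
# the electricity price are assumptions, not figures from the review.
extra_watts = 260 - 68      # 14900K vs 7800X3D peak package power, per the linked review
hours_per_day = 3           # assumed gaming time
price_per_kwh = 0.30        # assumed electricity price, $/kWh

extra_kwh_per_year = extra_watts / 1000 * hours_per_day * 365
print(f"Extra energy: {extra_kwh_per_year:.0f} kWh/year")
print(f"Extra cost:   ${extra_kwh_per_year * price_per_kwh:.0f}/year")
# Roughly 210 kWh and ~$63 per year with these assumptions, plus the heat
# dumped into the room and the bigger cooler needed to remove it.
```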
 

Jagar123

Prominent
Dec 28, 2022
73
102
710
Watch out for power consumption too.
If even a few FPS make a difference to you, by all means. But having a "furnace" next to you while playing may not be ideal.
The 14900K reaches 260W vs 68W for the 7800X3D (https://www.tomshardware.com/news/intel-core-i9-14900k-cpu-review); dissipating those extra ~200W could mean noise, heat, or just a lot of money in a watercooling system instead of the Wraith cooler, which is whisper quiet.
Yes, power consumption is an important factor in the choice of CPU. If Arrow Lake doesn't improve its energy usage in a meaningful way then I won't be interested in it.

I am not interested in the current generation Intel CPUs, in part, due to their energy demands.
 
That's fine; it would be wrong to compare Windows ray tracing against Linux raster(?) rendering. Also I'd guess that Windows is using DX12 against Vulkan, though this is a guess and it isn't stated in the video.

Assume that in Cyberpunk the 205 fps is used as a target, a reference. What was demonstrated was that there was an improvement with the as-yet-unreleased Windows 11 build. Further improvements were achieved with BIOS adjustments. That the performance could be improved points to problems in Windows.

Yes, it's running a virtualised security instance and there are other overheads: different overheads for drivers, different overheads for the OS, etc., so it's hard to do apples-to-apples comparisons. Note that at the start of the video he wasn't claiming a cure for the woes; he was pointing out inconsistencies between two ASUS motherboards, then how, on the quicker of the two boards, VBS affected Cyberpunk… and then tweaks. None of that was needed (admittedly for a very limited selection of games) to achieve very good results on Linux. He doesn't say Linux is perfect, just that he likes it, but he isn't evangelising.
If Microsoft, AMD and Intel actually talk and iron out their needs/limitations perhaps we can all have a better Windows.
You can't use DirectX in any way shape or form in Linux. Microsoft won't allow it. The only thing you can do is a translation layer into Vulkan or OpenGL which takes care of all calls into the driver.

Whatever shenanigans Windows is doing that hurt performance, it's not like AMD did not know about them or could not take them into account. They mention as much in their communication. Wendell went and explained very well what all of this means (superficially) and how to change the settings, which do not exist in the Linux kernel. At least, not in the same exact way.

The last thing you mention is the critical element: these dumb companies need to talk to each other more and have better communication.

Regards.
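As an aside on the VBS variable in the quoted video: before trusting any VBS on/off comparison it's worth confirming what state the test box is actually in. A minimal sketch (Windows only) that queries the documented Win32_DeviceGuard CIM class through PowerShell; the status-code meanings in the last line are my reading of Microsoft's docs, so verify them on your own system:

```python
# Minimal sketch (Windows only): check whether Virtualization Based Security
# (VBS) is actually running before benchmarking with it "on" or "off".
import json
import subprocess

ps = (
    "Get-CimInstance -Namespace root\\Microsoft\\Windows\\DeviceGuard "
    "-ClassName Win32_DeviceGuard | "
    "Select-Object -ExpandProperty VirtualizationBasedSecurityStatus | ConvertTo-Json"
)
out = subprocess.run(["powershell", "-NoProfile", "-Command", ps],
                     capture_output=True, text=True, check=True)
status = json.loads(out.stdout)
# Assumed meanings per Microsoft's documentation: 0 = disabled,
# 1 = enabled but not running, 2 = enabled and running.
print({0: "VBS disabled", 1: "VBS enabled, not running", 2: "VBS running"}.get(status, status))
```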
 

TheHerald

Respectable
BANNED
Feb 15, 2024
1,633
502
2,060
Watch out for power consumption too.
If even a few FPS make a difference to you, by all means. But having a "furnace" next to you while playing may not be ideal.
The 14900K reaches 260W vs 68W for the 7800X3D (https://www.tomshardware.com/news/intel-core-i9-14900k-cpu-review); dissipating those extra ~200W could mean noise, heat, or just a lot of money in a watercooling system instead of the Wraith cooler, which is whisper quiet.
The 14900K reaches 260W in MT workloads, where it's over twice as fast as the 7800X3D. You can limit it to the same 68W and it will still be faster. Waiting longer for the 7800X3D to finish your MT tasks, while it keeps drawing power and creating heat, is not worth it.
 
  • Like
Reactions: KyaraM

SethNW

Honorable
Jul 27, 2019
40
22
10,535
What AMD fails to get is that the damage was already done. The Zen 5 architecture was mainly designed and built for EPYC in the datacenter, hence why benchmarks that test more datacenter-focused loads can get a 20-30% boost while the rest kind of falls flat. And for reasons that can only make sense to AMD, they rushed the thing out without even having reviewer guides and everything prepared. The software was hugely unfinished, and that from a company already known for its software having issues. I have seen drivers not getting properly installed on systems, leaving them without core parking, and quite a few hoops people had to go through, like Revo Uninstaller to clean-install the driver. Plus motherboards that defaulted the Auto setting for core parking to frequency instead of driver. Auto should automatically pick driver, and if that isn't present, fall back to a failsafe default. And if it needs a clean install, that should be built into the installer. Plus, if you have a feature like that, there should be a control panel that manages it.
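On the core parking point: if you want to see what your board/driver combination actually left you with, the Windows power plan exposes the parking floor. A minimal sketch that reads it via powercfg from Python; CPMINCORES is the documented alias for the "core parking min cores" setting, but it is normally hidden, so treat the alias and the percentage interpretation as assumptions to verify on your own build:

```python
# Minimal sketch (Windows only): read the "core parking min cores" value from
# the active power plan. 100 should mean no cores are allowed to park; lower
# values let Windows park cores down to that percentage. If powercfg rejects
# the alias, unhide it first: powercfg -attributes SUB_PROCESSOR CPMINCORES -ATTRIB_HIDE
import subprocess

result = subprocess.run(
    ["powercfg", "/query", "SCHEME_CURRENT", "SUB_PROCESSOR", "CPMINCORES"],
    capture_output=True, text=True, check=True,
)
for line in result.stdout.splitlines():
    # powercfg reports values as hex, e.g. "Current AC Power Setting Index: 0x00000064"
    if "Power Setting Index" in line:
        value = int(line.split(":")[1].strip(), 16)
        print(f"{line.strip()}  ->  {value}% minimum unparked cores")
```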

And also, how the hell they managed not to test the way reviewers do is beyond me. One would think they would watch or read what reviewers are doing and adapt. Many don't use the built-in benchmark because it isn't always really representative of the actual performance you will get most of the time while playing.

But as I said, the damage was already done. It is a lot of the same attitude the Radeon group had. Misrepresenting performance numbers at the initial announcement. Then overpricing the product at release so it gets slammed by negative reviews. Then doing damage control by either finding issues in drivers and such, because they released an unfinished product, or by cutting prices. Then, a few months later, they have a better product and a bunch of unnecessary negative reputation... Like, why? I used to joke that a monkey hitting random buttons would manage AMD better than AMD does itself. All these shenanigans aren't helping to prove it's just a joke.

Like, they really need to start talking with their partners and Microsoft to make sure both BIOSes and Windows are ready for new stuff. And with game developers, so they make sure things run well on their cards. Like Nvidia does; they are even willing to send their own developers out to make sure raytracing and DLSS are implemented. Meanwhile AMD couldn't even be bothered to talk with developers about Anti-Lag+ and instead made it work like a hack, doing memory injection into the game, triggering anti-cheat and getting people banned. This should never happen. Especially not with Ryzen; AMD really has a huge issue of not knowing how to be top dog.

And the second one is simply: you need better driver and software developers. Things need to just work. How you get there is AMD's problem. I don't care what obstacles they have, whether Windows isn't doing something correctly or whatever. The excuse that "it isn't us" never works; you need to deal with it or find a way where it is not an issue. Because people won't blame Microsoft or whoever; they installed an AMD driver, they will blame AMD. And AMD needs to stop acting like they are some kind of third party to themselves and this isn't their issue. Make software that can deal with the issues it encounters in the wild. Either deal with it, because life isn't fair and you should stop expecting it to be, or hire people who will.
 
  • Like
Reactions: KyaraM and Jagar123
I really hope this is a nothing burger, but I'll say it here just in case: I was told by some peeps that got the Ry9K CPUs that they're seeing SoC voltages hovering around 1.3V, which is what caused the original issues with Ry7K and the "exploding" or "combusting" CPUs situation, so please do keep an eye on those voltages, especially with high-clocked RAM.

Regards.
 

EzzyB

Great
Jul 12, 2024
47
37
60
>I assume the point is that the results suggest maybe the 9000 is actually pretty decent but windows is such a mess that it's unreasonably handicapped in those results.

The "Windows sux" narrative is popular with the hoi polloi, and spin doctors always reach for it. It's low-browed scapegoating.
It reminds me of the typical enthusiast with a 15% "stable overclock" (because Prime95 is all things) and, "!#$% Microsoft.... #$@#$^ Blue Screen...."
 
This is exactly why I buy Intel. More cores at every price point means I can have some heavy stuff running in the background while enjoying a flawless experience. Basically, a 300-euro 13700K offers you 16 cores of raw performance with high framerates for all your games.

And having so many cores means you can pull the power draw back while still being much, much faster than its competition for those transcoding workloads.
You are not exactly getting "more cores at every price point", since half of the 13700K's cores are single-threaded low-power cores, whereas AMD's are all full cores with SMT. Just describing the processor as having "16 cores" without specifying that half of them are E-cores is deceptive, and AMD's SMT implementation also gets more performance out of each dual-threaded core than Intel's, allowing them to match Intel's multithreaded performance using fewer total cores.

As a result, the closest comparison both price and performance-wise to an 8+8-core 13700K would be the 12-core Ryzen 7900X (or the 7900 with PBO enabled), which offer rather similar multithreaded and single-threaded performance, while generally drawing less power (and the 7900 drawing substantially less at stock settings, though with a bit lower performance). Either processor can perform faster than the other depending on the specific workload, and it's at least an exaggeration to claim that a 13700K will be "much faster" than the competition at multithreaded tasks, as they tend to perform roughly the same overall. And price-wise they all cost about the same, within about $20 of one another.
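For what it's worth, the raw thread arithmetic behind that pairing is simple; the core configurations below are the published specs, the rest is just counting:

```python
# Thread-count arithmetic behind the 13700K vs 7900X comparison.
# 13700K: 8 P-cores with Hyper-Threading + 8 single-threaded E-cores.
# 7900X:  12 Zen 4 cores, all with SMT.
parts = {
    "Core i7-13700K": 8 * 2 + 8 * 1,   # 16 cores, 24 threads
    "Ryzen 9 7900X":  12 * 2,          # 12 cores, 24 threads
}
for name, threads in parts.items():
    print(f"{name}: {threads} hardware threads")
# Both expose 24 threads, which is part of why their multithreaded results
# land close together despite the different core counts.
```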
 
  • Like
Reactions: Peksha and bit_user

sjkpublic

Reputable
Jul 9, 2021
79
29
4,560
Intel controls TB4, yes, but USB4 is based on TB3, which Intel donated to the USB-IF, royalty-free, for the USB4 specification.

AMD is a US-based company. Not sure where everyone is getting this misinformation about AMD. Intel, AMD, and Nvidia are all US companies that also do business in China, though there are US restrictions and export controls on US silicon to China.

As for needing the PPM/X3D driver on dual-CCD chips, I think it's necessary given the regression in CCD-to-CCD latency vs Zen 4. It's better to simply stop any dependent threads from being placed there, as the penalty is even higher now if the scheduler accidentally allows a cross-CCD thread placement. The CCDs have to communicate through the IOD at the IMC/RAM, so it's always been wise to prevent any cross-CCD processing. Workloads should be decently parallel, with no dependencies between CCDs. Gaming generally doesn't fit that workload type.
Good post. Yes, all the major players are US-based and they farm out the manufacturing process mostly to TSMC. To me it seems AMD is more of an international player. Just me.

My point on Thunderbolt 4 vs USB4 is that I have found most 40 Gbps devices work OK with Thunderbolt and fail with USB4. These include hubs and enclosures. My guess is that this is because the driver support comes from Intel for Thunderbolt, while for USB4 it comes from Microsoft. Not sure why USB4 companies are relying on MS. MS and the USB4 companies have yet to get it together in general. Although the USB4-attached flat panels seem OK. Video OK. Storage NOT.
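Coming back to the cross-CCD point quoted above: if the scheduler or the PPM driver isn't keeping a game on one CCD, you can force the issue yourself. A minimal sketch using psutil's CPU-affinity API; the process name and the assumption that logical CPUs 0-15 map to CCD0 are illustrative only, so check your own topology (Task Manager, lscpu) before using these numbers:

```python
# Minimal sketch: pin an already-running game to CCD0 so none of its threads
# ever have to hop between CCDs through the IOD.
# Assumes a dual-CCD Ryzen where logical CPUs 0-15 belong to CCD0 (8 cores + SMT).
import psutil

GAME_EXE = "cyberpunk2077.exe"      # hypothetical process name
CCD0_CPUS = list(range(16))         # logical CPUs assumed to belong to CCD0

for proc in psutil.process_iter(["name"]):
    if proc.info["name"] and proc.info["name"].lower() == GAME_EXE:
        proc.cpu_affinity(CCD0_CPUS)   # restrict the whole process to CCD0
        print(f"Pinned PID {proc.pid} to CPUs {CCD0_CPUS}")
```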
 

TheHerald

Respectable
BANNED
Feb 15, 2024
1,633
502
2,060
You are not exactly getting "more cores at every price point", since half of the 13700K's cores are single-threaded low-power cores, whereas AMD's are all full cores with SMT. Just describing the processor as having "16 cores" without specifying that half of them are E-cores is deceptive, and AMD's SMT implementation also gets more performance out of each dual-threaded core than Intel's, allowing them to match Intel's multithreaded performance using fewer total cores.

As a result, the closest comparison both price and performance-wise to an 8+8-core 13700K would be the 12-core Ryzen 7900X (or the 7900 with PBO enabled), which offer rather similar multithreaded and single-threaded performance, while generally drawing less power (and the 7900 drawing substantially less at stock settings, though with a bit lower performance). Either processor can perform faster than the other depending on the specific workload, and it's at least an exaggeration to claim that a 13700K will be "much faster" than the competition at multithreaded tasks, as they tend to perform roughly the same overall. And price-wise they all cost about the same, within about $20 of one another.
Oh well, besides the fact that you are wrong, I agree with everything else.

The closest competitor to the 13700K (in price, I mean) is the 7700X; there's $15-20 between them. The 7900X is $60 more expensive. The 7900X is closer in price to the 14700KF ($369 vs $359).
 

Hotrod2go

Prominent
Jun 12, 2023
217
59
660
I really hope this is a nothing burger, but I'll say it here just in case: I was told by some peeps that got the Ry9K CPUs that they're seeing SoC voltages hovering around 1.3V, which is what caused the original issues with Ry7K and the "exploding" or "combusting" CPUs situation, so please do keep an eye on those voltages, especially with high-clocked RAM.

Regards
Are they getting this high SoC voltage when the option is on Auto in the BIOS? If you recall, high SoC voltage (over 1.30V) was a thing in the early days of Zen 4, before AMD fixed it with an AGESA update.
 
Are they getting this high SoC voltage when the option is on Auto in the BIOS? If you recall, high SoC voltage (over 1.30V) was a thing in the early days of Zen 4, before AMD fixed it with an AGESA update.

Yes, they were running "out of the box" configs after updating their BIOS for Ry9K. They manually set the voltage for the SoC after noticing it.

A quick Google search returned no reports of high SoC voltages. Have you got a link Fran?

No; anecdotal level unfortunately, but throwing caution to the wind just in case.

Friends from a Discord Server that upgraded to Ry9K and updated their BIOS'es right away on release day noticed the high SoC voltage. 3 people in fact, using 3 different board makers: Gigacrap, AssRock and Asux.

Regards.
 
  • Like
Reactions: Hotrod2go

DrDocumentum

Reputable
Apr 10, 2020
12
20
4,515
No. You don't think AMD tested and developed this thing first and foremost for Windows? How can it be Windows' fault? It doesn't make any sense.
Zen 5 is an architecture made primarily for server workloads. It has good gaming performance too, but clearly it is not the best there.

The fact that on Linux the performance uplift is sizable shows how bad the Windows scheduler/kernel is. Also, games are generally poorly threaded, which explains why Intel's higher-clocked cores perform better than Ryzen's but, at the same time, explains the superior performance in productivity workloads that you can get from having only real cores instead of the poor E-cores that Intel provides.

The hybrid architecture that Intel uses is nonsense on high-performance desktop workstations (i7/i9 CPUs). It is a concept designed for mobile devices and has no justification in a desktop environment, other than that Intel's P-cores are so inefficient and hot that Intel can't provide CPUs with more than 8 of them.
 

Hotrod2go

Prominent
Jun 12, 2023
217
59
660
Yes, they were running "out of the box" configs after updating their BIOS for Ry9K. They manually set the voltage for the SoC after noticing it.



No; anecdotal level unfortunately, but throwing caution to the wind just in case.

Friends from a Discord Server that upgraded to Ry9K and updated their BIOS'es right away on release day noticed the high SoC voltage. 3 people in fact, using 3 different board makers: Gigacrap, AssRock and Asux.

Regards.
That's interesting, 'cause when I ran my 9700X on completely stock BIOS settings the first time, it never went that high. Those rigs must have had a poor implementation of AGESA.
 
  • Like
Reactions: -Fran-

TheHerald

Respectable
BANNED
Feb 15, 2024
1,633
502
2,060
Zen 5 is an architecture made primarily for server workloads. It has good gaming performance too, but clearly it is not the best there.

The fact that on Linux the performance uplift is sizable shows how bad the Windows scheduler/kernel is. Also, games are generally poorly threaded, which explains why Intel's higher-clocked cores perform better than Ryzen's but, at the same time, explains the superior performance in productivity workloads that you can get from having only real cores instead of the poor E-cores that Intel provides.

The hybrid architecture that Intel uses is nonsense on high-performance desktop workstations (i7/i9 CPUs). It is a concept designed for mobile devices and has no justification in a desktop environment, other than that Intel's P-cores are so inefficient and hot that Intel can't provide CPUs with more than 8 of them.
Yeah, right. Every sentence you typed is wrong. That's something not easily achievable. Kudos to you, sir.

It boggles the mind how people claim P-cores are so inefficient and hot when in fact not only are P-cores more efficient than E-cores, Intel does indeed offer more than 8 of them. They actually offer 56+ of them on Xeon parts. But yeah, they can't offer over 8! :LOL:

If you actually think that a 16 P-core CPU would run hotter and be less efficient than an 8+8 one, then you simply have no idea what the heck you are talking about.
 

bit_user

Titan
Ambassador
I see you're back to your old tricks of hilariously mismatched CPU comparisons.

It boggles the mind how people claim P-cores are so inefficient and hot when in fact not only are P-cores more efficient than E-cores,
Not at all power levels, or it would largely defeat the point of E-cores.

Intel does indeed offer more than 8 of them. They actually offer 56+ of them on Xeon parts. But yeah, they can't offer over 8! :LOL:
Let's give you the benefit of Emerald Rapids and look at the top spec model. You indeed get 64 P-cores, but with a base clock of just 1.9 GHz and a TDP of 350W.

By contrast, EPYC will give you 96 Zen 4 cores at a base clock of 2.4 GHz in 360 W. Or, you can have 128 Zen 4C cores at a base clock of 2.25 GHz at the same 360 W.

If you actually think that a 16 P-core CPU would run hotter and be less efficient than an 8+8 one, then you simply have no idea what the heck you are talking about.
The 16 P-core Xeon W5-2465X has a TDP of 200 W and a current street price of $1440.

It's sure not the E-cores killing the efficiency of Intel's K-series. I made this projection of Alder Lake performance with a different mix of core counts, based on the detailed perf/W measurements taken by ChipsAndCheese:

[Chart: projected x264 throughput vs. package power for Alder Lake with different P-core/E-core mixes]

At all power levels, the 8+8 configuration would outperform even 12P + 0E, on the x264 workload that the E-cores are worse at. If I did the same analysis for the 7-zip data, the 8+8 configuration would pwn the P-only setups even worse.

The data was taken from this article, but their conclusions are based on a flawed assumption of how multi-core frequency scaling works.
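For anyone curious how a projection like that is put together, here is a minimal sketch of the method only: give every core in a mix an equal slice of the package power budget, look up each core's throughput on its measured performance-vs-power curve, and sum. The curve points below are made-up placeholders, not the ChipsAndCheese data, and the even power split is itself a simplification:

```python
# Minimal sketch of projecting multi-core throughput for different core mixes
# from per-core performance-vs-power curves. Sample points are placeholders.
import numpy as np

# (watts, relative performance) samples for a single core of each type (made up).
P_CURVE = [(2, 0.8), (5, 1.6), (10, 2.4), (20, 3.1), (30, 3.5)]
E_CURVE = [(2, 0.7), (5, 1.2), (10, 1.6), (15, 1.8)]

def perf_at(curve, watts):
    """Interpolate single-core performance at a given per-core power."""
    xs, ys = zip(*curve)
    return np.interp(watts, xs, ys)   # clamps beyond the last sample

def projected_throughput(p_cores, e_cores, package_watts):
    """Split the package budget evenly per core and sum per-core throughput."""
    per_core = package_watts / (p_cores + e_cores)
    return p_cores * perf_at(P_CURVE, per_core) + e_cores * perf_at(E_CURVE, per_core)

for mix in [(8, 8), (10, 0), (12, 0)]:
    for watts in (65, 125, 250):
        print(mix, f"{watts} W", round(projected_throughput(*mix, watts), 2))
```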

 
Last edited:
  • Like
Reactions: KyaraM and Peksha

TheHerald

Respectable
BANNED
Feb 15, 2024
1,633
502
2,060
Not at all power levels, or it would largely defeat the point of E-cores.
The point of E-cores is performance per unit of die area.
At all power levels, the 8+8 configuration would outperform even 12P + 0E, on the x264 workload that the E-cores are worse at. If I did the same analysis for the 7-zip data, the 8+8 configuration would pwn the P-only setups even worse.

The data was taken from this article, but their conclusions are based on a flawed assumption of how multi-core frequency scaling works.
It's obvious from your graphs that the P-cores are more efficient than the E-cores, and therefore E-cores aren't there to increase efficiency or reduce temperatures or whatever the guy claimed. Since 12P is roughly equal to 8+8, it's obvious that 16P would be faster than 8+8 while consuming less power; therefore Intel has no issue with power or temperatures. It's not adding P-cores because they are huge.
Another nonsensical comparison. The 7800X3D is cheaper, plus each CPU has different strengths and weaknesses. They're not direct competitors.
I agree. Say that to the person that compared them. I didn't. I just replied to him.
 

bit_user

Titan
Ambassador
It's obvious from your graphs that the P-cores are more efficient than the E-cores
No, you're misreading it then. The 8P + 8E achieves the best perf/W at all power levels.

Since 12P is roughly equal to 8+8
Huh? No, 10P are roughly equivalent to the area of 8+8. I just included 12P out of curiosity.

it's obvious that 16P would be faster than 8+8 while consuming less power,
How is it obvious? P-cores are only more efficient above 3.1 GHz (x264) or 3.7 GHz (7zip).

[Chart: P-core vs. E-core performance and efficiency, x264]

[Chart: P-core vs. E-core performance and efficiency, 7-Zip]


So, you can't really talk about efficiency without talking about performance. If you want higher performance levels, then the P-cores do become more efficient than the E-cores, before the E-cores' top end. However, if you're content to have less performance per core, then the E-cores are definitely the way to go. This is the thinking behind Intel's 144 E-core Sierra Forest Xeons.
 
  • Like
Reactions: KyaraM and Peksha

TheHerald

Respectable
BANNED
Feb 15, 2024
1,633
502
2,060
No, you're misreading it then. The 8P + 8E achieves the best perf/W at all power levels.


Huh? No, 10P are roughly equivalent to the area of 8+8. I just included 12P out of curiosity.


How is it obvious? P-cores are only more efficient above 3.1 GHz (x264) or 3.7 GHz (7zip).
[Chart: P-core vs. E-core performance and efficiency, x264]

[Chart: P-core vs. E-core performance and efficiency, 7-Zip]

So, you can't really talk about efficiency without talking about performance. If you want higher performance levels, then the P-cores do become more efficient than the E-cores, before the E-cores' top end. However, if you're content to have less performance per core, then the E-cores are definitely the way to go. This is the thinking behind Intel's 144 E-core Sierra Forest Xeons.

You are not comparing core vs. core on efficiency; you are comparing efficiency per unit of die space. Obviously E-cores are better at that; that's the whole point of them. That's not the same as saying E-cores are more efficient than P-cores, because they are not.

My point is that since 12 P-cores have roughly the same efficiency as 8+8, then obviously 16 P-cores would be both faster and more efficient than 8+8.

At any reasonable wattage you will be using desktop chips at, a P-core is more efficient than an E-core. Even at, let's say, a very low power limit of 125W, that's about 8W per core. At the normal 250W these chips ship with, there is no contest.

Again, the guy is saying Intel doesn't add more than 8 P-cores because they run too hot and too inefficiently, which is completely untrue. 16 P-cores would be way faster and way more efficient than the 8+8 configuration at the power limits these chips are shipped at (around 250W). Why are you arguing otherwise? I don't get it...

It's a fundamental misunderstanding people have: they think adding more cores will increase power draw and temperatures and reduce efficiency, when the exact opposite is true. If that were the case, a 12900K would be more efficient than a 64 P-core Xeon. Obviously that's not the case.
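For reference, the per-core budget arithmetic behind that argument is trivial; the package limits are the ones mentioned above:

```python
# Per-core power budget at a fixed package limit: more cores means each core
# runs lower on its voltage/frequency curve, where it is more efficient.
for package_watts in (125, 250):
    for cores in (8, 16, 24, 64):
        print(f"{package_watts:>3} W over {cores:>2} cores -> {package_watts / cores:5.1f} W per core")
```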