News Asus 'Turbo Game Mode' arrives on its AM5 motherboards — second CCD and SMT toggles arrive for up to a 35% performance boost on X3D chips

setx

Distinguished
Dec 10, 2014
263
233
19,060
This approach allows users to quickly toggle between game-focused performance and multi-threaded setups for other tasks.
You have a funny definition of "quickly". Having to reboot and change BIOS settings is anything but.

This option is completely useless, as you can do exactly the same by changing one already-existing option on a 1-CCD CPU, or two options on 2-CCD ones...
 

bikemanI7

Reputable
Jan 9, 2020
194
14
4,595
I suppose I won't really need this UEFI BIOS update with AGESA 1.2.0.2a for my Ryzen 7 7700X when it's final from MSI? Or would it be advised that I install it anyway?

I'm still kinda new to the AMD platform, but I did update to the previous 1.2.0.2 AGESA firmware when it went final.
 

rluker5

Distinguished
Jun 23, 2014
901
574
19,760
Daniel Owens tested SMT off on his 7800X3D a year or so back, and it exacerbated AMDip in CPU-intensive games. But on average, games should do better, just with a lot more variance and stutter in some. Kind of like CFX.
 
Unless there's a software interface for this feature it doesn't sound like anything that users can't already do pretty simply with the BIOS. I suppose AMD will probably release more information when the 9800X3D launches.
 
  • Like
Reactions: Phaaze88

awake283

Proper
Jun 23, 2024
109
72
160
I understand the logic and mechanism, but I have no desire to toggle BIOS settings back and forth depending on whether I'm going to play a game or not. AM5 systems boot so slowly already (yes, I know about MCR). Maybe I lost the chip lottery, but my 7800X3D is already very finicky about things like PBO, the ASUS performance mode, messing with voltages, etc. The last thing I want is to introduce another variable.
 

criticaloftom

Commendable
Jun 14, 2022
29
10
1,535
Can't help thinking that any game old enough to run in only a single thread isn't going to need any help running on an AM5 Ryzen CPU.
Dwarf Fortress, for one; and yes, I would buy an enthusiast-level CPU and overclock it if it could handle such 'an old game' [16x16 embark].
 

Hotrod2go

Prominent
Jun 12, 2023
217
59
660
1.2.0.2a AGESA was not implemented well on my ASRock X670E with a 9700X. The 3.10 BIOS was up for a day or so, then pulled, and 3.08 is the current BIOS at the time of writing this comment. When running ZenTimings v1.32, the power table from which it derives voltages turned into a mess of erroneous data. I've never seen that before in years of using ZenTimings with AM5, so no wonder ASRock pulled the 3.10 BIOS already!
 
It would have been better if there were more granular control, like CCD optimisation and/or SMT optimisation.

And it would have been even better with charts and FPS data for games. Poor marketing as always by AMD.
 

bigdragon

Distinguished
Oct 19, 2011
1,142
609
20,160
The last time I engaged with a turbo feature it slowed the computer down. This isn't something I want to mess with, especially coming from ASUS -- a company known to have melted AMD CPUs and their sockets!
 
You have a funny definition of "quickly". Having to reboot and change BIOS settings is anything but.

This option is completely useless, as you can do exactly the same by changing one already-existing option on a 1-CCD CPU, or two options on 2-CCD ones...

They should have integrated an external switch: a plug on the back of the motherboard that you can run to the top of your desk. Then you could just boot with the switch in a given position. It's not hard to implement; if they knew this was coming, they could easily have added it. Then I would clearly support the words "easy to switch."
 

abufrejoval

Reputable
Jun 19, 2020
582
422
5,260
So this fixes "game developers holding it wrong"?

I own a 7950X3D, because I believe it's a near ideal compromise: while my hardware is funded via using it professionally in a CUDA workstation for machine learning experiments with an RTX 4090, I also use it to game after-hours.

Supposedly each workload will find and know the resources available and adapt itself.

So if it's a game with just a few threads, but a lot of relevant data being managed, it should seek out the V-cache cores and stick to them.

And if it's an HPC or general compute workload that simply has too little code or data locality to fit any cache, it will just try to keep as many cores (or threads) busy as it can to solve the problem.

The fact that V-cache cores won't scale to the max permissible clocks that one or a few threads could achieve on a non-V-cache core doesn't matter, because the workload will either:

a) prefer the higher-clocking cores on the non-V-cache CCD, or
b) be using so many cores that the permissible watts/heat allocation has already sunk below the max clocks available on the V-cache cores, so all cores run at wattage levels far below the V-cache cut-off.

And that's generally what I'm seeing: all-core workloads operate at clocks where the V-cache clock limits no longer matter, while most games naturally choose V-cache cores first while the other CCD idles.

But if a game doesn't look at the CPU and its core topology at all, there is a chance it will simply add some threads that Windows might then schedule on the high-clock/non-V-cache CCD. And if such a thread shares lots of data with another running on the V-cache CCD, that would incur latencies that might be noticeable.

But that's a game "holding it wrong".

Likewise when it comes to SMT vs full real cores.

SMT has been around for many, many years. A piece of software that doesn't understand or adapt sufficiently to the CPU resources available quite simply is a bad piece of software.

And there is a good chance, it won't understand the difference between P-cores and E-cores either, which might hamper its performance.

If adapting to that "bad software" means I essentially lose the ability to have those SMT resources, and the other CCD with its 8 extra cores, available for work without a reconfiguration and a reboot, there is zero chance I'd consider that a valuable improvement, if only because you could have done that with BIOS settings from the start of the platform.

If I get a tool better than 'numactl' on Linux or Process Lasso on Windows, perhaps that is nice to have.
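For what it's worth, on Linux the kind of per-process pinning that numactl or Process Lasso provide can already be scripted with the standard library. A minimal sketch (the core numbering is an assumption; verify which logical CPUs belong to the V-cache CCD with lscpu):

```python
import os

def pin_to_cores(pid: int, cores: set[int]) -> set[int]:
    """Restrict a process to the given CPU set and return what took effect.

    On a 7950X3D under Linux's usual enumeration, cores 0-7 (plus their SMT
    siblings 16-23) are assumed to be the V-cache CCD -- check with lscpu.
    """
    os.sched_setaffinity(pid, cores)   # Linux-only; not available on Windows
    return os.sched_getaffinity(pid)

# Demo: pin this process (pid 0 = self) to core 0, then restore.
original = os.sched_getaffinity(0)
print(pin_to_cores(0, {0}))            # -> {0}
os.sched_setaffinity(0, original)
```

A launcher script could do the same before exec'ing a game, which is essentially what `numactl --physcpubind` does.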

But if it means crippling the SoC by disabling half of its CCDs and all of its SMT resources, it's a hard pass, because it's about fixing a software issue in hardware.

Game developers: get real, get to know your stuff and adjust!

And most of the time, I'd just say that improving games from 300 FPS to 400 FPS won't have me lift a single finger: it's already quite beyond where I'd care.

The fact that the 7800X3D seems to outpace its 7950X3D cousin can only reflect defective software.

The bins of both the V-cache and the "normal" CCD on a 7950X3D need to be better than on either a 7800X or a 7800X3D; otherwise their max clocks and their average wattage per clock couldn't be as good.

Or from the reverse angle: if you were to take CCDs from average 7800X and 7800X3D CPUs and merge them onto a 7950X3D carrier, that combination would most likely be worse, or barely acceptable: they need to bin both variants of the 7950X3D significantly better.

And that means that any 7950X3D which has its second, non-V-cache CCD disabled in the BIOS should beat or match a 7800X3D (as a 7950X with the V-cache CCD disabled would beat a 7800X): that's simply binning logic!

If that's not what's measured by reviewers (and I'm not accusing any of them!), then that's a software issue that needs fixing by game authors.

Any encouragement on the hardware side only means that software developers will be even less motivated than they are today.

And they may be right (300 vs 400 FPS), or simply negligent. Nobody should encourage the latter.
 
Last edited:

phxrider

Distinguished
Oct 10, 2013
102
54
18,670
I did some screwing around with FC6 on my 7950X3D, with the built-in benchmark:

Numbers are avg/max/min. All are with memory set to EXPO1 with Corsair 6000 CL30 RAM, 7900XTX Hellhound at default clocks @ 3440x1440, ultra + all DXR on.

Turbo Game Mode:
135/158/116

Turbo Game Mode + SMT enabled:
128/154/103

16 cores "stock"
132/158/108

16 cores + SMT disabled
130/155/109 - frequent stutters

Turbo Game Mode was the best, but oddly, second best was just leaving it "stock". Enabling SMT in Turbo Game Mode was the biggest hit, surprisingly... And just disabling SMT while leaving all cores active was a disaster: the FPS was good (although not as good as with SMT on), but it introduced frequent stutters and hiccups.

So, bottom line: for the absolute best FPS, use Turbo Game Mode. But if you're like me and use the computer for work during the day and take an occasional 15-minute break or play a game at lunch, don't feel like you're losing a ton of performance by just leaving all cores and SMT active.

FWIW, I think the difference in actual gameplay is bigger than the benchmark shows - probably why so many of the tech sites don't use the built-in one when they test it for reviews.
 

phxrider

Distinguished
Oct 10, 2013
102
54
18,670
ASUS, what we need is a simple Windows app that can toggle the BIOS setting on or off and then prompt to reboot the computer to make the change effective. Going into the BIOS is a pain and will definitely limit the use of this setting; an app would make it as easy as I think is possible (since I highly doubt this would be possible to hot-switch).
 
But if it means crippling the SoC by disabling half of its CCDs and all of its SMT resources, it's a hard pass, because it's about fixing a software issue in hardware.
While that's a logical assumption, you're actually wrong about it being just a software problem. TPU did testing with a 7950X3D to simulate what a 7800X3D might look like (since it was launching later), and the results vary from game to game: sometimes software solutions were best, sometimes hardware was. This would indicate it's far more complex than just blaming one or the other. That being said, I'd love to see updated testing with all of the AGESA updates and the new turbo mode.

https://www.techpowerup.com/review/ryzen-7800x3d-performance-preview/16.html
 

abufrejoval

Reputable
Jun 19, 2020
582
422
5,260
While that's a logical assumption, you're actually wrong about it being just a software problem. TPU did testing with a 7950X3D to simulate what a 7800X3D might look like (since it was launching later), and the results vary from game to game: sometimes software solutions were best, sometimes hardware was. This would indicate it's far more complex than just blaming one or the other. That being said, I'd love to see updated testing with all of the AGESA updates and the new turbo mode.

https://www.techpowerup.com/review/ryzen-7800x3d-performance-preview/16.html
I see them confirming pretty much what I said as well: there are some bad apples in the games basket, which will deliver worse results with both CCDs enabled, because they don't choose their cores optimally.

Of course, plenty of those games might simply have been written for strictly monolithic CPUs, long before dual CCDs or even distinct core variants became a thing.

But it's definitely a software, or rather an application, problem, because applications could control which cores to use and whether those include SMT siblings: Windows provides APIs to let applications control that, but if an application doesn't, Windows will normally just spread things evenly.

AMD is trying to fix that with a scheduling driver that attempts to steer known game (and productivity?) workloads towards the "better matching" CCD. But I guess that only controls affinity and won't actually prohibit the use of extra cores on the other CCD, which, in some borderline cases, could again result in a worse gaming experience than scheduling that extra task or thread on the same single CCD where the rest of the game is running, because of the extra cross-CCD latencies.

And I'm pretty sure that AMD driver won't try to keep games from using SMT. I don't know if the Windows scheduler on its own will prefer free full cores over free SMT siblings; in any case, applications can choose to control that themselves.
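As a sketch of what an application could do itself: on Windows, SMT siblings are typically enumerated adjacently, with logical CPUs 2n and 2n+1 sharing a physical core (an assumption a real program should verify via GetLogicalProcessorInformationEx before relying on it). Under that assumption, a mask covering only one thread per physical core of the first CCD can be computed like this and handed to something like SetProcessAffinityMask:

```python
def full_core_mask(physical_cores: int) -> int:
    """Affinity mask selecting one logical CPU per physical core, assuming
    SMT siblings are enumerated adjacently (0/1, 2/3, ...) -- an assumption,
    not guaranteed on every system."""
    mask = 0
    for core in range(physical_cores):
        mask |= 1 << (2 * core)   # take the even-numbered sibling of each pair
    return mask

# First CCD of a 7950X3D: 8 physical cores -> logical CPUs 0, 2, 4, ..., 14.
print(hex(full_core_mask(8)))     # -> 0x5555
```

A game that built such a mask for itself would sidestep both the SMT question and the cross-CCD question without any BIOS toggling.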

To me the claim "a 7800X3D CPU is better than a 7950X3D for gaming" has always been wrong, because that somehow seems to imply that the 7800X3D contains better hardware.

I believe it doesn't and it's only a software issue with games that don't adjust properly to the hardware resources available. I'd claim that in fact the two CCDs on a 7950X3D are nearly guaranteed to be better bins than any CCD you'll find on a 7800X or a 7800X3D respectively.

Of course you can fix the defective choice by just buying a 7800X3D and enjoy better gaming with bad-apple games; similarly, you can now cripple your 7950X3D via the BIOS and most likely get even better gaming still (because of the better bin) with those bad-apple games, too.

But that's a bad solution for all those who do more than gaming on their 7950X3D system, and most importantly it's far too static as a BIOS setting, which essentially simulates different hardware to the OS and applications: who wants to reconfigure and reboot every time they switch games or workloads?

It's the software that needs fixing. Some of that might simply be added to the OS via some type of Process Lasso integration; some of it can only be fixed by the game, or potentially by a wrapper/patch which lies to the game about the hardware topology so it makes the optimal choice.

BTW: while I don't have a 7800X or 7800X3D chip to test against my 7950X3D, I do have 5950X, 5800X and 5800X3D (and 5800U) based systems, and I have observed that the 5950X clearly has two CCDs which are better binned than the 5800X's: either one of the CCDs in the 5950X will clock higher and use fewer watts than the single 5800X CCD. And they need to, because otherwise there is too little usable headroom in the TDP budget on a dual-CCD chip. (Of course you could get lucky and get an optimal-bin 5800X[3D], but bins and customer demands don't always match.)

I'm pretty sure it's the same for the Zen 4 CPUs and that both the 7950X and the 7950X3D receive better binned CCDs than their single CCD brethren.

So is hardware not matching software expectations a hardware or a software problem?
I'd say hardware is what it is and software needs to adjust.
 
Last edited:

saint_craig

Distinguished
Nov 11, 2008
19
0
18,510
Can't help thinking that any game old enough to run in only a single thread isn't going to need any help running on an AM5 Ryzen CPU.
Most games only utilize 4 cores, and even the newer ones really don't utilize more than 8. So turning off one of the chiplets and turning off SMT ensures you get real cores, not hyperthreaded ones, and lowers the processor's TDP, allowing your cores to boost to a higher frequency and, depending on cooling, for longer. So most games would see a benefit, even a single-threaded one, as it would be running at much higher clock rates.
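The power-headroom argument can be put in numbers with a toy calculation. (The 162 W figure is the approximate stock socket power limit, PPT, of a 7950X3D; the uncore share is a made-up illustrative value, and real boost behavior depends on far more than this.)

```python
PPT_WATTS = 162      # approximate stock package power limit (PPT) of a 7950X3D
UNCORE_WATTS = 30    # assumed IO-die/SoC overhead -- illustrative only

def watts_per_core(active_cores: int) -> float:
    """Rough per-core power budget left over for boosting."""
    return (PPT_WATTS - UNCORE_WATTS) / active_cores

print(round(watts_per_core(16), 2))  # all 16 cores active: 8.25 W each
print(round(watts_per_core(8), 2))   # one CCD disabled: 16.5 W each
```

Halving the active cores doubles the per-core budget in this toy model, which is the intuition behind disabling a CCD to let the remaining cores boost harder.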

Is this beneficial for someone who has adequate cooling and can already run at OC clock rates all day, all the time? Not really. But it would be of benefit to someone who doesn't overclock and just relies on the built-in Precision Boost the processor does out of the box.

Remember kids, not everyone OCs their machine or has massive WC loops.

But if you do... so you can hit high clock rates on all cores, simply turning off SMT would probably do the same for you as this BIOS config would.

For reference, I don't game; I do video encoding, and my software really can't use more than 12 cores, so my 16c/32t CPU shouldn't really have shown any benefit when I upgraded from 12c/24t. But it does.

That's because it's not all about what the cores are doing for the application; it's also about what the cores are doing for the OS, which has to handle esoteric things like disk I/O. So my application has 12 real cores to play with, the OS has 4 real cores to handle all the OS'ey things it needs to do, and I still have 16 SMT threads to handle other nonsense. And since the overall load on my system is 80% on 16c/32t as opposed to 100% on 12c/24t, the heat generated is lower, clock speeds boost higher, and everything finishes faster because of it. The same theory goes for what they are implementing for gaming.