News Intel's New Application Optimizer Yields Up to 31% Higher Frame Rates On i9-14900K

Status
Not open for further replies.
It's just like Optane: it worked perfectly with 12th gen (the H10 hybrid NAND/Optane device), but when I upgraded the platform to 13th gen, goodbye Optane acceleration. It would not boot properly, and I had to recover the drive to find out they purposely broke the ability.
Wait... so that just connects to the system like a regular NVMe drive and it stopped working on 13th gen CPUs? I always thought the "Optane memory support" was about Optane DIMMs - not NVMe drives.

Here's someone using an Optane SSD with an i9-13900K:


They had a problem with their adapter not supporting PCIe 4.0, but otherwise it seemed to work fine.
 
Wait... so that just connects to the system like a regular NVMe drive and it stopped working on 13th gen CPUs? I always thought the "Optane memory support" was about Optane DIMMs - not NVMe drives.
Optane Memory support is referring to the ability of the RST driver to allow hybrid or caching drives to work properly. Intel wonderfully named that stuff "Optane Memory" and the DIMMs were "Optane Persistent Memory" (not confusing at all right?).
 
Originally I thought this might be an actual hardware change, perhaps in the packaging or some feature not fused off. I don't believe that anymore; I think it is purely an "if cpu-id = x, enable feature" check. So someone might hack in the feature to enable 12th/13th/14th gen support in Windows 11. I would love to see if there is a way to enable the gains in Windows 10, but as Thread Director does not exist there, that's more of a stretch.
Given that it interacts with Thread Director, and Intel made some changes there between ADL and RPL, I could see 12th gen not being as straightforward. It does also sound like it's customized to the SKU configuration, but that should still make 13th gen very simple.
 
Optane Memory support is referring to the ability of the RST driver to allow hybrid or caching drives to work properly. Intel wonderfully named that stuff "Optane Memory" and the DIMMs were "Optane Persistent Memory" (not confusing at all right?).
Thanks for clarifying. I was worried Optane SSD support might somehow get disabled by newer Intel CPUs. It seemed almost unfathomable to me, but I just wanted to be sure.
 
Wait... so that just connects to the system like a regular NVMe drive and it stopped working on 13th gen CPUs? I always thought the "Optane memory support" was about Optane DIMMs - not NVMe drives.

Here's someone using an Optane SSD with an i9-13900K:

They had a problem with their adapter not supporting PCIe 4.0, but otherwise it seemed to work fine.
As thestryker correctly pointed out, it broke the ability to do its pseudo-RAID that let the 32GB of Optane accelerate the QLC NAND. I run a 905P drive now; it still works fine as a drive, but all the software magic no longer works.

I had been running that same H10 stick through all generations of Intel platforms (9900KF→10850K→11700K [same mobo]→12600K→13700K [same mobo]). I got pretty adept at figuring out how to re-enable the cache as well as recover data from the NAND portion (M.2 adapters don't like these weird double devices on a single M.2, each taking up 2x lanes). I was quite confused that most of the tools and functionality vanished when 13th gen was installed.

In hindsight, the H10/H20 were too late to market and added a lot of complexity, as well as additional points of failure, that outweighed the slight QD1 response benefit they brought compared to flagship NVMe drives. As a novelty they were fun to play with, but now, with support essentially dead and locked to legacy platforms, they are essentially e-waste. I don't trust them in an enclosure, and very few platforms see both of the separate drives. If I see a 1TB H20 being flogged for cheap I probably will pick it up, but I regret spending countless hours attempting to make them work.
 
Given that it interacts with Thread Director, and Intel made some changes there between ADL and RPL, I could see 12th gen not being as straightforward. It does also sound like it's customized to the SKU configuration, but that should still make 13th gen very simple.
Based on what you previously quoted:

"Why did Intel only choose to enable Intel® Application Optimization on select 14th Gen processors?
Settings within Intel® Application Optimization are custom determined for each supported processor, as they consider the number of P-cores, E-cores, and Intel® Hyperthreading Technology. Due to the massive amount of custom testing that went into the optimized setting parameters, specifically for gaming applications, Intel chose to align support for our gaming-focused processors."

It sounds to me like the model is managing both thread assignment and per-core frequency. The optimal frequency distribution will depend on several parameters specific to a given CPU model.

If you consider a normal, multi-threaded workload, the frequency of the CPU's cores is a simple function of how many are busy vs. the power envelope of the CPU. Once the applicable power limit is reached, there's an optimal frequency-scaling curve that ramps down the clock frequencies of the cores to ensure the greatest overall throughput. However, the shortcoming of this approach is that it treats the priority of the threads on each core as equal. In the event that they're not equal, what you'd rather do is place the most latency-sensitive threads on a P-core (ideally without another sibling thread) and crank up the clock frequency on that core, at the expense of some of the others. Since prioritization of threads isn't going to be a binary thing, that means clock frequency shouldn't be distributed as all-or-nothing, but proportionally doled out.
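To make the "proportionally doled out" idea concrete, here's a toy sketch (emphatically not Intel's actual algorithm; the function name, weights, and the flat GHz budget standing in for a real power model are all my own invention) of splitting a shared clock budget across busy cores by thread priority instead of equally:

```python
# Toy model (NOT Intel's algorithm): once the power limit is hit,
# distribute a shared clock budget across busy cores in proportion
# to the priority of the thread each core is running, instead of
# giving every core the same frequency.

def allocate_frequencies(priorities, f_min=0.8, f_max=5.5, budget=16.0):
    """priorities: per-core weights for the thread on each busy core.
    budget: total GHz available across all busy cores (a crude
    stand-in for the real power envelope). Every core gets at least
    f_min; the leftover headroom is split proportionally to priority,
    capped at f_max per core."""
    n = len(priorities)
    headroom = max(budget - f_min * n, 0.0)
    total_weight = sum(priorities) or 1.0
    return [min(f_min + headroom * (w / total_weight), f_max)
            for w in priorities]

# A latency-sensitive render thread (weight 4) vs. three background
# worker threads (weight 1 each) on four busy cores: the render
# core hits the f_max cap while the workers idle along much lower.
print(allocate_frequencies([4, 1, 1, 1]))
```

Equal weights reduce this to the ordinary all-core-boost case, which is the point: the per-thread priority signal (here, from Thread Director plus per-application profiling) is what lets the scheduler skew the curve.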

It's a very complex optimization problem. Of course, if you had enough cooling, achieving the highest all-core boost can greatly simplify matters. Better yet if you can remove the E-cores from the picture. However, this optimization feature seems aimed not at the type of gamers who would do that, but perhaps at people gaming on OEM-built machines. You can definitely see how OEMs would appreciate software that improves the gaming experience on the non-exotic-spec machines most people buy.
 
I got pretty adept at figuring out how to re-enable the cache as well as recover data from the NAND portion (M.2 adapters don't like these weird double devices on a single M.2, each taking up 2x lanes).
Oh wow. That's freaky. I never looked at those drives in detail. So, they're just like 2 independent NVMe drives that happen to share a PCB and M.2 slot?

If I see a 1TB H20 being flogged for cheap I probably will pick it up but I regret spending countless hours attempting to make them work.
OMG, why? How much Optane do those have? I see 100 GB P4800X selling for not much more than that.


100 GB is enough for a boot drive. The main downside (other than cabling and bulk) is probably power. I don't know how much that one uses at idle, but it's definitely one of the tradeoffs of using a P5800X.

If you want to risk eBay, here are some supposedly new 375 GB P4800X for < $200:

Some are available for slightly less than that, but ship from China. I've ordered stuff direct from China, but nothing costing > $30.
 
Oh wow. That's freaky. I never looked at those drives in detail. So, they're just like 2 independent NVMe drives that happen to share a PCB and M.2 slot?
Basically yes, and the H20 was the last one with 32GB Optane and 1TB QLC. It was a way to boost the performance of QLC while mitigating some of the power issues of Optane. Interesting idea, but I don't think it was ever going to be viable long term and was most likely a way for Intel to use up the Optane intended for the cache drives.
OMG, why? How much Optane do those have? I see 100 GB P4800X selling for not much more than that.

100 GB is enough for a boot drive. The main downside (other than cabling and bulk) is probably power. I don't know how much that one uses at idle, but it's definitely one of the tradeoffs of using a P5800X.
For that type of thing I got P1600X drives. I have another I need to test in a router box for power consumption, and depending on how that goes, I may just pull it for use in my next primary machine.

 
For that type of thing I got P1600X drives. I have another I need to test in a router box for power consumption, and depending on how that goes, I may just pull it for use in my next primary machine.

$71 for 118 GB. Power consumption is 1.7 W idle; 5.2 W active:


It wants some airflow, however. Max temperature is 70°C.

Not bad. Maybe I'll grab one.
 
$71 for 118 GB. Power consumption is 1.7 W idle; 5.2 W active:
Yeah, I just need to see how it compares operationally to a QLC NAND drive in my new router box in the real world, and I've been super lazy about disconnecting my power meter from my server.
It wants some airflow, however. Max temperature is 70°C.

Not bad. Maybe I'll grab one.
I'm running 6 in my server box. Two of them originally had no heatsinks, and those two ran upper 50s to lower 60s. This was also with minimal airflow, while the ones with heatsinks ran pretty much mid 40s (I originally had a pair in PCIe-to-M.2 adapters, which had heatsinks). I bought Thermalright TR-M.2 heatsinks for all of them, and currently the OS drive is at 51°C and the rest are at 36°C, 39°C, 41°C, 42°C, and 42°C (all rounded up). They're used as ZFS log/metadata drives.

edit: perhaps needless to say, but I'm a big fan of these drives for what they are
 