Discussion: Intel's upcoming Core i9-13900K Raptor Lake CPU benchmarked in games! *leak*

Hello,

It appears that we are getting several leaks/rumors about next-gen CPUs and GPUs from Intel and AMD. This time we have some GAMING benchmarks of Intel's upcoming Raptor Lake flagship SKU, a 13th Gen processor. The first gaming and synthetic performance benchmarks of Intel's Core i9-13900K Raptor Lake 5.5 GHz CPU have been leaked by Extreme Player on Bilibili (via HXL).

The Intel Core i9-13900K Raptor Lake CPU tested in the leaked benchmarks is a qualification sample (QS) that features 24 cores and 32 threads in an 8 P-core and 16 E-core configuration. The CPU carries a total of 36 MB of L3 cache and 32 MB of L2 cache for a combined 68 MB of 'Smart Cache'. It also comes with a base (PL1) TDP of 125W and an MTP (Maximum Turbo Power) of around 250W.

In terms of performance, we have more detailed gaming and synthetic benchmarks with the Intel Core i9-13900K (5.5 GHz) and Core i9-12900K (4.9 GHz) running at their stock frequencies on a Z690 platform with 32 GB of DDR5-6400 memory and a GeForce RTX 3090 Ti graphics card. The Core i9-13900K already has a 12.2% clock speed advantage over the Core i9-12900K, so it should be faster by default even if the architecture is the same. The extra uplift comes from the increased cache, which gets over a 50% bump (68 MB vs 44 MB combined).
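As a quick sanity check, here is a minimal sketch (plain Python, using only the figures quoted above from the leak, not independent measurements) that reproduces those two percentages:

```python
# Figures are taken from the leak as quoted above, not from independent testing.
clock_13900k, clock_12900k = 5.5, 4.9   # GHz, the boost clocks used in this comparison
cache_13900k, cache_12900k = 68, 44     # MB, combined L2 + L3 "Smart Cache"

clock_gain = (clock_13900k / clock_12900k - 1) * 100
cache_gain = (cache_13900k / cache_12900k - 1) * 100

print(f"Clock advantage: {clock_gain:.1f}%")   # ~12.2%
print(f"Cache advantage: {cache_gain:.1f}%")   # ~54.5%, i.e. "over a 50% bump"
```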

The gaming performance was tested in various titles at 2160p, 1440p, and 1080p. The average performance improvement works out to roughly 3-7% depending on resolution (just under 5% overall) for the Intel Core i9-13900K Raptor Lake CPU versus its Core i9-12900K Alder Lake predecessor.

There are only a few cases where the chip showed huge gains. The cache and higher clocks really seem to be benefiting the minimum frame rates, with around 25-30% jumps in a few titles such as PUBG and Forza Horizon 5, and up to 70-80% gains in Red Dead Redemption 2. o_O:D

One interesting comparison is the power consumption figures: the Intel Core i9-13900K draws up to 52% more power in games than the Core i9-12900K, and around 20% more on average across all three resolutions tested.

This means that the next-gen Raptor Lake CPU lineup is going to be more power-hungry than Alder Lake, even in games. :eek:(n)

The breakdown of average FPS at each resolution is as follows:
  • Intel Core i9-13900K vs Core i9-12900K at 1080p: 4.22% Faster Performance on Average
  • Intel Core i9-13900K vs Core i9-12900K at 1440p: 6.97% Faster Performance on Average
  • Intel Core i9-13900K vs Core i9-12900K at 2160p: 3.30% Faster Performance on Average
  • Intel Core i9-13900K vs Core i9-12900K All Res Avg: 4.83% Faster
As for the minimum FPS, the breakdown at each resolution is as below:
  • Intel Core i9-13900K vs Core i9-12900K at 1080p: 27.93% Faster Minimum FPS
  • Intel Core i9-13900K vs Core i9-12900K at 1440p: 21.83% Faster Minimum FPS
  • Intel Core i9-13900K vs Core i9-12900K at 2160p: 12.82% Faster Minimum FPS
  • Intel Core i9-13900K vs Core i9-12900K All Res Min Avg: 20.86% Faster
The maximum FPS breakdown for each resolution is as listed below:
  • Intel Core i9-13900K vs Core i9-12900K at 1080p: 6.29% Faster Maximum FPS
  • Intel Core i9-13900K vs Core i9-12900K at 1440p: 4.42% Faster Maximum FPS
  • Intel Core i9-13900K vs Core i9-12900K at 2160p: 2.58% Faster Maximum FPS
  • Intel Core i9-13900K vs Core i9-12900K All Res Max Avg: 4.43% Faster
Breaking down the POWER consumption figures for each resolution tested, we get the following results for the Intel Core i9-13900K vs Core i9-12900K (a quick script reproducing all of these averages follows the list):
  • Intel Core i9-13900K vs Core i9-12900K at 1080p: 19.1% Higher Power Consumption
  • Intel Core i9-13900K vs Core i9-12900K at 1440p: 19.8% Higher Power Consumption
  • Intel Core i9-13900K vs Core i9-12900K at 2160p: 26.2% Higher Power Consumption
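For anyone who wants to double-check the math, here is a small script that reproduces the "All Res" averages from the per-resolution numbers listed above (the power figure differs slightly from the ~20% quoted earlier, which presumably comes from a per-game rather than per-resolution average):

```python
# Per-resolution deltas (13900K vs 12900K) copied from the lists above: 1080p, 1440p, 2160p.
avg_fps = [4.22, 6.97, 3.30]
min_fps = [27.93, 21.83, 12.82]
max_fps = [6.29, 4.42, 2.58]
power   = [19.1, 19.8, 26.2]

def mean(values):
    return sum(values) / len(values)

print(f"Average FPS gain: {mean(avg_fps):.2f}%")  # 4.83%, matches the 'All Res Avg'
print(f"Minimum FPS gain: {mean(min_fps):.2f}%")  # 20.86%
print(f"Maximum FPS gain: {mean(max_fps):.2f}%")  # 4.43%
print(f"Power increase:   {mean(power):.1f}%")    # ~21.7% across resolutions
```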
[Eight benchmark screenshots from the leak (2022-07-17 captures)]
 
One interesting comparison is the power consumption figures: the Intel Core i9-13900K draws up to 52% more power in games than the Core i9-12900K, and around 20% more on average across all three resolutions tested.
Look at the top of any of the gaming benches. It's a 12900K overclocked to 4.9 GHz compared to a 13900K overclocked to 5.5 GHz; you would have to see a 12900K overclocked to 5.5 GHz to be able to say whether it draws more or less power at 5.5.

Also, you'd have to be stupid to do an extreme overclock just to play games... especially at 4K!
 
Good info!

Unfortunately, currently, E-cores are bad for gaming, so much so that reputable reviewers are recommending you turn off the E-cores if you want to ensure max performance out of your 12th-gen Intel CPU. They definitely make some great P-cores though! ;)
View: https://www.youtube.com/watch?v=wqB96Bdsb4M

That sucks if the E-cores are of no use in gaming. Thanks for the video, I will watch it soon. Though it might also be possible that most games need to be updated by developers to support the hybrid core architecture of Alder Lake CPUs and to take advantage of it?

Maybe some future game patches will improve performance, but that remains to be seen. Btw, Windows 11 has much better scheduler support as well, which might help with some perf gains IMO.

Look at the top of any of the gaming benches. It's a 12900K overclocked to 4.9 GHz compared to a 13900K overclocked to 5.5 GHz; you would have to see a 12900K overclocked to 5.5 GHz to be able to say whether it draws more or less power at 5.5.

Also, you'd have to be stupid to do an extreme overclock just to play games... especially at 4K!

Yes, I noticed that now. Makes sense, because like you just mentioned, not every gamer is going to do some extreme OC on their processor just to gain a few extra FPS. And at 4K, most games would be GPU-bound as well.

So it seems that the performance delta would be smaller for Raptor Lake if we remove the 5.5 GHz clock value and drop it down to match the 12900K's speed.

EDIT: It appears the 5.5 GHz is indeed the default/stock clock speed value out of the box, though?
 
EDIT: It appears the 5.5 GHz is indeed the default/stock clock speed value out of the box, though?
For up to 2 cores, not for all of them.
And this second part is just an assumption, but when the 2 cores boost to 5.5 GHz, the rest of them would run at lower clocks than normal to keep the chip as a whole within TDP, unless you have it set to unlimited TDP.

Sure, maybe the difference will be smaller if not overclocked, but then again most games where you get a noticeable difference, the ones that run at 100-150 FPS and not at 300 FPS, run one or two main threads instead of spreading the whole game across a multithreaded workload.

Also, the minimums where Raptor Lake showed a great increase should not be affected.
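To make the shared-power-budget idea concrete, here is a deliberately crude toy model. It is not Intel's actual boost algorithm, and the 250W limit and watts-per-GHz figure are round numbers invented purely for illustration; it just shows how a couple of favored cores can hold a high boost while a fully loaded chip settles much lower.

```python
# Toy illustration only: a fixed package power budget shared across active cores.
POWER_LIMIT_W = 250        # roughly the ~250W MTP mentioned in the original post
BASE_CLOCK_GHZ = 3.0
FAVORED_BOOST_GHZ = 5.5    # "up to 2 cores" boost discussed above
ALL_CORE_BOOST_GHZ = 4.9
WATTS_PER_GHZ = 3.5        # made-up linear power model (real silicon scales far worse)

def core_power(clock_ghz):
    return clock_ghz * WATTS_PER_GHZ

def effective_clocks(active_cores):
    clocks, budget = [], POWER_LIMIT_W
    for i in range(active_cores):
        target = FAVORED_BOOST_GHZ if i < 2 else ALL_CORE_BOOST_GHZ
        share = budget / (active_cores - i)
        # back the clock off until this core fits its share of the remaining budget
        while core_power(target) > share and target - 0.1 >= BASE_CLOCK_GHZ:
            target -= 0.1
        budget -= core_power(target)
        clocks.append(round(target, 1))
    return clocks

print(effective_clocks(8))   # lightly threaded: the favored cores hold their boost
print(effective_clocks(24))  # fully loaded: everything settles toward base clocks
```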
 

KyaraM

Final Fantasy 9 EndWalker? I always thought I play Final Fantasy 14 Endwalker!

Good info!

Unfortunately, currently, E-cores are bad for gaming, so much so that reputable reviewers are recommending you turn off the E-cores if you want to ensure max performance out of your 12th-gen Intel CPU. They definitely make some great P-cores though! ;)
View: https://www.youtube.com/watch?v=wqB96Bdsb4M
Outdated video is outdated. Also, if your concern is latency, I hope you are prepared to deactivate a chiplet on your Ryzen CPU for gaming, since that latency is a lot worse and even AMD recommends doing it and won't fix the issue, either. But, sure, Intel's the only one with issues, as always.

That sucks if the E-cores are of no use in gaming. Though it might also be possible that most games need to be updated by developers to support the hybrid core architecture of Alder Lake CPUs and to take advantage of it?
E-cores don't have to provide a direct benefit for games; that's not what they are there for. No idea why so many people have issues understanding that. They are there for multi-core performance and background tasks, not to run games on. And for that, they run fine. I see no reason to deactivate the E-cores on my 12700K. It has a lot more performance than I need already, and we are GPU-bound anyway.

For up to 2 cores, not for all of them.
And this second part is just an assumption, but when the 2 cores boost to 5.5 GHz, the rest of them would run at lower clocks than normal to keep the chip as a whole within TDP, unless you have it set to unlimited TDP.

Sure, maybe the difference will be smaller if not overclocked, but then again most games where you get a noticeable difference, the ones that run at 100-150 FPS and not at 300 FPS, run one or two main threads instead of spreading the whole game across a multithreaded workload.

Also, the minimums where Raptor Lake showed a great increase should not be affected.
Literally every news outlet places the 13900K at 5.5GHz all-core, 5.7-5.8GHz on 1-2 cores. No, it's not OC'd.
https://www.techpowerup.com/cpu-specs/core-i9-13900k.c2817
https://www.gizbot.com/computer/news/intel-core-i9-13900k-specifications-benchmarks-082057.html

4.9GHz all-core on the 12900K is also not OC'd. The chip has around 200MHz higher all-core turbo clocks than my 12700K, which runs an all-core boost of 4.7GHz. That would land the 12900K at 4.9GHz, same as in this test.
 
Outdated video is outdated.
Is there a newer video (or other proof) that refutes this video's claim? If there is, I'd love to see it. Please post!

I hope you are prepared to deactivate a chiplet on your Ryzen CPU for gaming, since that latency is a lot worse and even AMD recommends doing it and won't fix the issue, either. But, sure, Intel's the only one with issues, as always.
If it's possible with the new Ryzen 7000 CPUs I may do so, for testing. ;)

E-cores don't have to provide a direct benefit for games; that's not what they are there for. No idea why so many people have issues understanding that. They are there for multi-core performance and background tasks, not to run games on. And for that, they run fine. I see no reason to deactivate the E-cores on my 12700K. It has a lot more performance than I need already, and we are GPU-bound anyway.
No, they don't handle background tasks efficiently enough to allow the P-cores to work more on the game. That's the theory; the problem is that they currently DON'T do this.

I hope you are prepared to deactivate a chiplet on your Ryzen CPU for gaming, since that latency is a lot worse and even AMD recommends doing it and won't fix the issue, either. But, sure, Intel's the only one with issues, as always.
It's not about latency with the 12th (and 13th) gen Intel E-cores. It's about:
  1. The extremely poor performance of E-cores with gaming loads, and/or...
  2. Intel's Thread Director not knowing how to use the E-cores properly to benefit overall CPU performance, again, with respect to gaming.
My guess is that it's a little of both. It is well known and discussed, by many different computer reviewers, tweakers, and enthusiasts, that having Intel's E-cores enabled actually hurts gaming performance. In every game that I've seen (please point me to evidence showing otherwise), you will get better overall performance (higher FPS) if you just disable all your E-cores. Currently, I see Intel's P-cores as some of the best out there, but the E-cores seem kinda just thrown in for core-count numbers. I am absolutely sure this will improve as the Thread Director, games, and OSes get better at utilizing the E-cores properly, but we're not there yet.
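If you want to experiment with this without a trip into the BIOS, one rough alternative is to pin the game process to the P-cores only. A minimal sketch, assuming the psutil package is installed and that the P-core hyperthreads enumerate as logical CPUs 0-15 on an 8P+8E Alder Lake part (verify your own topology first); the executable name is just a placeholder:

```python
import psutil

# Assumption for an 8P+8E chip: 8 P-cores x 2 threads = logical CPUs 0-15,
# with the E-cores enumerated after them. Check your own layout before using.
P_CORE_LOGICAL_CPUS = list(range(16))

def pin_to_p_cores(process_name: str) -> None:
    """Restrict every running process with this name to the P-core logical CPUs."""
    for proc in psutil.process_iter(["name"]):
        if proc.info["name"] and proc.info["name"].lower() == process_name.lower():
            proc.cpu_affinity(P_CORE_LOGICAL_CPUS)
            print(f"Pinned PID {proc.pid} to logical CPUs {P_CORE_LOGICAL_CPUS}")

if __name__ == "__main__":
    pin_to_p_cores("game.exe")  # hypothetical executable name; may need admin rights
```

It's not the same as disabling the E-cores in firmware (the scheduler can still put other threads on them), but it's an easy way to test the effect on a per-game basis.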

I also don't want this thread to devolve into another Intel vs. AMD tennis match, so I won't be answering any more posts here, regarding Intel vs. AMD.
 
4.9GHz all-core on the 12900K is also not OC'd. The chip has around 200MHz higher all-core turbo clocks than my 12700K, which runs an all-core boost of 4.7GHz. That would land the 12900K at 4.9GHz, same as in this test.
The default behavior is for every core to clock as high as it can within the TDP limit, which means that it can run at 4.9GHz all-core under some loads, but if you have it locked down to always run at those clocks then it's still an overclock, because you don't let it run at lower clocks to stay within TDP.

When Intel had fixed clocks for a fixed number of cores and a fixed TDP, they would publish turbo tables to inform people.
 

KyaraM

The default behavior is for every core to clock as high as it can within the TDP limit, which means that it can run at 4.9GHz all-core under some loads, but if you have it locked down to always run at those clocks then it's still an overclock, because you don't let it run at lower clocks to stay within TDP.

When Intel had fixed clocks for a fixed number of cores and a fixed TDP, they would publish turbo tables to inform people.
Do you have any proof that it was locked down? For me, in, let's say, a 3h gaming session, even with the occasional dip in clock speeds, the average clock rate is still close enough to 4.7GHz on my 12700K that it's within rounding distance, for example 4,687.8MHz on average, with no locked clock rate and no OC. And yes, that is an actual average I have seen. So even if the average clock on that 12900K is, like, 4,877MHz or something because there were a couple of drops, it would still be rounded to 4.9GHz essentially everywhere in reviews; especially since gaming (and other) benchmarks, from my experience, seem to be more stable than actual games, with fewer, if any, drops. Therefore, again, it's not OC'd. It's simply rounding.
 
No, they don't handle background tasks efficiently enough to allow the P-cores to work more on the game. That's the theory; the problem is that they currently DON'T do this.

I can agree on this point. There is still some confusion regarding the purpose of these E-cores though, especially in games. I think Intel's Thread Director might also play a huge role in how these P- and E-cores are assigned for the OS to utilize.

E-cores still aren't as useful in games as additional P-cores. I think a 6+0 configuration outperforms a 4+8 (in gaming), and a 4+0 configuration outperforms 2+8. Perhaps if you have heavy background tasks going on, then 4+8 could pull ahead, but most people usually don't.

Found another Video (still going through it).

View: https://www.youtube.com/watch?v=RMWgOXqP0tc&ab_channel=HardwareUnboxed
 

shady28

It comes down to this chart - i.e. it depends on what GPU you have and which game. This is far more thorough testing than the mainstream sites do.

If you have a 2080 Ti or lesser GPU, the E-cores improve performance vs NVENC in most games, as those GPUs benefit from offloading the x264 encoding onto the E-cores instead of the GPU.

[Chart image: BUDbNtL.jpg]



Reference:

View: https://www.youtube.com/watch?v=s20HLQ-JbpU&t=464s
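For what it's worth, here is a rough sketch of the kind of split described above: keep the game on the P-cores and push a CPU (libx264) encode onto the E-cores instead of using NVENC. It assumes psutil plus an ffmpeg build with libx264, that the E-cores enumerate as logical CPUs 16-23 on an 8P+8E part, and that capture.mkv stands in for whatever capture source you actually use:

```python
import subprocess
import psutil

E_CORE_LOGICAL_CPUS = list(range(16, 24))  # assumption for an 8P+8E chip; adjust to your topology

# Illustrative ffmpeg invocation: a software x264 encode of a capture file.
encoder = subprocess.Popen([
    "ffmpeg", "-y",
    "-i", "capture.mkv",                      # placeholder input
    "-c:v", "libx264", "-preset", "faster", "-b:v", "6000k",
    "stream_out.mp4",
])

# Confine the encoder to the E-cores so the game keeps the P-cores to itself.
psutil.Process(encoder.pid).cpu_affinity(E_CORE_LOGICAL_CPUS)
```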
 

Endre

The 5.5GHz frequency, paired with bigger cache sizes, boosts both the performance and the power usage by a lot!
 
The 5.5GHz frequency, paired with bigger cache sizes, boosts both the performance and the power usage by a lot!

Yeah, that's true for most of the games, but the power usage with Raptor Lake has indeed skyrocketed, even more than the gain in FPS, IMO. But the question is, will an average gamer care about all this, or will they jump on this bandwagon without worrying much about power, heat, and the CPU's temperatures? Is it worth the upgrade?

I guess hardcore gamers who want the absolute best in performance, be it at 4K or any other resolution, won't mind shelling out cash on this next-gen platform, although these Raptor Lake chips are backwards compatible with existing 600-series motherboard chipsets.
 
It comes down to this chart - i.e. it depends on what GPU you have and which game. If you have a 2080 Ti or lesser GPU, the E-cores improve performance vs NVENC in most games, as those GPUs benefit from offloading the x264 encoding onto the E-cores instead of the GPU.
Interesting video. Thanks.
The differences in FPS numbers here are way too close to call wins, though. If you watch the video, a few seconds before he pauses to claim victory (either the first victory for just the P-cores or the subsequent ones for the E-cores), the FPS leader actually switches 3 times. This also doesn't test precisely what I mentioned. To make this test at least a bit more conclusive, a third run should've been done with the E-cores completely disabled, running both NVENC and the game on just the P-cores. It has already been shown that disabling E-cores nets a general performance increase, so that would be the proper next test.

Sorry, but I don't see any evidence to back up your claim that the E-cores help with streaming on a 2080 Ti or lesser card.
 

Endre

But the question is, will an average gamer care about all this, or will they jump on this bandwagon without worrying much about power, heat, and the CPU's temperatures? Is it worth the upgrade?

Being on the latest technology is always cool 😎.
People will go for it, even if they don't need it.
 
Yeah, there are peeps that are certainly willing to upgrade, regardless of the platform cost and other outcomes. I too want to upgrade to one of the next-gen platforms, be it AMD or Intel, but I'm not an early adopter.

I would rather wait for the new platforms to mature, after reading reviews and user feedback on the web. Any new hardware might be prone to some instability, especially AMD's AM5 platform, along with DDR5 and PCIe Gen 5, among other things.
 

Karadjgne

I don't get it. 13900k (or even a 12900k) paired with a 3090Ti. Why was 1080p even a consideration?

NVENC is inconclusive. Using NVENC means using GPU power only, unless you are encoding with x264, in which case it's mostly CPU with just a small amount of GPU. If using x264 encoding, the E-cores will be an advantage in Win11, but not in Win10. At 1080p, a 2080 Ti should have more than enough headroom in any game to use NVENC and not affect CPU performance; not so at 4K. Anyone who games heavily while simultaneously encoding with x264 and expects to get max FPS with no changes is a fool.

So I don't get it. Who really cares that CSGO gets 700+ FPS at 1080p, a 4.22% advantage over a 12900K, or whether NVENC will cost you FPS, which it shouldn't, seeing as CSGO only uses 2 P-cores to begin with, leaving 6 P-cores to deal with a pathetically simple encode.

And was that all done with ddr4, or ddr5, at what speeds, what ram sticks, using Gen4 NVMe or Gen3....

Seriously. There are enough variables that'll change those results that they only become reliable if you use the exact same setup, which nobody does. A 4% gain? That's barely outside any margin of error. It's a 12900K tweaked better than the 12900KS and called a 13900K. Alder Lake++.
 
I don't get it. 13900k (or even a 12900k) paired with a 3090Ti. Why was 1080p even a consideration?
Why would less information be considered a good thing?
He did 1080p but also 1440p and 4K, so why not be glad about the additional info even if you don't think it's relevant?

CSGO is a very popular title and people will want to see results for it; again, why would less info be better?
 

Karadjgne

Why would less information be considered a good thing?
He did 1080p but also 1440p and 4K, so why not be glad about the additional info even if you don't think it's relevant?
If I told you you have a flat tire, and plugging won't work because the tire is cut in the sidewall, how much more info do you realistically need? If I went into the full explanation of how a tire was manufactured, how pressure affects rubber at different temps, how going around corners flexes sidewalls, tire ratios and performance etc, is that then going to affect any results or decisions?

Flooding ppl with semi-useless info to justify a position isn't necessarily always a good thing. The games were chosen in order to justify the numbers, cherry-picked. Not a single one shows parity or an actual loss of FPS, like you'll find with pro reviews using standardized game test suites, which did show such results comparing 10th and 11th gen i9s.

CSGO at 1080p with that PC? Really? A 4% gain in FPS that's already so far beyond any refresh-rate capability is useless. It's not going to change latency, and it is too small to have any effect on gameplay whatsoever.

A single graph showing average gains at 4K, 1440p, and 1080p over an entire test suite of games would only show one thing: it's basically pointless to upgrade from a 12900K. But including a 12400F in the mix would show a more viable reason, since that's the single most popular 12th-gen CPU atm.

If you are going to flood ppl with info, at least make it relevant.