News: Intel 13th-Gen Raptor Lake Specs, Rumored Release Date, Benchmarks, and More

new socket?!
here goes another pricey motherboard upgrade...
again...
There is no new socket for Raptor Lake; it's still the same LGA1700. However, being pin compatible may not mean the chip will work with every 600-series board. You don't have to look far: Intel infamously said that despite using the same socket, B460/H410 boards are not compatible with Rocket Lake. If you look further back, you will see this is not an isolated case.
 
Pinning affinity for every process was a lot of effort, but simply disabling the E-cores fixed all the problems. Now what I have is a very expensive and power-thirsty 8-core processor.
Was it a lot of useless work, or was the result better than just disabling the E-cores?
Also, if your CPU is power hungry with the E-cores disabled, then you are doing it wrong.
 
Was it a lot of useless work, or was the result better than just disabling the E-cores?
Also, if your CPU is power hungry with the E-cores disabled, then you are doing it wrong.
You missed the point by half the length of the equator.

I do VR and I understand perfectly what he's talking about. Having the scheduler throw a process onto a core that will give you latency spikes can be, quite literally, vomit inducing. It's not a good thing. His point is that he paid a lot of money for the whole platform and for 8 additional cores he is just not going to use for gaming, so he'll have to tax the 8 P-cores only. Plus, no AVX-512, so that's a lot of money spent for a task where he is not getting the most out of his purchase. Plus, enabling and disabling those cores is just a pain to do each time you want to do one or the other.

You can discuss semantics a lot, but that's a big problem with Intel's big.LITTLE-style approach.

I'll stop here, since I know you'll just keep dodging the critique and defending Intel.

Regards.
 
You missed the point by half the length of the equator.

I do VR and I understand perfectly what he's talking about. Having the scheduler throw a process onto a core that will give you latency spikes can be, quite literally, vomit inducing. It's not a good thing.
So how exactly do you prevent this from happening by using fewer cores?
ELI5

The problem is quite obviously that a game thread is being thrown to the E-cores by mistake, or a "background" thread is thrown to the P-cores, causing latency spikes.
Or a thread is being toggled between them, making it run at different speeds and causing nausea.

He used affinity pinning to fix that, and he also disabled the E-cores to fix that; how is asking which had better performance versus which was just easier a bad thing?
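For reference, affinity pinning can also be scripted rather than done by hand. Below is a minimal sketch using Python's psutil; the PID and the assumption that logical CPUs 0-15 are the P-core threads on an 8P+8E chip are illustrative only, so check your own topology first.

```python
import psutil

# Assumed layout for an 8P+8E Alder Lake part (e.g. a 12900K): logical CPUs
# 0-15 are the hyperthreaded P-cores, 16-23 the E-cores. This layout is an
# assumption; verify it on your own system before pinning anything.
P_CORE_THREADS = list(range(16))

GAME_PID = 12345  # hypothetical PID of the latency-sensitive VR process

game = psutil.Process(GAME_PID)
game.cpu_affinity(P_CORE_THREADS)  # keep the game's threads off the E-cores
print("Affinity now:", game.cpu_affinity())
```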
 
You missed the point by half the length of the equator.

I do VR and I understand perfectly what he's talking about. Having the scheduler throw a process onto a core that will give you latency spikes can be, quite literally, vomit inducing. It's not a good thing. His point is that he paid a lot of money for the whole platform and for 8 additional cores he is just not going to use for gaming, so he'll have to tax the 8 P-cores only. Plus, no AVX-512, so that's a lot of money spent for a task where he is not getting the most out of his purchase. Plus, enabling and disabling those cores is just a pain to do each time you want to do one or the other.

You can discuss semantics a lot, but that's a big problem with Intel's big.LITTLE-style approach.

I'll stop here, since I know you'll just keep dodging the critique and defending Intel.

Regards.


There is technically nothing wrong with the big/little approach. What you're describing is an issue with thread scheduling, not an issue with the CPU design. I use my 12900KF for VR as well and have zero frame-time spikes. I'm kind of curious whether he is using Windows 11 or still on 10; 11 has proper thread scheduling for a big/little architecture. A proper thread scheduler should be handing high-importance/high-load tasks to the performance cores and having the efficiency cores handle lower-priority, less-utilized threads.

Eight cores with 16 logical processors are enough for any VR game out there; they tend to be single-thread bound rather than multithread limited, just like most games, even if you factor in the extra thread load for VR-specific computations. To date, I haven't actually had a need to disable the efficiency cores, and doing so actually hurts performance rather than helping. I think there is a misunderstanding of how threads are handled and how P/E cores are utilized... or he has some odd system issue causing incorrect thread handling, though I'm not sure why that would be.
 
There is technically nothing wrong with the big/little approach. What you're describing is an issue with thread scheduling, not an issue with the CPU design. I use my 12900KF for VR as well and have zero frame-time spikes. I'm kind of curious whether he is using Windows 11 or still on 10; 11 has proper thread scheduling for a big/little architecture. A proper thread scheduler should be handing high-importance/high-load tasks to the performance cores and having the efficiency cores handle lower-priority, less-utilized threads. Eight cores with 16 logical processors are enough for any VR game out there; they tend to be single-thread bound rather than multithread limited, just like most games, even if you factor in the extra thread load for VR-specific computations. To date, I haven't actually had a need to disable the efficiency cores, and doing so actually hurts performance rather than helping. I think there is a misunderstanding of how threads are handled and how P/E cores are utilized... or he has some odd system issue causing incorrect thread handling, though I'm not sure why that would be.
Honestly, I think this is the case. Even though performance was fine under Windows 10, I did notice the CPU, a 12700K, acted more erratic in a way: more clock fluctuations, higher temperature spikes caused by them, more frame-time variability, more variable thread usage, etc. All of that is essentially gone in 11 now, and I can reliably predict which cores will be used in games and in what order. Which is kinda what Intel themselves announced, though. Microsoft just didn't bother to include Thread Director support in 10, so for full performance, go 11; or Linux, I guess. Average FPS are quite similar, but the rest does make a difference.
 
There is technically nothing wrong with the big/little approach. What you're describing is an issue with thread scheduling, not an issue with the CPU design. I use my 12900KF for VR as well and have zero frame-time spikes. I'm kind of curious whether he is using Windows 11 or still on 10; 11 has proper thread scheduling for a big/little architecture. A proper thread scheduler should be handing high-importance/high-load tasks to the performance cores and having the efficiency cores handle lower-priority, less-utilized threads. Eight cores with 16 logical processors are enough for any VR game out there; they tend to be single-thread bound rather than multithread limited, just like most games, even if you factor in the extra thread load for VR-specific computations. To date, I haven't actually had a need to disable the efficiency cores, and doing so actually hurts performance rather than helping. I think there is a misunderstanding of how threads are handled and how P/E cores are utilized... or he has some odd system issue causing incorrect thread handling, though I'm not sure why that would be.
You're not wrong on the whole, but the problem here is simple: big.LITTLE introduces scheduling issues. If you happen to be running a lot of demanding threads, the OS will inevitably move some to the lesser cores. That's just how it works. What kinds of threads, you ask? Take Beat Saber + mods with OBS streaming, for instance (not even counting Discord calls or simultaneous streaming in it). The compositor for the external viewport in Beat Saber uses a lot of CPU alongside the multitude of in-game effects, plus OBS streaming in the background with either software or even hardware encoding. The OS can't tell whether the threads from OBS are more important than the ones from Beat Saber (or its mods), so if it decides to give the main Beat Saber thread to one of the E-cores, you're in a world of hurt. This doesn't happen with my 5800X3D or even older Intel CPUs, for obvious reasons. I'm sure the OS will throw some threads to the E-cores once the P-cores are all occupied, because it can't know for sure which threads are meant to run on the P-cores and never on the E-cores. Also, figuring that out (which is doable) puts even more load on the CPU itself, which is kind of ironic. You remove all these (potential) issues by not having E-cores (or disabling them).
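To put rough numbers on that scenario, here is a toy model, not a measurement; the 90 Hz budget, the 7 ms of main-thread work, and the ~0.55x E-core speed factor are all illustrative assumptions:

```python
# Toy model: why one mis-scheduled frame hurts in VR.
# Assumptions (illustrative, not measured): a 90 Hz headset gives an
# ~11.1 ms frame budget, the game's main thread needs ~7 ms of P-core
# time per frame, and an E-core runs that thread ~0.55x as fast.
HZ = 90
budget_ms = 1000 / HZ          # ~11.1 ms per frame
p_core_ms = 7.0                # main-thread cost on a P-core
e_core_ms = p_core_ms / 0.55   # same work on an E-core: ~12.7 ms

for core, cost in (("P-core", p_core_ms), ("E-core", e_core_ms)):
    verdict = "missed frame -> visible hitch" if cost > budget_ms else "on time"
    print(f"{core}: {cost:.1f} ms of a {budget_ms:.1f} ms budget ({verdict})")
```

Under those assumptions, a single migration of the main thread to an E-core blows the frame budget, which is exactly the kind of intermittent spike being described.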

I'm not saying big.LITTLE is bad, but it has its limitations and trade-offs.

Regards.
 
I started off with Win11 (a fresh install), which had its own issues early on with VR irrespective of hybrid CPUs, but I had specific problems with the Pimax compositor, and their support recommended a wipe and a fresh install of Win10. I do realize Win11's thread scheduler works better with Intel's Thread Director. That said, both Win10 and 11 worked fine with general computing tasks; if there was additional latency when moving threads around, who could notice in such tasks? In VR it's a different matter entirely. I wish I had a screenshot of the CPU frame-time graph; it would make it immediately clear: 3 ms, 3 ms, 40 ms, 3 ms, 3 ms, 40 ms, and so on. Actual CPU usage was very low, 10-20%. Setting affinity did help with the latency issues, but it's a pain in the butt. Disabling the E-cores is easy and works great to solve that specific issue, and 8 P-cores are plenty for gaming. I do miss the E-cores when generating point clouds for assets, etc., which can use all the CPU it can get. If I'm doing a lot of that, then a quick trip to the BIOS is all it takes to re-enable them.
 
Let me reframe it this way... the fact that ADL works so well in so many applications is a testament to some amazing engineering at Intel. However, if Zen 4 is in the same ballpark as RPL for price, performance, and platform, then I would avoid the complexities of a hybrid CPU.
 
Still this super old, outdated photo...

And I think you reversed the names: S <-> P
  • Raptor Lake-S (65 W to 125 W desktop) and Raptor Lake-P (15 to 45 W mobile) confirmed
 
You're not wrong on the whole, but the problem here is simple: big.LITTLE introduces scheduling issues. If you happen to be running a lot of demanding threads, the OS will inevitably move some to the lesser cores. That's just how it works. What kinds of threads, you ask? Take Beat Saber + mods with OBS streaming, for instance (not even counting Discord calls or simultaneous streaming in it). The compositor for the external viewport in Beat Saber uses a lot of CPU alongside the multitude of in-game effects, plus OBS streaming in the background with either software or even hardware encoding. The OS can't tell whether the threads from OBS are more important than the ones from Beat Saber (or its mods), so if it decides to give the main Beat Saber thread to one of the E-cores, you're in a world of hurt. This doesn't happen with my 5800X3D or even older Intel CPUs, for obvious reasons. I'm sure the OS will throw some threads to the E-cores once the P-cores are all occupied, because it can't know for sure which threads are meant to run on the P-cores and never on the E-cores. Also, figuring that out (which is doable) puts even more load on the CPU itself, which is kind of ironic. You remove all these (potential) issues by not having E-cores (or disabling them).

I'm not saying big.LITTLE is bad, but it has its limitations and trade-offs.

Regards.
I agree. The big/little configuration is not new, since our phones have been using this approach for much longer. However, this switching technology introduces an additional software layer, which may be another point of failure. Ideally, the right cores, or all cores, should be used depending on the workload. I am not clear what the difference is between the software deployed on Android or iOS and on Windows that lets them switch between the big and small cores efficiently/accurately, but it is cowboy land on Windows. I don't believe most software is optimized to choose the right cores to use, and if we are depending on the Thread Director integration in Windows to provide that additional layer, then good luck getting this right. Microsoft doesn't have a good track record of doing things properly if you look at all the periodic releases where something gets broken.
 
I don't believe most software is optimized to choose the right cores to use, and if we are depending on the Thread Director integration in Windows to provide that additional layer, then good luck getting this right. Microsoft doesn't have a good track record of doing things properly if you look at all the periodic releases where something gets broken.
It's not really that bad, even on Windows 10, where the active window/app is typically prioritized onto the P-cores. The issue is with encoding or any task that automatically runs at low priority (these always end up on the E-cores). Process Lasso fixes that: with this tool, anywhere Microsoft can't be bothered to fix it, the user can. Also, the E-cores are no slouch, with Skylake-level performance. For everyday computer activity (web, video, office) the E-cores show no stutter.

Interested to hear whether Linux is handling this appropriately yet, but on Windows there are lots of custom solutions to work around software not automatically assigning cores the way a user would prefer. (I am a 12900K and 12600K user.)
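As an illustration of that kind of workaround, here is a minimal Process Lasso-style watcher sketched in Python with psutil. The process names and the 0-15/16-23 P/E-core split are hypothetical placeholders; adjust both to your own setup, and note it typically needs admin rights to touch other users' processes.

```python
import time
import psutil

# Assumed 12900K-style layout: logical CPUs 0-15 are P-core threads,
# 16-23 are E-cores. Verify your own topology before using this.
P_CORES = list(range(0, 16))
E_CORES = list(range(16, 24))

# Hypothetical process names; substitute whatever you want to steer.
RULES = {
    "beat saber.exe": P_CORES,  # latency-sensitive: keep on P-cores
    "obs64.exe": E_CORES,       # background encode: push to E-cores
}

while True:
    for proc in psutil.process_iter(["name"]):
        cores = RULES.get((proc.info["name"] or "").lower())
        if cores:
            try:
                if proc.cpu_affinity() != cores:
                    proc.cpu_affinity(cores)  # re-apply if something reset it
            except (psutil.AccessDenied, psutil.NoSuchProcess):
                pass  # skip processes we aren't allowed to touch
    time.sleep(5)
```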
 
I hope that in the upcoming CPU benchmarks, Tom's Hardware will publish offline benchmarks with all security patches disabled, by requesting special BIOSes from both Intel and AMD as far as they can.
 
I'm glad I got my i5-11400. I'll be able to comfortably skip this experimental phase, with AMD and Intel attempting to one-up each other by trying a bunch of different things that may or may not stick and get re-proportioned each generation to find balance, along with the teething (and cost) issues of new I/O.
 
Gotta get that 300 fps in CS:GO and Fortnite!!! ...despite playing on a 160 Hz screen and with 15 ms of internet latency, which makes that utterly pointless.
Having a faster game-engine tick rate still yields a slight improvement in overall responsiveness, even if most frames never get displayed. Also, competitive gamers often run with vsync off, in which case extreme frame rates reduce apparent tearing, since there is less overall movement between frames.
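A quick back-of-the-envelope sketch of the responsiveness point, assuming input is sampled once per engine tick and arrives at a uniformly random moment, so the average wait is half a tick:

```python
# Average delay between an input event and the next engine tick is half
# the tick interval, assuming uniformly distributed input timing.
for fps in (160, 300):
    tick_ms = 1000 / fps
    print(f"{fps} fps: tick every {tick_ms:.2f} ms, "
          f"average input-sampling delay ~{tick_ms / 2:.2f} ms")
```

That works out to roughly 1.5 ms saved on average going from 160 to 300 fps; small, but not strictly zero, which is the point being made.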
 
I think you should have used a WDC Raptor X. That was a unique and very cool drive.
I didn't see this article when it was first posted. I assume the drive is there now?

...because I noticed it and wanted to comment that I thought including a Western Digital Raptor HDD was a nice touch. I bought a 10K RPM SATA model in the final generation or so. It's a 2.5" drive in a big 3.5" heatsink/adapter, like the one in the pic. Good drive, but it's got nothing on SSDs, obviously.

Ironically, I bought it when SSDs were in the SLC/MLC era and I was (needlessly, it turns out) concerned about their endurance and data retention. Non-issues, back in those days.
 
Now what I have is a very expensive and power-thirsty 8-core processor.
Your E-cores use < 20% of the die area and virtually no power if disabled. So at least they're not costing you much on either count.

If you want more P-cores, you could check out the Sapphire Rapids workstation CPUs when they finally hit the market. Those will have only P-cores.
 
I'm finally upgrading from an i5-4670K that I've had since 2016, so this is all very exciting stuff to me. I've waited this long, so a couple more months is nothing, and it sounds like the wait is going to be well worth it. The 13900K is what I have my heart set on.
Heh, I'm upgrading from a Sandy Bridge system I've had since 2012. I'm going for an air-cooled i9-13900, but I will have to wait and see if it supports ECC RAM. I might have already gotten an Alder Lake by now if I could've sourced ECC DDR5 UDIMMs, but they've been very hard to find.
 
@PaulAlcorn The efficiency benchmarks you ran on the Zen 4 flagship processors were really impressive (I think you set the power target to 45 W). I hope you'll be able to do the same with Raptor Lake for comparison!
 
As I promised myself that my next custom-built PC will have Intel's latest ECC-RAM-supporting processor, I have been patiently waiting for Alder Lake's successor. While Raptor Lake processors are said to be available for purchase on 10/20/22, only three of the six SKUs support ECC memory, and all of those are 125 W base power. Of course, today's Intel support chat gave no word on when lower-base-power Raptor Lake SKUs will launch.

And even for the Alder Lake 65 W SKUs, I can only find ONE micro-ATX ECC-RAM-enabling W680 board that ever launched! https://www.asrockind.com/en-gb/IMB-X1314

Of course, what didn't help was that Intel never released the Raptor Lake-supporting W790 chipset:
https://videocardz.com/newz/intel-c...ts-hedt-sapphire-rapids-xeon-workstation-cpus

Now it may take months for 65 W Raptor Lake parts and compatible ECC-RAM-enabling micro-ATX motherboards to become available. And as workstation motherboard releases invariably follow Intel's release of ECC-supporting processors, I can thank Intel for further delaying my build plans.
 
As I promised myself that my next custom-built PC will have Intel's latest ECC-RAM-supporting processor, I have been patiently waiting for Alder Lake's successor. While Raptor Lake processors are said to be available for purchase on 10/20/22, only three of the six SKUs support ECC memory, and all of those are 125 W base power. Of course, today's Intel support chat gave no word on when lower-base-power Raptor Lake SKUs will launch.
Did you find any ECC DDR5 UDIMMs? Newegg doesn't even list them.

CDW has Crucial's listed, but not in stock. Crucial doesn't let you buy them from their website.

And Kingston recently listed some, but I can't find them for sale anywhere.

And even for the Alder Lake 65 W SKUs, I can only find ONE micro-ATX ECC-RAM-enabling W680 board that ever launched! https://www.asrockind.com/en-gb/IMB-X1314
Keep looking. Maybe there are some latecomers, or some that will appear following Raptor Lake.

I was planning on getting a Supermicro W680 board, but ATX-size. But not finding ECC RAM has delayed things a little. I'll probably see if I can wait and get a 700-series equivalent. Faster RAM speeds and a little more I/O might be worth it, if I don't end up needing to upgrade sooner.

Of course, what didn't help was that Intel never released the Raptor Lake-supporting W790 chipset:
https://videocardz.com/newz/intel-c...ts-hedt-sapphire-rapids-xeon-workstation-cpus
Uh, that's a workstation-class chipset. It's for the bigger, P-core-only CPUs with a different socket. When they said it was planned to launch alongside, that didn't mean it would support Raptor Lake, just that the new HEDT/workstation platform was to roll out at the same time.

Now it may take months for 65 W Raptor Lake parts and compatible ECC-RAM-enabling micro-ATX motherboards to become available. And as workstation motherboard releases invariably follow Intel's release of ECC-supporting processors, I can thank Intel for further delaying my build plans.
Good luck!
: )