News: The CPU Core Wars return: Intel Nova Lake leak teases monster 52 cores, DDR5-8000, and 32 PCIe lanes; the rumored chips would rival AMD's finest

I've been reading good things about Nova Lake, especially on the platform side.
It's the same as ARL with four differences:
  • 24 lanes of PCIe 5.0 from the CPU instead of 20 lanes + 4 PCIe 4.0.
  • DMI is PCIe 5.0 x4 instead of PCIe 4.0 x8, so the bandwidth doesn't change, though the link type does (quick math in the sketch just below this list).
  • 8 lanes of PCIe 5.0 off the chipset, instead of all PCIe 4.0, which is going to be limited by the aforementioned DMI bandwidth.
  • Primary PCIe slot can be split into 4/4/4/4 instead of being limited to 8/4/4
Otherwise, USB/SATA are identical, and we don't know the details regarding integrated TB/networking.
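For anyone wondering why that DMI change is bandwidth-neutral, here's a quick back-of-the-envelope check. This is only a sketch using the standard per-lane PCIe transfer rates and 128b/130b line encoding, ignoring packet/protocol overhead; the x8/x4 widths are the ones from the list above.

# Approximate usable bandwidth per lane, per direction (GB/s):
# raw GT/s divided by 8 bits per byte, scaled by 128b/130b encoding efficiency.
per_lane = {
    "PCIe 4.0": 16 / 8 * (128 / 130),  # ~1.97 GB/s
    "PCIe 5.0": 32 / 8 * (128 / 130),  # ~3.94 GB/s
}

arl_dmi = 8 * per_lane["PCIe 4.0"]  # ARL-style DMI: PCIe 4.0 x8
nvl_dmi = 4 * per_lane["PCIe 5.0"]  # rumored NVL DMI: PCIe 5.0 x4

print(f"PCIe 4.0 x8: {arl_dmi:.1f} GB/s")  # ~15.8 GB/s
print(f"PCIe 5.0 x4: {nvl_dmi:.1f} GB/s")  # ~15.8 GB/s, i.e. the same ballpark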
 
It's the same as ARL with four differences:
  • ...
  • DMI is PCIe 5.0 x4 instead of PCIe 4.0 x8, so the bandwidth doesn't change, though the link type does.
Boo!!!

Dang it! They did the wrong thing! They should've made the DMI link PCIe 5.0 x8. If they had, I think they really wouldn't have had to add any more CPU-direct lanes or upgrade any to PCIe 5.0.


  • 8 lanes of PCIe 5.0 off the chipset, instead of all PCIe 4.0, which is going to be limited by the aforementioned DMI bandwidth.
Exactly!

  • Primary PCIe slot can be split into 4/4/4/4 instead of being limited to 8/4/4
Yawn.

I guess some high-end NAS box could use this to have 6x PCIe 5.0 x4 NVMe slots, then get away with using a cheapy chipset.
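The lane budget for that hypothetical NAS box works out as follows. This is a sketch assuming the rumored 24 CPU-direct Gen5 lanes are wired as one bifurcatable x16 slot plus two x4 M.2 slots; the board layout is my assumption, not something from the leak.

# Hypothetical lane budget for a 6-drive NVMe NAS using only the rumored
# 24 CPU-direct PCIe 5.0 lanes (nothing hung off the chipset's Gen5 lanes).
cpu_gen5_lanes = 24
primary_slot = [4, 4, 4, 4]                      # x16 slot bifurcated 4/4/4/4 -> 4 SSDs
leftover = cpu_gen5_lanes - sum(primary_slot)    # 8 lanes remain
extra_m2_slots = leftover // 4                   # wired as two x4 M.2 slots -> 2 SSDs

print(f"PCIe 5.0 x4 NVMe slots: {len(primary_slot) + extra_m2_slots}")  # 6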
 
Competition is always good for consumers. Finally, Intel's entry-level CPUs (the 3 series) can say goodbye to quad cores; very good news for budget PCs.
 
People who have an LGA1700 board and want a modest upgrade for it will have the option to drop in Bartlett Lake. It should have up to 12 Raptor Cove cores on a monolithic die, essentially what Raptor Lake would've been if it were P-only.
But I like the idea of e-cores.
It sounds increasingly like Intel and Microsoft are totally uncoordinated on this.
I've seen that Intel is turning out chips with every variation of P-cores and E-cores.
 
But I like the idea of e-cores.
Okay, that wasn't clear to me from your post.

It sounds increasingly like Intel and Microsoft are totally uncoordinated on this.
I've seen that Intel is turning out chips with every variation of P-cores and E-cores.
Well, Intel is mostly following a strategy where laptop CPUs and the bigger desktop CPUs are hybrid (except for the Chromebook tier, which is E-only). For the most part, it's the lower-end desktops and the server/workstation CPUs that are P-only.

In Arrow Lake and beyond, I think those low-end P-only desktops are gone. So, it'll only be workstation and some server CPUs that are P-only.

Intel Thread Director? The HARDWARE DECIDES?
It doesn't really decide. What it does is classify, and then the OS kernel is the one who decides.
 
Well, you're not gonna do it....
Devs aren't gonna go back to old software and fix it; they're barely doing it for new software.

So it has to be automatic, or it won't happen at all.
Great, so my Win95 will benefit, ...

But seriously folks, ... I suppose the idea is mildly clever, but on the back of this here napkin I think it needs to track and rate several factors. So where does this run, if it's so automatic that it can't even be like a driver that loads at boot?

I guess if it *allows* newer software at the OS or even app level to request this or that core, the rest can be left to automagic assignments. It's almost like AI!
 
But seriously folks, ... I suppose the idea is mildly clever, but on the back of this here napkin I think it needs to track and rate several factors. So where does this run, if it's so automatic that it can't even be like a driver that loads at boot?

I guess if it *allows* newer software at the OS or even app level to request this or that core, the rest can be left to automagic assignments. It's almost like AI!
It's like I said: Thread Director merely classifies thread activity. The kernel can use this information to help it make better-informed thread scheduling decisions.

The kernel ultimately makes all thread scheduling decisions. So, if a kernel is unaware of Thread Director (such as in your example of Win 95, or in fact even up to and including Win 10), then the information Thread Director provides is simply ignored.
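If it helps, here's a toy model of that division of labor. The class labels and the placement policy are purely illustrative; this is not Intel's or Windows' actual interface, just a sketch of why an unaware kernel ends up ignoring the hint.

# Toy model: the hardware only classifies what a thread is doing; the
# kernel decides where it runs. Names and policy below are illustrative only.

def classify(thread_activity: str) -> str:
    """Stand-in for the hardware's per-thread classification."""
    return "demanding" if "render" in thread_activity else "background"

def kernel_pick_core(hint: str | None, kernel_aware: bool) -> str:
    """The kernel makes the placement decision; the hint only informs it."""
    if not kernel_aware or hint is None:
        return "any available core (hint ignored)"   # e.g. Win95 up through Win10
    return "P-core" if hint == "demanding" else "E-core"

for task in ("game render loop", "file indexer"):
    hint = classify(task)
    print(task, "->", kernel_pick_core(hint, kernel_aware=True))
    print(task, "->", kernel_pick_core(hint, kernel_aware=False))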
 
I'd be really interested in seeing the same graph with a Ryzen AI Max 395 mini PC.

That is pushing DDR5-8000. I'd like to see that with 128GB of RAM... interesting to see if the alleged on-board memory controller issues appear at those speeds on that platform.

I do think my next machine is a Ryzen AI MAX 395 with 128GB of DDR5-8000, if it can be had on an mITX board that has a good number of DP/HDMI and NVMe ports/slots. I'm not worried about obsolescence or lack of upgradability. It seems like it will be a screamer that will see me through the remainder of my professional working career (6.5 years and counting).

I've been using the same Intel i7-3770K CPU on an ASUS mITX board with 2 x 8GB RAM, 2 x 512GB SATA3 SSDs, and 2 x 1920x1200 monitors since 2012. It runs Win11 (not sure why my installation isn't puking on the lack of a TPM module) and has remained perfectly relevant for the intense programming/engineering that I do. I have only replaced the slim SATA3 DVD+RW drive (which I never use) and nothing else in almost 13 years.

As tempting as this new Intel CPU sounds, I think it would be quite a bit more expensive to build (CPU and motherboard) than the Ryzen AI MAX solution I listed above. It's been a while since I've had an AMD machine (an Athlon XP 3200+ 'Barton' and an Athlon64 3200+ 'Venice' being the last). I haven't intentionally sat out Ryzen; it's just that this 'Ivy Bridge'-based system of mine keeps on going. The Intel i5-655K 'Clarkdale' before it didn't last 3 years, though in that case it was the Gigabyte mITX motherboard that failed.
 
That is pushing DDR5-8000
It's LPDDR5, which is much higher-latency than regular DDR5.

... I'd like to see that with 128GB of RAM... interesting to see if the alleged on-board memory controller issues appear at those speeds on that platform.
It uses a different I/O die.

I've been using the same Intel i7-3770K CPU on an ASUS mITX board with 2 x 8GB RAM, 2 x 512GB SATA3 SSDs, and 2 x 1920x1200 monitors since 2012. It runs Win11 (not sure why my installation isn't puking on the lack of a TPM module) and has remained perfectly relevant for the intense programming/engineering that I do.
You're in for a treat, then. When I switched from a 6-core Broadwell Xeon workstation to Alder Lake i9, the performance difference in everyday usage was quite obvious.

For work, I'd always opt for ECC memory. With the i9, I could get that on a W680 chipset board. With Ryzen AI Max, you might need a Pro version, although I can't find confirmation on AMD's website that it supports ECC memory.
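On the ECC question, one quick sanity check (on Linux, at least) is whether the kernel's EDAC subsystem has registered any memory controllers. A minimal sketch, assuming an EDAC driver exists for the platform in question:

# ECC sanity check on Linux: the EDAC subsystem creates one mc* directory
# per memory controller it is actively monitoring. No entries usually
# means ECC isn't active (or no EDAC driver has bound to the platform).
from pathlib import Path

edac = Path("/sys/devices/system/edac/mc")
controllers = sorted(p.name for p in edac.glob("mc*")) if edac.exists() else []

if controllers:
    print("EDAC memory controllers:", ", ".join(controllers))
else:
    print("No EDAC memory controllers found; ECC likely not active.")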
 
Lower PCIe 5 speeds for SSDs is a MASSIVE non-issue though.
For most people, yes.

However, you should also consider that they put it there, so it would be reasonable to expect that they designed it properly. Especially when you compare it with a product from 2 years and several nodes earlier that performs much better.

To have a feature that's not performing up to spec is a bad look, and I'm sure it's relevant for someone. Maybe they're not using those PCIe lanes for an SSD? The CPU doesn't actually know or care what those lanes are used for; that part is up to the motherboard designer.
 
To make it a true water fight, leak versus leak, AMD has Medusa Ridge coming:
still AM5, 2 nm Zen 6 in '25, 128MB L3, 32 cores, expected to run at over 6.5 GHz.

Sources: AIDA64, Igor, and MLID.