News Intel Core Ultra Series 3 CPUs could finally answer AMD's V-Cache — Nova Lake could boast massive 144MB L3

Is it just me?
I don't see the benefit of E cores or LPE cores on a desktop. It's plugged into a wall and why would you let a high power CPU idle? I'd rather have it complete a job faster with more P-cores, especially if it has an extra large L3 cache.

I can see the benefit on a battery-powered device, or a lower-power PC that idles a lot, like an everyday PC for browsing, email, and YouTube, but a high-performance desktop? You have to be kidding me.

Now, if it's high power and efficient when going full throttle, that I can see being worthwhile.

My i7-13700K gaming PC does not idle any lower than my former gaming PC with 5800X3D. In my arsenal, the efficiency crown goes to a mini-PC using an i5-12450H, if I don't count the SD7 Gen3+ in my tablet.
 
  • Like
Reactions: abufrejoval
Is it just me?
I don't see the benefit of E cores or LPE cores on a desktop. It's plugged into a wall and why would you let a high power CPU idle? I'd rather have it complete a job faster with more P-cores, especially if it has an extra large L3 cache.

I can see the benefit on a battery-powered device, or a lower-power PC that idles a lot, like an everyday PC for browsing, email, and YouTube, but a high-performance desktop? You have to be kidding me.

Now, if it's high power and efficient when going full throttle, that I can see being worthwhile.

My i7-13700K gaming PC does not idle any lower than my former gaming PC with 5800X3D. In my arsenal, the efficiency crown goes to a mini-PC using an i5-12450H, if I don't count the SD7 Gen3+ in my tablet.
I wonder if Intel added E-cores to desktop to squeeze out a little extra multi-threaded performance while staying within certain power envelopes?
 
Is it just me?
I don't see the benefit of E cores or LPE cores on a desktop. It's plugged into a wall and why would you let a high power CPU idle? I'd rather have it complete a job faster with more P-cores, especially if it has an extra large L3 cache.

I can see the benefit on a battery-powered device, or a lower-power PC that idles a lot, like an everyday PC for browsing, email, and YouTube, but a high-performance desktop? You have to be kidding me.

Now, if it's high power and efficient when going full throttle, that I can see being worthwhile.

My i7-13700K gaming PC does not idle any lower than my former gaming PC with 5800X3D. In my arsenal, the efficiency crown goes to a mini-PC using an i5-12450H, if I don't count the SD7 Gen3+ in my tablet.
All desktop CPUs (and GPUs, for that matter) have been power-budget (and thus thermal-budget) limited for the better part of a decade. Making components of a CPU die consume less power means you have more power available for other components. If you can shove time-insensitive background tasks (i.e. ones that will keep running regardless of how long execution takes, like OS background operations) onto a core that uses x watts less power, then that's x extra watts available for your primary core(s) to burst into and complete time-sensitive tasks faster.
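To make that concrete, here's a minimal sketch of the "shove background tasks onto cheaper cores" idea using Linux CPU affinity from Python. Which logical CPUs are E-cores is purely an assumption here (check `lscpu --extended` on your own machine); on Windows you'd reach for Task Manager affinity or a third-party affinity tool instead.

```python
import os
import subprocess

# Assumption for illustration only: logical CPUs 16-23 are the E-cores on this
# particular machine. Verify the real layout with `lscpu --extended`.
E_CORES = set(range(16, 24))

def run_on_e_cores(cmd):
    """Start a time-insensitive background job and restrict it to the assumed
    E-cores, leaving the shared power budget free for the P-cores to burst."""
    proc = subprocess.Popen(cmd)
    # Linux-only: limit which CPUs the scheduler may place this process on.
    os.sched_setaffinity(proc.pid, E_CORES)
    return proc

if __name__ == "__main__":
    # Example: an archival job nobody is waiting on.
    job = run_on_e_cores(["tar", "czf", "/tmp/backup.tar.gz", os.path.expanduser("~")])
    print("PID", job.pid, "pinned to", sorted(os.sched_getaffinity(job.pid)))
```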
 
Is it just me?
I don't see the benefit of E cores or LPE cores on a desktop. It's plugged into a wall and why would you let a high power CPU idle? I'd rather have it complete a job faster with more P-cores, especially if it has an extra large L3 cache.

I can see the benefit on a battery-powered device, or a lower-power PC that idles a lot, like an everyday PC for browsing, email, and YouTube, but a high-performance desktop? You have to be kidding me.
No, it's not just you, and I'd argue much the same.

But Intel had to use E-cores on their lower-power devices, because P-core performance collapsed at low single-digit watts, while Zen can reach much lower.

And since they had their Atoms ready for cut & paste, they ran the numbers and found that they could even pull ahead in some benchmarks with P- and E-cores combined, which was crucial for Intel: they couldn't afford to lose the #1 spot.
Now, if it's high power and efficient when going full throttle, that I can see being worthwhile.

My i7-13700K gaming PC does not idle any lower than my former gaming PC with 5800X3D. In my arsenal, the efficiency crown goes to a mini-PC using an i5-12450H, if I don't count the SD7 Gen3+ in my tablet.
The efficiency crown is a very complex topic, because the energy use of all the stuff beyond the SoC starts to enter the picture, including the power supply itself. And when you also want to allow for high peak performance (and wattage), efficiency at the low end suffers. And then there is still the purchase price, even if in the long run electricity might matter more, again depending on how you use the machine...
 
You can say that Nova Lake will have more 3D cache than AMD's "current" CPUs, but Nova Lake will come out around the same time as Zen 6, which MLID leaked is designed to incorporate as much as 240 MB, a lot more than Nova Lake will have.

"Designed" in this case means 240 MB is the maximum they can do; they may ship less. Which Zen 6 CPUs will get the full 240 MB isn't yet known.
 
After a few years of having systems with and without V-cache side-by-side, my personal experience is that it's overrated, most of the time.

Of course I prefer playing at 4k and ultra settings using an RTX 4090 and at that point everybody seems to agree that the main bottleneck is the GPU.

But I wonder how many mainline gamers will actually care about 400 vs 200FPS?

The main attraction about buying a V-cache chip was resting assured that you'd get the best no matter what, while it was certainly good enough for a bit of browsing and office work.

I know, because that's why I bought one, too: a 5800X3D, to replace a 5800X and with a 5950X side by side for some time.

It's also why I kept buying 5800X3D for the kids even after it had officially become last-gen tech, because it was still in the lead pack and by far good enough as the main bottleneck remained the GPU.

And when I do a full Linux kernel compile or an Android build, I get a cup of coffee anyway, a few seconds more or less don't really matter, while 8 extra cores mean I won't drink two. As it turned out the 5950X really wasn't bad enough at gaming to notice, but those extra cores were really cheap (at one point in time), and sometimes as useful as V-cache could be as well. So of course I went with a 7950X3D next to get both :)

So while Intel knew their CPUs were really good enough for gaming even without V-cache, AMD was able to wield those top-performer laurels against them just as relentlessly as Intel had used its #1 spot to keep AMD playing second fiddle. And #2 simply isn't a good place to be, as AMD knows full well from long suffering. Running hot and burning down didn't help Intel, either.

Yet I just can't see Intel claw back that #1 slot even with that cache, because they won't be able to sustain that, given their cost structure. AMD didn't just get where they are today because they managed to beat Intel once: they showed that they could consistently beat Intel generation after generation and even using the same socket for the longest time.

And they did that at a price that didn't break the bank, I've seen estimates of $20 extra production cost for a V-cache CCD.

V-cache did as much as double the performance on some specific HPC workloads and I've also heard EDA mentioned. And that's where it originated, the consumer market was a skunkworks project that turned out a gamer crown guarantee, while V-cache EPYCs helped pay for the R&D and production scale.

And that may be missing again from Intel: the ability to scale their variant of V-cache far and wide for economy, or they just risk doing another Lunar Lake, a great performer with the help of niche technology, but not a money maker across the board because it's too expensive to make.

What most people do not appreciate is that AMD won, and is winning, the x86 battle not just on a performance lead, but on price/performance at the production level. And without similar or even lower production cost for better performance, Intel doesn't stand a chance of catching up.
 
Is it just me?
I don't see the benefit of E cores or LPE cores on a desktop. It's plugged into a wall and why would you let a high power CPU idle? I'd rather have it complete a job faster with more P-cores, especially if it has an extra large L3 cache.

I can see the benefit on a battery-powered device, or a lower-power PC that idles a lot, like an everyday PC for browsing, email, and YouTube, but a high-performance desktop? You have to be kidding me.

Now, if it's high power and efficient when going full throttle, that I can see being worthwhile.

My i7-13700K gaming PC does not idle any lower than my former gaming PC with 5800X3D. In my arsenal, the efficiency crown goes to a mini-PC using an i5-12450H, if I don't count the SD7 Gen3+ in my tablet.
Intel's main customers for client PCs are big corporations that have thousands of PCs running all day long, and most of the time there's just somebody bored out of their mind looking at the screen or doing very simple things.
After a few years of having systems with and without V-cache side-by-side, my personal experience is that it's overrated, most of the time.

Of course I prefer playing at 4k and ultra settings using an RTX 4090 and at that point everybody seems to agree that the main bottleneck is the GPU.

But I wonder how many mainline gamers will actually care about 400 vs 200FPS?

The main attraction about buying a V-cache chip was resting assured that you'd get the best no matter what, while it was certainly good enough for a bit of browsing and office work.

I know, because that's why I bought one, too: a 5800X3D, to replace a 5800X and with a 5950X side by side for some time.

It's also why I kept buying 5800X3D for the kids even after it had officially become last-gen tech, because it was still in the lead pack and by far good enough as the main bottleneck remained the GPU.

And when I do a full Linux kernel compile or an Android build, I get a cup of coffee anyway, a few seconds more or less don't really matter, while 8 extra cores mean I won't drink two. As it turned out the 5950X really wasn't bad enough at gaming to notice, but those extra cores were really cheap (at one point in time), and sometimes as useful as V-cache could be as well. So of course I went with a 7950X3D next to get both :)

So while Intel knew their CPUs were really good enough for gaming even without V-cache, AMD was able to wield those top-performer laurels against them just as relentlessly as Intel had used its #1 spot to keep AMD playing second fiddle. And #2 simply isn't a good place to be, as AMD knows full well from long suffering. Running hot and burning down didn't help Intel, either.

Yet I just can't see Intel claw back that #1 slot even with that cache, because they won't be able to sustain that, given their cost structure. AMD didn't just get where they are today because they managed to beat Intel once: they showed that they could consistently beat Intel generation after generation and even using the same socket for the longest time.

And they did that at a price that didn't break the bank, I've seen estimates of $20 extra production cost for a V-cache CCD.

V-cache did as much as double the performance on some specific HPC workloads and I've also heard EDA mentioned. And that's where it originated, the consumer market was a skunkworks project that turned out a gamer crown guarantee, while V-cache EPYCs helped pay for the R&D and production scale.

And that may be missing again from Intel: the ability to scale their variant of V-cache far and wide for economy, or they just risk doing another Lunar Lake, a great performer with the help of niche technology, but not a money maker across the board because it's too expensive to make.

What most people do not appreciate is that AMD won, and is winning, the x86 battle not just on a performance lead, but on price/performance at the production level. And without similar or even lower production cost for better performance, Intel doesn't stand a chance of catching up.
AMD is not making their CPUs cheaply; last year they had ~12% margin from their desktop segment, and this year TSMC US is going to be anywhere from 5 to 20% more expensive. AMD is one step away from having to pay on top of every desktop CPU they sell.
 
  • Like
Reactions: usertests
But I wonder how many mainline gamers will actually care about 400 vs 200FPS?
I've wondered about this for some time. The community has largely judged the "best" gaming CPU on benchmarks with a 4090 or 5090 running games at 1080p. The FPS in almost every game is so far off the scale that, even though real games are used, the benchmark is essentially synthetic. No one is going to play CS at 1080p and 600 FPS.

There's a point somewhere beyond which the human eye can't tell the difference. Not sure where that is; for me it seems to be MUCH lower, around 120 FPS. That said, I talked to a guy on a forum yesterday who was threatening to throw his system in the trash because it wouldn't break 130. 😜 Mind you, I'm really old and remember when 30 FPS was the holy grail of 3D gaming.
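The frame-time arithmetic shows why the top end matters so little; a quick sanity check (plain arithmetic, nothing more):

```python
# Frame time in milliseconds for a few FPS jumps: the absolute gain shrinks
# rapidly as FPS climbs, which is why 200 -> 400 FPS is far harder to perceive
# than 30 -> 60 FPS ever was.
for low, high in [(30, 60), (60, 120), (120, 130), (200, 400)]:
    t_low, t_high = 1000 / low, 1000 / high
    print(f"{low:>3} -> {high:>3} FPS: {t_low:6.2f} ms -> {t_high:5.2f} ms "
          f"per frame (saves {t_low - t_high:5.2f} ms)")
```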
So while Intel knew their CPUs were really good enough for gaming even without V-cache, AMD was able to wield those top-performer laurels against them just as relentlessly as Intel had used its #1 spot to keep AMD playing second fiddle. And #2 simply isn't a good place to be, as AMD knows full well from long suffering. Running hot and burning down didn't help Intel, either.
I'd take it one step further. The AMD X3D chips were designed specifically to win those benchmarks. As much as everyone seems to pan the latest Intel chips, they were actually quite efficient and better performers in most things other than gaming. They were marginally better chips, just without the cache.

So while I understand the reason Tom's tests CPUs the way they do (under other conditions there just isn't enough separation in performance), it's really not as useful as it could be. I'd like to see them, as you say, also run at 1440p/4K ultra, with perhaps 1440p becoming the new "standard". The results would be more useful.
 
AMD is not making their CPUs cheaply; last year they had ~12% margin from their desktop segment, and this year TSMC US is going to be anywhere from 5 to 20% more expensive. AMD is one step away from having to pay on top of every desktop CPU they sell.
Perhaps AMD is selling desktop parts at low margin, but that scale then still helps them make EPYCs much cheaper than they sell those. It's the 50% server margin market share that has Lisa Su smiling so brightly.

Intel loses, or makes much less, on both, and TSMC can't ultimately price every CPU maker out of the market; even AI chips need a surviving CPU, and Nvidia is working on making that Grace.

Of course TSMC isn't just greedy; they need the money for the next gen, and that's the Moore's Law discussion, where not everything technically possible is economically viable, ultimately even for the sole surviving #1.
 
Perhaps AMD is selling desktop parts at low margin, but that scale then still helps them make EPYCs much cheaper than they sell those. It's the 50% server margin that has Lisa Su smiling so brightly.
Yeah, it's going to be negative margin with the TSMC US prices. How does negative scale?!
TSMC US is going to be more expensive, not just for desktop CPUs... AMD servers made in the US could have 20% less margin next round.
 
  • Like
Reactions: abufrejoval
I'd take it one step further. The AMD X3D chips were designed specifically to win those benchmarks.
The V-cache CCDs were definitely designed for EPYC, not for gaming.

Putting them onto a desktop die carrier with a desktop IOD was AMD engineers having fun, while the ability to win benchmarks then became more of a "marketing design", even if there were plenty of engineers sweating blood to make that a product.

It definitely turned out a strategic weapon for AMD, but I remain sceptical that technical ability alone is enough for Intel to fight back successfully.
 
  • Like
Reactions: EzzyB
Is it just me?
I don't see the benefit of E cores or LPE cores on a desktop. It's plugged into a wall and why would you let a high power CPU idle? I'd rather have it complete a job faster with more P-cores, especially if it has an extra large L3 cache.

I can see the benefit on a battery-powered device, or a lower-power PC that idles a lot, like an everyday PC for browsing, email, and YouTube, but a high-performance desktop? You have to be kidding me.

Now, if it's high power and efficient when going full throttle, that I can see being worthwhile.

My i7-13700K gaming PC does not idle any lower than my former gaming PC with 5800X3D. In my arsenal, the efficiency crown goes to a mini-PC using an i5-12450H, if I don't count the SD7 Gen3+ in my tablet.
Agreed but what I also find odd is that I've seen some laptops with 8 E-cores and 2 P-cores (e.g. i3-1220P, i5-1235U). Like why would you need 8 cores to manage Windows? And have only 2 P-cores for doing actual work, even if it's just spreadsheets and multiple Chrome tabs open? The annoying part is that retailers like Best Buy and Office Depot advertise these laptops with a "10-core CPU" bullet point.

I'm also surprised CPU manufacturers haven't looked at the automotive industry regarding efficiency. Cylinder deactivation has been around for quite some time and it would be nice to be able to manually deactivate, say, half my CPU cores, if I know I'm traveling and need max battery life for checking email and/or watching Netflix.
 
Yeah, it's going to be negative margin with the TSMC US prices. How does negative scale?!
TSMC US is going to be more expensive, not just for desktop CPUs... AMD servers made in the US could have 20% less margin next round.
Sorry should have been 50% server market share.

No idea if AMD has 50% margin on EPYC, too. But it's EPYCs that swell AMD's pockets, desktops help with scale and hanging Intel out to dry.

But V-cache EPYCs certainly sell at much higher margin than the $20 per CCD production cost I've seen quoted for the 1st generation V-cache.
 
Agreed but what I also find odd is that I've seen some laptops with 8 E-cores and 2 P-cores (e.g. i3-1220P, i5-1235U). Like why would you need 8 cores to manage Windows? And have only 2 P-cores for doing actual work, even if it's just spreadsheets and multiple Chrome tabs open? The annoying part is that retailers like Best Buy and Office Depot advertise these laptops with a "10-core CPU" bullet point.
The issue is the minimum wattage you need to run a P-core at all... or to have it run fast enough to beat an E-core.

The relationship between clocks and wattage in CMOS isn't linear; it's a knee-shaped curve. But the knee's shape, size, and position follow very different power-voltage/clock-performance curves for different cores, and the curve also cuts off at different points for different designs, both at the top and at the bottom.

So, say, at 2 watts a P-core may simply not run, or may deliver less than an E-core at that wattage. And that means that if your laptop is a 10-15 watt design, there simply aren't enough watts to make 4 or 8 P-cores turn over or deliver, while E-cores a) run at all and b) actually do better at that point, because the curves intersect.
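Here's a toy Python model of what that intersection looks like. Every constant (IPC ratio, voltage range, frequency floor, leakage) is invented purely to illustrate the shape of the argument, not real Intel data: dynamic CMOS power goes roughly with V²·f and V has to climb with f, so a big core falls off a cliff once the budget gets tight.

```python
# Toy model of achievable performance vs. power budget for a hypothetical
# P-core and E-core. All constants are made up for illustration.
def perf_at_budget(budget_w, ipc, f_min, f_max, v_min, v_max, leak_w):
    """Best relative performance (ipc * GHz) reachable within budget_w watts.

    Power is modeled as leak_w + V^2 * f, with voltage rising linearly from
    v_min to v_max as frequency sweeps f_min..f_max. Frequencies below f_min
    aren't considered: if even the floor doesn't fit the budget, the core
    simply can't run and the result stays 0.
    """
    best = 0.0
    steps = 200
    for i in range(steps + 1):
        f = f_min + (f_max - f_min) * i / steps
        v = v_min + (v_max - v_min) * i / steps
        if leak_w + v * v * f <= budget_w:
            best = max(best, ipc * f)
    return best

for budget in (2, 4, 6, 10, 20):
    # Hypothetical: the P-core has ~1.5x the IPC but a higher frequency floor,
    # higher voltages and more leakage than the E-core.
    p = perf_at_budget(budget, ipc=1.5, f_min=1.2, f_max=5.5,
                       v_min=0.75, v_max=1.35, leak_w=1.0)
    e = perf_at_budget(budget, ipc=1.0, f_min=0.8, f_max=4.0,
                       v_min=0.65, v_max=1.10, leak_w=0.3)
    winner = "E-core" if e > p else "P-core"
    print(f"{budget:>2} W budget: P-core {p:.2f}  E-core {e:.2f}  -> {winner} wins")
```

With these made-up numbers the E-core wins at 2 W and the P-core pulls ahead from roughly 4 W upward, which is exactly the crossover behaviour described above.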
I'm also surprised CPU manufacturers haven't looked at the automotive industry regarding efficiency. Cylinder deactivation has been around for quite some time and it would be nice to be able to manually deactivate, say, half my CPU cores, if I know I'm traveling and need max battery life for checking email and/or watching Netflix.
They absolutely do! First of all, CMOS transistors mostly use power on state transitions, so zero clock means close to zero power. Beyond that, cores, cache blocks, and everything else get stopped, drained, and turned off in lots of different ways, but that costs some energy, as does turning them back on, so it's not a freebie.

Compared to the automotive industry, power management on modern CPUs is way more advanced and layered, even if normal users typically don't get direct control and stupid applications may choose to drain your batteries anyway. But if you really want to, you can control the core usage for each of your applications via the Task Manager, APIs, or, somewhat more comfortably, with tools like Lasso. On Linux you have even more control and can change power limits on the fly.
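And for the cylinder-deactivation analogy: Linux exposes pretty much exactly that through sysfs. A minimal sketch (needs root, assumes the standard CPU hotplug and cpufreq interfaces are present, and note that cpu0 usually can't be taken offline):

```python
import glob

CPU_SYSFS = "/sys/devices/system/cpu"

def set_core_online(cpu: int, online: bool) -> None:
    """Hot-plug a logical CPU on or off via sysfs -- the software analogue
    of cylinder deactivation. cpu0 typically has no 'online' file."""
    with open(f"{CPU_SYSFS}/cpu{cpu}/online", "w") as f:
        f.write("1" if online else "0")

def set_governor(governor: str = "powersave") -> None:
    """Switch the cpufreq scaling governor for every online CPU."""
    for path in glob.glob(f"{CPU_SYSFS}/cpu[0-9]*/cpufreq/scaling_governor"):
        with open(path, "w") as f:
            f.write(governor)

if __name__ == "__main__":
    # Illustrative "travel mode" on an assumed 16-thread machine: park the
    # upper half of the logical CPUs and favour low clocks on the rest.
    for cpu in range(8, 16):
        set_core_online(cpu, False)
    set_governor("powersave")
```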
 
  • Like
Reactions: snemarch
Sorry should have been 50% market share.

No idea if AMD has 50% margin on EPYC, too. But it's EPYC that swell AMDs pockets, desktops help with scale and hanging Intel out to dry.

But V-cache EPYCs certainly sell at much higher margin than the $20 per CCD production cost I've seen quoted for the 1st generation V-cache.
They had ~27% margin on servers last year; if TSMC increases prices by 20% in the US, they will barely make any money on servers unless people keep buying them at 20% higher prices.
https://ir.amd.com/news-events/pres...-quarter-and-full-year-2024-financial-results
Revenue: $12,579M, operating income: $3,482M (≈ 27.7% operating margin)
 
The V-cache CCDs were definitely designed for EPYC, not for gaming.

Putting them onto a desktop die carrier with a desktop IOD was AMD engineers having fun, while the ability to win benchmarks then became more of a "marketing design", even if there were plenty of engineers sweating blood to make that a product.

It definitely turned out a strategic weapon for AMD, but I remain sceptical that technical ability alone is enough for Intel to fight back successfully.
Ahh, didn't know it had appeared in EPYC first.
 
They had ~27% margin on servers last year; if TSMC increases prices by 20% in the US, they will barely make any money on servers unless people keep buying them at 20% higher prices.
Well, that's basic business management 101. If it costs more to produce something, let's say as a result of higher costs from TSMC, then that cost will be passed on to the consumer. So their margins will be more or less the same.
 
The question is if people will still buy them at the raised price or if they will just wait for prices to come down.
Yes, that is the question. But with the cost of new nodes being much higher than it used to be, that pretty much leaves TSMC as the only real player in town. Waiting for pricing to come down could take a long time, if it ever happens. Prices will keep going up, especially with prohibitive tariffs in place and the increasing cost of R&D.

Edit: I do hope Intel remain in the foundry business. Otherwise it's all TSMC. Samsung have crap yields on their advanced nodes.
 
  • Like
Reactions: snemarch
I can see the benefit on a battery-powered device, or a lower-power PC that idles a lot, like an everyday PC for browsing, email, and YouTube, but a high-performance desktop? You have to be kidding me.
No, I want the highest performance desktops to use as little power as possible while idling. It's a good thing.

That's possibly the job of LPE-cores. If they're located on the I/O chiplet and can handle the background threads on their own, the other chiplets can be powered off completely.

E-cores aren't for that. They're for boosting multi-threaded performance per area. Maybe (perf/area)/watt.
You can say that Nova Lake will have more 3D cache than AMD's "current" CPUs, but Nova Lake will come out around the same time as Zen 6, which MLID leaked is designed to incorporate as much as 240 MB, a lot more than Nova Lake will have.
In a newer video he says a second layer on the cache chiplet is unlikely for consumers. So Zen 6 X3D would have 144 MiB per CCD (likely only one again), similar to Nova Lake.

I'd be more worried about Intel cancelling theirs. I wanted to see Adamantine L4/VRAM and that never happened.
Honestly I won't go back to Intel anytime soon.

They have a LOT of trust to rebuild. Even if they make a comparably performant product, I trust team red more than team blue atm.
You know what Intel's good for? Getting cheap refurbished PCs from offices to you. I'd love to use a Core Ultra 5 245T in the future. But I don't want to pay more than $200 for the whole system.
 
  • Like
Reactions: snemarch
Is it just me?
I don't see the benefit of E cores or LPE cores on a desktop. It's plugged into a wall and why would you let a high power CPU idle? I'd rather have it complete a job faster with more P-cores, especially if it has an extra large L3 cache.
If you got what you're suggesting here, every Intel desktop CPU released from ADL onward would have had worse MT performance than AMD's, been a lot more expensive, and/or needed a much higher power budget. The primary benefit of E-cores on desktop is performance per area: a 4-core cluster of E-cores gives quite a bit more performance than a single P-core. While the E-cores are getting bigger, their performance has been scaling accordingly, which is probably where the rumors about a core convergence in a few years come from.
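Rough, illustrative numbers make the perf/area point clearer. These are ballpark figures often quoted for the Alder Lake generation, not official specs: a four-E-core cluster fits in roughly the area of one P-core, and each E-core delivers maybe half the throughput of a P-core.

```python
# Back-of-the-envelope perf/area comparison. The numbers are assumptions for
# illustration: a P-core taking ~4x the area of an E-core, and a single E-core
# delivering ~0.55x of a P-core's throughput on multi-threaded work.
P_CORE_AREA = 4.0            # relative area units
E_CORE_AREA = 1.0
E_CORE_RELATIVE_PERF = 0.55  # per-core throughput vs. one P-core

cluster_perf = 4 * E_CORE_RELATIVE_PERF   # 4 E-cores in ~one P-core's footprint
cluster_area = 4 * E_CORE_AREA

print(f"P-core:            1.00 throughput in {P_CORE_AREA:.1f} area units")
print(f"4x E-core cluster: {cluster_perf:.2f} throughput in {cluster_area:.1f} area units")
print(f"Cluster advantage: {cluster_perf:.1f}x the MT throughput in the same area")
# Under these assumptions the cluster gives ~2.2x the multi-threaded throughput
# of a single P-core in about the same silicon footprint.
```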

As for LPE-cores there's no good reason for them to exist in desktop that I can discern except for re-use of the SoC tile across mobile and desktop.
 
  • Like
Reactions: Roland Of Gilead
The rumors surrounding NVL are so all over the place that it's hard to even reasonably speculate about what it will and won't be. It does make sense, though, that Intel would do something to address the performance gains extra cache brings in some games, now that they're using tiles across the board.
 
As for LPE-cores there's no good reason for them to exist in desktop that I can discern except for re-use of the SoC tile across mobile and desktop.
If the scheduler uses them properly, lowering idle power is a great reason to include them in desktop. Lots of energy and money will be saved across tens of millions of users. AMD is likely going to do this starting with Zen 6. A decade from now we will not be having the "I don't care how much power my desktop uses at idle" conversation because it will work seamlessly.

If there is a scheduling issue, or someone doesn't want them active in future HEDT or servers, disabling LPE/LP cores in BIOS could be the quick fix, without sacrificing much or any performance. Of course, E-cores intended to boost multi-threading performance are a completely different story.
 
  • Like
Reactions: snemarch
