News: Intel announces Arrow Lake and Lunar Lake will arrive in 2024 with 3 times more GPU and AI acceleration performance

I'm confused. I want to upgrade my laptop to the Dell XPS 16 with the Intel Core Ultra 9 185H (Meteor Lake) coming out in February, but now, a couple of weeks later, they announce that Lunar Lake is coming to laptops this year! Why would anyone buy Meteor Lake, which is coming out right now on high-end (and other) laptops, when Lunar Lake will have 3 times the AI processing power of Meteor Lake?
Well, I just built a top-end PC a month ago using the 13th-gen refresh chip (i9-14900K), when I know a generational change (Arrow Lake) for the PC is probably 8 or 9 months away. Same thing, I guess. But that Meteor Lake to Lunar Lake window is very tight. Why get Meteor Lake now?
Because it moves the goalposts to thinking about "the next thing" and forgetting how lame the "current thing" is.

For all the "AMD is in the rearview mirror" talk from Mr. Pat, there's a lot of "wait for the next thing" going on with Intel.

Regards.
 
>I hope the NPU can be used for other purposes, especially for something like AI frame-gen.

I think image-gen (e.g. Stable Diffusion) will be a big draw, and a big push by MS, as having a local NPU should lessen the load on its end, both mitigating its costs and maximizing the appeal of a subscription.
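If anyone wants to play with that locally today, here's a minimal sketch using OpenVINO through the optimum-intel package (the checkpoint name is just my pick, and whether the NPU driver can take the whole pipeline yet is a separate question):

```python
# Rough sketch: local image-gen on Intel hardware via OpenVINO.
# Assumes: pip install "optimum[openvino]" and a Stable Diffusion checkpoint
# from Hugging Face (runwayml/stable-diffusion-v1-5 is just an example pick).
from optimum.intel import OVStableDiffusionPipeline

pipe = OVStableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    export=True,  # convert the PyTorch weights to OpenVINO IR on the fly
)
pipe.to("GPU")    # target the iGPU; swapping in "NPU" is something to experiment with

image = pipe("a watercolor of a lighthouse at dusk").images[0]
image.save("out.png")
```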

The ultimate goal of Windows is to become a service with recurring revenue, and at this point AI looks to be the driver. Before people freak out, I don't think Windows itself will require a sub, but features inside it will, like a premium level of Bing Image Creator. MS has already implemented MS Rewards, which is the first step toward freemium.


>Why do we need AI hardware in a CPU?

Because it's the most practical way to get gen-AI to run on end-user machines. With it built into CPUs, there's no extra cost, which will hasten adoption. If it were a discrete part (add-in card), adoption would take much longer.

That, and vendors are perpetually looking for ways to add value to their products. With the massive revenue bump Nvidia got for its AI wares, every chip vendor would want to get in on the game. Intel/AMD are no different.

MS has already invested $13B+ in AI, and will likely spend tens of billions more. Like it or not, Windows will go AI. AI requests now cost money, and getting some of the processing to run on local PCs is the best way to mitigate that cost (and keep a base level of service free).


>Same 6 P-core limit? No thanks. Who cares about AI. They need to release a CPU that's at least competitive with the 7XXX X3D chips in gaming, and desktop Meteor Lake is not it.

The world is a big place, with different wants and needs. Gaming is a popular driver for many, but AI will be too. One thing we learn as we pass adolescence is that not everything revolves around our wants.


>If Lunar Lake will have 3 times the AI processing power of Meteor Lake and the chips are already being shipped to Intel's partners, then what is the reason to buy Meteor Lake? The NPU was the only unique aspect and it's being superseded quickly.

Another thing I've discovered as I get older is that time is a valuable commodity, as we have less and less of it. "Opportunity cost" becomes more real.

That said, yeah, gen-AI is the bleeding edge right now, and few people would pay extra for it. That's why having an NPU built into the CPU is critical for mass adoption. At some point, however, I foresee NPUs being split out into dedicated add-in cards as AI uses become more prevalent and demanding, just as GPUs are now.
 
Use cases for local NPUs have been talked about a lot: blurred backgrounds in video calls, foreground/background cropping, text-to-speech and speech-to-text, local translation, OCR, video and photo editing. These are fairly common tasks many people do routinely, and perf/watt increases quite a bit when using an NPU.
The NPU is not faster than either the CPU cores or the iGPU! Intel has been very transparent about this fact. However, it is much more efficient than either.

[Chart: Stable Diffusion pipeline running on all-CPU, all-GPU, and NPU-assisted configurations]

Source: https://www.tomshardware.com/news/i...meteor-lake-architecture-launches-december-14

The main use case for it is to extend battery life or reduce heat output, though it can also offload the CPU/GPU for sufficiently lightweight tasks.
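To make the offload part concrete, here's a small sketch of the usual pattern with OpenVINO: run on the NPU if the machine has one, otherwise fall back (model.xml is just a placeholder name for a network already exported to OpenVINO IR):

```python
# Sketch: "use the NPU if present, otherwise fall back" device selection.
# Assumes a network already exported to OpenVINO IR (model.xml / model.bin).
import openvino as ov

core = ov.Core()
model = core.read_model("model.xml")  # placeholder file name

# Prefer the NPU for perf/watt, then the iGPU, then the CPU.
for device in ("NPU", "GPU", "CPU"):
    if device in core.available_devices:
        compiled = core.compile_model(model, device)
        print(f"Running on {device}")
        break
```

The point isn't raw speed; it's that the compiled model ends up on whichever engine does the work for the least power.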
 
> Meteor Lake to Lunar Lake window is very tight.
Probably about a year. Standard 1-generation interval. Meteor Lake launched in mid-December and I'd expect Lunar Lake to launch in Q4 of this year. I guess the weird part is that you have 3 consecutive generations that are each on a different process node (considering just laptop CPUs).

> Why get Meteor Lake now?
Well, competition, right? Also, Intel's partners need Intel to keep releasing new products. Finally, Meteor Lake represents an opportunity for Intel to gain more experience with tile-based architectures and will hopefully make Lunar Lake that much better. They might even reuse certain Meteor Lake tiles in Lunar Lake.
 
> The NPU is not faster than either the CPU cores or the iGPU! Intel has been very transparent about this fact. However, it is much more efficient than either.
> [Chart: Stable Diffusion pipeline running on all-CPU, all-GPU, and NPU-assisted configurations]
>
> The main use case for it is to extend battery life or reduce heat output, though it can also offload the CPU/GPU for sufficiently lightweight tasks.

For sure it's a huge efficiency increase. But I never said the NPU is there to make things more powerful, but rather to perform those tasks with better perf/watt (efficiency).

On a side note, I've seen a lot of narcissistic takes around the NPU. People here (and on other forums) genuinely question why Intel would even bother with an NPU because they personally don't need one, as if Intel (or AMD) were supposed to custom-tailor their chip designs to those users' specific use cases.
 
> Will the Arrow Lake introduction finally fix the 'over-heating' issue once and for all?
Well, there were leaked benchmarks, where Intel supposedly tested it with the same PL2 as Raptor Lake. So, I'd say signs point to it using similar power.

> I for one will not be inclined to utilize any free or cheap OEM-offered CPU retrofit brackets for cross-support
LOL, wut? No, I don't see that happening. Those same benchmarks I mentioned above didn't show much gain in P-core performance, anyhow.

> One thing is for sure, however: my company will not be offering any raises this year and a hiring freeze has been in motion.
Is this still the job where you're driving forklifts on the 3rd shift? Or is this the job where you're a middle manager for a crown prince of Dubai and get flown all over the world, with a generous relocation package? Somewhere, I seem to have lost track.

Maybe you could also tell us about the Special Boat Unit of Mare Island.
 
> The NPU is not faster than either the CPU cores or the iGPU! Intel has been very transparent about this fact. However, it is much more efficient than either.
> [Chart: Stable Diffusion pipeline running on all-CPU, all-GPU, and NPU-assisted configurations]
>
> The main use case for it is to extend battery life or reduce heat output, though it can also offload the CPU/GPU for sufficiently lightweight tasks.
That chart seems to show that the NPU is indeed much faster than using just the CPU (but still slower than running on iGPU). But you say the NPU is slower than either. Am I missing something?
 
> That chart seems to show that the NPU is indeed much faster than using just the CPU (but still slower than running on iGPU). But you say the NPU is slower than either. Am I missing something?
Yeah, sorry - I was thinking of AMD's Phoenix, where Ryzen AI isn't quite faster than the CPU cores.

Anyway, it's a weird chart, because the NPU case apparently isn't running the Text Encoder or VAE Decoder. Don't know how expensive those are, but that makes it not directly comparable to either the All CPU or All GPU cases.
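For what it's worth, that kind of split is easy to reproduce, since OpenVINO lets each sub-model be compiled to its own device, so a benchmark can put only the UNet on the NPU and leave the rest elsewhere. Rough sketch (the IR file names are made up):

```python
# Sketch: compile each Stable Diffusion stage to a different device,
# similar to the split the chart implies. IR file names are hypothetical.
import openvino as ov

core = ov.Core()
text_encoder = core.compile_model(core.read_model("text_encoder.xml"), "CPU")
unet = core.compile_model(core.read_model("unet.xml"), "NPU")  # the heavy, repeated denoising stage
vae_decoder = core.compile_model(core.read_model("vae_decoder.xml"), "GPU")
```

So a number like that only covers part of the pipeline, which is why I don't think it's directly comparable to the all-CPU or all-GPU cases.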
 
> I'm confused. I want to upgrade my laptop to the Dell XPS 16 with the Intel Core Ultra 9 185H (Meteor Lake) coming out in February, but now, a couple of weeks later, they announce that Lunar Lake is coming to laptops this year! Why would anyone buy Meteor Lake, which is coming out right now on high-end (and other) laptops, when Lunar Lake will have 3 times the AI processing power of Meteor Lake?
> Well, I just built a top-end PC a month ago using the 13th-gen refresh chip (i9-14900K), when I know a generational change (Arrow Lake) for the PC is probably 8 or 9 months away. Same thing, I guess. But that Meteor Lake to Lunar Lake window is very tight. Why get Meteor Lake now?
If the pictures of LNL are accurate, and since they had one at CES they likely are, it is being aimed at the lower-power end of the market, as it will have on-package memory. That means LNL will predominantly take the place of the 14th-gen U parts, which are both RPL- and MTL-based. While I'd expect it to have a good efficiency and IPC boost, it would take quite a bit for something designed around 15W (or less) operation to match, let alone exceed, something designed around 45W. It's also entirely possible that the higher-power mobile CPUs will just be ARL-based and may not come until 2025.
 
> I don't think it acts any differently from main memory, and all of it will most likely be on the package. It may cut costs, increase achievable LPDDR5X speeds, and lower power consumption, all slightly.
>
> I think I know the Xeon products you're referring to, where you can use the HBM without external DDR5 memory, or as a cache.
Ahh, that's what I figured. Too bad, though; 8-16GB of HBM on-package that laptops could use in place of main memory, or that desktops could use as a massive high-speed cache (or as main memory for the iGPU), would be an interesting feature. Costs are likely still too high, though. Maybe AMD will try something like this in a few years, when even quad-channel memory isn't fast enough for their top iGPUs.
 
> Ian has a segment on Keem Bay which is interesting for the history of Movidius, which was around in 2005 doing video processing.
The first time I heard of Movidius was when Google used them in Project Tango, an AR-enabled tablet. It wasn't doing any AI processing, but rather finding keypoints and handling other low-level tasks to enable realtime SLAM. Only after that did Movidius pivot towards neural network inferencing, not long before Intel bought them.

It's interesting to see their SHAVE DSP cores appearing in Meteor Lake's NPU. I wish they'd publish an SDK so we could program those DSPs directly.

> It seems the Mobileye NPU might be used for OBS Studio broadcasting software processing, if it follows the Movidius history.
I thought Intel was in the process of spinning off Mobileye. According to Wikipedia:

"In December 2021, Intel announced its plan to take Mobileye automotive unit public via an IPO of newly issued stock in 2022, maintaining its majority ownership of the company. In October 2022, Intel offered 5–6% of outstanding shares raising $861 million by selling 41 million shares, valuing Mobileye at around $17 billion – more than what it originally paid in 2017."

I don't know why they'd be interested in such a niche market as that, especially when it seems like dGPUs probably pack enough horsepower to do whatever sorts of video processing you might want. The professional broadcasting market would seem too small and quite a distraction from their core focus of self-driving.
 