AI PC revolution appears dead on arrival — 'supercycle' for AI PCs and smartphones is a bust, analyst says

"Qualcomm’s apparent struggles with its new Snapdragon X chips for Copilot+ laptops indicate that demand for AI PCs is probably not great."


I think it says more about the demand for non-x86 Windows laptops than it does about the demand for AI PCs. Who's the genius who thought anyone would want a non-x86 Windows machine again?
 
To me, having these so-called local AI processing capabilities is simply a waste of die space. A beefier iGPU is beneficial to more people than an NPU.

This general lack of interest in AI may also be reflected in online AI solutions, where most people pivot to only one or two popular AI service providers, and I believe mostly use them because they're cheap or free. So there are a lot of so-called AI providers, but only a few have substantial user bases. It will blow up one day because there's simply no or low ROI. OpenAI can charge 200 bucks a month, but not many individuals will pay that kind of money. Corporates may pay for it, but if ideas run out and there still aren't enough users, they may scale down the licenses in a matter of time. So let's see.
 
A lot of the poor sales has little to do with Qualcomm or Arm being in devices.

It's Windows.
Windows is garbage, and that's the single reason I have not bought a Qualcomm-based PC.
Once there are good Linux distros available for Arm PCs and laptops, then I will buy.
 
Not going to naysay, but I’m not surprised. It’s early days and only early adopters and devs are getting much out of “AI” (someday I’ll drop the quotes but it has a lot to prove, first). Its use in photo and video work, and probably in audio as well (not my wheelhouse) is already entrenched. Developers will gain tools that use it in creating apps, and apps will use it as well. Upscalers in gaming benefit, too. But absolutely no need to rush…

With Apple's ReDrafter now included in NVIDIA's TensorRT-LLM framework, it's another leap. Hardware and software stacks still have inefficient methods of communication.

At the end of the day, I can only say “too soon” 😏
 
It’s early days and only early adopters and devs are getting much out of “AI”
That's kind of the problem: early adopters and devs have no use for "AI PCs" with hardware that is pitifully underpowered for training NNs and only barely able to run basic inference tasks. Those users are after devices with much more capable GPUs to work with, not the current crop of "AI PCs".
"AI PCs" are only useful to end users who will not be training any models themselves (and do not know what a model is in the first place), and those users have no need for an "AI PC" that doesn't actually do anything for them either.
 
It's good to know average users don't care much about the AI gimmick, as it is not a useful thing yet, but I believe it will be in the (maybe not so near) future.
 
Could someone please explain to me in what way an "A.I." PC (particularly an A.I. laptop) would be better than a homemade PC with top-of-the-line components? What part of the A.I. PC is actually superior? What "secret sauce" am I not getting my hands on?
 
Could someone please explain to me in what way an "A.I." PC (particularly an A.I. laptop) would be better than a homemade PC with top-of-the-line components? What part of the A.I. PC is actually superior? What "secret sauce" am I not getting my hands on?
The benefit of the NPU is better power efficiency (and sometimes performance) for inference tasks. If you are building a desktop PC, you could actually pick up an M.2 or PCIe card with an NPU on it if you really wanted to avoid using the GPU.

There is a side benefit of the push: Microsoft mandated 16 GB of RAM for Copilot+, pushing some 4/8/12 GB systems out of the market. I guess this hasn't had much of an effect, based on the Micron demand info.

If the initial AI push fails, overpriced hardware will go on clearance, which is always good for the consumer. I don't think the die space is going to be reclaimed. 45 TOPS is here to stay until it's more widely used, and we may even see 100 TOPS become the default within a gen or two.
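The power-efficiency argument for the NPU above is easy to sanity-check with back-of-envelope math. The throughput and wattage figures below are illustrative assumptions (a ~45 TOPS NPU drawing ~5 W vs. a hypothetical discrete GPU drawing ~120 W), not measured values:

```python
# Back-of-envelope: energy per inference on an NPU vs. a discrete GPU.
# All figures here are illustrative assumptions, not benchmarks.

def joules_per_inference(workload_tops: float, sustained_tops: float, watts: float) -> float:
    """Energy (J) = power (W) * time (s), where time = work / throughput."""
    seconds = workload_tops / sustained_tops
    return watts * seconds

# A hypothetical 2-TOP (tera-operation) image-tagging inference:
npu_energy = joules_per_inference(2.0, sustained_tops=45.0, watts=5.0)
gpu_energy = joules_per_inference(2.0, sustained_tops=200.0, watts=120.0)

print(f"NPU: {npu_energy:.2f} J per inference")  # slower per inference...
print(f"GPU: {gpu_energy:.2f} J per inference")  # ...but the GPU burns more energy
```

Under these assumed numbers the GPU finishes the task faster, yet the NPU uses several times less energy per inference, which is exactly the trade-off that matters for a laptop on battery.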
 
The truth is, most users couldn't care less whether "AI functions" run locally or in the cloud. It's hard to convince the average user why they should use one or the other when they don't understand or care about the problem to begin with. It's just as easy for them to type in the ChatGPT or Gemini or Gonk URL and go to town without buying a new laptop/desktop.
 
It's a classic "chicken and egg" problem.

You can't expect developers to target the AI PC spec until there's a significant installed base, but consumers don't want to pay extra for a hardware feature with virtually no apps.

Well, the hardware makers have taken the initiative and included the capability. As the article points out, this installed base will grow over time, almost by default. So, we have our egg. Now, to see if it hatches and what sort of creature emerges.
 
Yeah, what did analysts expect when a company as mediocre as Microsoft is leading the ship?
All that Microsoft introduced was sophisticated spyware.

And all that NPUs are is a higher price tag for a useless piece of hardware for most people.

A very few would find a use for a local SLM, and a bunch of people will run LLMs locally, but on a group of GPUs (and that is freaking expensive).

I DO use a local SLM on a humble 5700U for work, and it works great. But it is a very unique use case.

I'm an SEO and use it sometimes on Linux, with Screaming Frog for SEO analysis. And it is way easier to connect it with OpenAI or Gemini anyway.
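The "easy to connect it with OpenAI or Gemini anyway" point cuts both ways: many local runners expose an OpenAI-compatible endpoint, so the same request works against either. A minimal sketch of that, assuming Ollama's compatibility endpoint on its default port and a small local model (the URL and model names are assumptions for illustration):

```python
import json

def chat_request(prompt: str, local: bool = True) -> tuple[str, bytes]:
    """Build an OpenAI-style chat-completions request.

    The same JSON shape works against local servers that expose an
    OpenAI-compatible API (e.g. Ollama, assumed here at its default
    http://localhost:11434/v1) and against the hosted OpenAI API,
    so switching between them is just a base-URL and model change.
    """
    base = "http://localhost:11434/v1" if local else "https://api.openai.com/v1"
    body = json.dumps({
        "model": "llama3.2:3b" if local else "gpt-4o-mini",
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return f"{base}/chat/completions", body

url, body = chat_request("Summarize these page titles for SEO issues.")
print(url)
```

Point the same payload at whichever backend is available; the only code that changes is the base URL (and an API key header for the hosted service).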
 
This headline and Micron's announcement in October 2024 make no sense. Micron released DDR5 CSODIMMs and CUDIMMs. Essentially it's a very specific type of RAM that only works with Intel's 15th gen Core Ultra 200 series products. The CUDIMM is for desktops, and the CSODIMM is for laptops. The mention of AI workloads is hilarious.

Is the RAM faster because it has a clock generator on the module? Yes. Does having 15% faster throughput on a DOA Intel product justify buying Micron stock or a 200 series Intel processor with an 800 series motherboard? No.

I think if they worked with Nvidia to release faster memory on a cheap AI card with lots of VRAM, there would be a lot of demand. Also, I have never seen anyone in the consumer market claim that 15% faster RAM is why they're going to trash their current laptop/desktop to buy a new 200 series Intel CPU with a new 800 series Intel motherboard.

Nvidia is printing money with what they have now and has literally no reason to bother with CUDIMMs. They don't make cheap consumer products. Not when they sell H100s for ~$30,000 and can't even manufacture them fast enough to meet demand from enterprise customers.

Most technically inclined people are terrified of Intel's 12th through 14th gen overvolting bug that degrades the CPU. 15th gen failed because they put a 3 nm compute tile on a much older-process base tile, which needs a physical redesign (Panther Lake?). Even with a microcode update, Intel's latest and greatest can't compete with the 9800X3D or 5000 series AMD processors.

Most people don't use Microsoft's AI-driven Copilot, let alone go to Intel's AI page to try a cut-down AI chatbot, text summarizer, AI art tool, or similar. The software from Intel and Microsoft has nothing on bigger projects like CUDA-driven Stable Diffusion.

It's no wonder this product isn't selling. The intended market doesn't exist, and consumers like me don't want to switch to Intel 15th gen when the AMD offering is both cheaper and faster on most common workloads.
 
I mean, it's not like most people don't WANT to upgrade to the latest and greatest, or even to the latest but not greatest, but with the economy being what it is and prices being what they are, people just can't do it. When your whole sales pitch is "Marginally faster than the previous model, but with AI features!" it doesn't exactly make people line up for days outside Best Buy the way they did for iPhones in years past, especially when "AI" like Gemini Advanced or Copilot Pro is an additional monthly subscription that easily adds hundreds of dollars a year to the cost.
 
It's a classic "chicken and egg" problem.

You can't expect developers to target the AI PC spec until there's a significant installed base, but consumers don't want to pay extra for a hardware feature with virtually no apps.

Well, the hardware makers have taken the initiative and included the capability. As the article points out, this installed base will grow over time, almost by default. So, we have our egg. Now, to see if it hatches and what sort of creature emerges.
It's not this kind of problem at all. The software adds nothing, and neither does the hardware. What exactly does a device marketed as AI-enabled provide for the user?
 
It's not this kind of problem at all. The software adds nothing, and neither does the hardware. What exactly does a device marketed as AI-enabled provide for the user?
I think AI Generative Fill is a good example of the sort of feature that has practical value for end users:


There are lots of AI-powered audio filters, from restoration to instrument extraction or suppression, etc.

I might be willing to use a code editor that can point out errors or generate simple blocks of code, if it works well enough and is sufficiently unobtrusive.

I'm also looking forward to DLSS-like scaling and framerate enhancement, but for regular video files & streams.

I think there are plenty of examples, but it's a little hard to predict what sorts of ideas people will come up with. I think it would be a mistake to limit your thinking just to LLM-powered chatbots.