News AI PC revolution appears dead on arrival — 'supercycle' for AI PCs and smartphones is a bust, analyst says

Nvidia ... don't make cheap consumer products. Not when they sell H100s for ~$30,000 and can't even manufacture them fast enough to meet market demand from enterprise customers.
Their manufacturing throughput seems to be limited by their HBM suppliers, not the rate of compute die production. So long as that remains true, their consumer products are supplementary to their server GPU business, not in competition with it.
 
Yeah... who would have guessed that people don't want AI "features" such as "Recall" in their computers.
The problem is the Big Tech monopoly doesn't care what we want. They want to steal our data, under one guise or another. There is simply nothing I need AI for, either now or in the future. For those who cannot think and write, which is more and more high school students, AI will be a boon. Otherwise, not so much.
 
No one has proven to the average consumer why AI is a must-have on their computer or phone. This is because it simply is not, and so the Big Tech answer is, "Look at this shiny new object. Don't you want to have it first?" Apparently, although the economy has much to do with it, consumers are not seeing any value added for the extra dollars dropped down the rabbit hole.
 
I'm no expert on the subject, but I am genuinely worried about this whole AI business. I'd say consumers have, for once, been wise. Exactly why would an average consumer need an AI PC? I don't see any killer functionality that has arisen; the most useful one would be ChatGPT, but, as stated, it doesn't run locally. As a gamer I like DLSS, so I'm not saying AI is worthless, just that the level of excitement and investment generated around it ultimately means that AI needs to generate a lot of money at some point. How? We don't yet know. Not to mention the ecological footprint of that huge bet. It seems extremely high risk, and if it does become a bubble someday, who's going to pay for all that adventurous spirit? The average Joe is going to have to pay for this, as in every economic catastrophe of the past.

What doesn't inspire a lot of confidence is that Microsoft, for one, genuinely thought it could just say 'new shiny AI' and have consumers jump on a brand new PC, as directed. Like, these guys are not the smartest in the bunch.
 
If I indulge my cynical side, I think Microsoft was searching for a new strategy after their AR bid fizzled. With all the hype around AI, they panicked, thought they had missed the boat (like what happened with the internet back in the 1990s), and jumped in with both feet.

Sadly, their AR tech seemed to be the best aside from Apple's, but they've now basically gutted that entire part of their company and product lines. It might not have been quite ready for the average consumer, but their AR tech was pretty solid. With Apple's commitment seeming shaky, I think Meta is pretty much the only big player left, pushing it forward.
 
I think AI Generative Fill is a good example of the sort of feature that has practical value for end users:

There are lots of AI-powered audio filters, from restoration to instrument extraction or suppression, etc.

I might be willing to use a code editor that can point out errors or generate simple blocks of code, if it works well enough and is sufficiently unobtrusive.

I'm also looking forward to DLSS-like scaling and framerate enhancement, but for regular video files & streams.

I think there are plenty of examples, but it's a little hard to predict what sorts of ideas people will come up with. I think it would be a mistake to limit your thinking just to LLM-powered chatbots.
This is why I asked. None of this has any relevance to a regular user. There's nothing you mentioned that would benefit me while browsing the web, listening to music, etc. A chatbot is actually the only AI tech I find useful: it simplifies searching the web when search engines are useless. A local AI bot you could interact with, now that's something I'd find worth buying.
The other side is that users and tech companies have shown what AI should not be used for, and that didn't raise people's trust. The media don't help either. I have yet to see an article explaining the dangers and benefits of AI and how we should proceed with it in our daily lives.

I thought of one use case where an AI would be beneficial: ticket kiosks at train stations and metro stops. Elderly people are not proficient with tech, and too many times I've seen badly designed interfaces.
 
Do you think the current demand for HBM is driven by anything other than AI? Literally 99% of HBM demand is for AI stuff.
It's also used in HPC processors and some networking products.

Yes, the current demand spike is driven by AI training, but that's not all it's being used for.

Name one currently sold product with HBM not being marketed towards AI.
This is a bit difficult, because so many things are AI-adjacent, like Intel's Xeon Max (which happens to feature AMX). However, I think I've found one.

Microsoft deployed a Zen 4 EPYC with 96 CPU cores, 128 GB of HBM3, and no GPU cores!

There's also this custom HPC processor (version 1 is already deployed, but only in a special partnership), which they swear isn't targeting AI but just the pure HPC market:


I don't know specifically which products they're talking about, but Juniper says they've been using HBM for a while:

"In the latter half of the decade (2010s), Juniper was among the first in networking vendors in adopting the in-package High Bandwidth Memory (HBM), further consolidating multiple slices and multiple processing cores onto the same dies."

https://www.juniper.net/content/dam...ncy-with-asic-architecture-and-technology.pdf

Cisco, as well:

"The Cisco Silicon One architecture employs a hybrid-buffer scheme that benefits from both worlds – internal memory bandwidth and external memory size. With efficient usage and smart management of the HBM interface, it helps enable the unification of both high-bandwidth switching and routing in a single device, as demonstrated by our Q200 – a 12.8Tbps router with 8GB of buffer in HBM."

https://www.cisco.com/c/dam/en/us/s...white-paper-sp-hybrid-buffer-architecture.pdf
 
For me, AI is a lot like IoT.

But while the vast majority of "investors" who might have just joined for a "sure way to make a buck without effort" might lose their money, some others will manage to make healthy [to them] profits at their expense, by funding research and discoveries in a field they just couldn't have funded in any other way.

And I see it becoming a pattern: create broad hype to pay for developing a product you couldn't have paid for otherwise, and then sell it as a "survivor" in the niche you had envisaged in the first place.

I'd just love to see the so-called analysts eat the IoT predictions they made years ago. And I'd like to see those analysts eat the even bigger predictions they made for AI.

But neither means that either is a total failure. Those few who invest in harnessing the outcomes into sellable products will live well. And there is a good chance they had a hand in fanning the hype. There is little proof more convincing than Nvidia and what some of their customers are able to achieve.

I am not counting Microsoft as a sure winner in that game. And since it would imply them hijacking my life via a bit of software I let them install on my own personal computers only to perform routine administration tasks, I certainly want them to lose, quite badly and soon.

But for any normal John Doe to believe or invest in the hype without very critical thinking in advance, and analysis all along, is a recipe for losing money, not making a profit.

Is being part of the seemingly blind hyping immoral? For me personally, the answer is yes, quite simply because it violates Immanuel Kant's categorical imperative.
 
To me, this seemed inevitable. Even the companies who are trying to sell us on it can't seem to figure out how. Search engines prioritize an AI answer that's of questionable accuracy.

Phone companies advertise that AI means your new phone can actually do new things, which sounds like they're admitting that there wasn't much need for upgrades before, and then the examples they give of what to do with AI are things people already do with their phones successfully WITHOUT the AI capability.

What the general public wants doesn't need to be on a local machine, and even the advertising and marketing departments can't come up with something to convince the public otherwise.

It would've been surprising if it DIDN'T fall flat.
 
The market for AI PCs is underdeveloped. Most people think AI is ChatGPT delivered via the cloud. The reasons for hosting GenAI models locally are not apparent. At all. What's being advertised as an AI PC is really watered down compared to what a real AI PC would be, such as a current-gen CPU with an RTX 4090 and 24 GB of VRAM, or a PC built to host a PCIe version of an Nvidia A100 with 40 or 80 GB of VRAM. You quickly get into tens of thousands of dollars, but that's the minimum for live-hosting a 70B model. If you don't care how slow it is, you could run a SergeChat container with Hugging Face models in RAM instead of VRAM. One token per second gets old real quick.
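For a sense of scale, here's a rough back-of-the-envelope sketch of why a 70B model won't fit in consumer VRAM. The figures are my own assumptions for illustration (weights only, ignoring KV cache and runtime overhead), not measurements.

Code:
# Rough estimate of the memory a large language model's weights need.
# Assumed figures for illustration only, not benchmarks.

def weight_memory_gb(params_billion: float, bits_per_weight: int) -> float:
    """Gigabytes needed just to hold the weights at a given precision."""
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

for bits, label in [(16, "fp16"), (8, "int8"), (4, "4-bit quant")]:
    print(f"70B model @ {label:>11}: ~{weight_memory_gb(70, bits):.0f} GB")

# Prints roughly:
#   70B model @        fp16: ~140 GB
#   70B model @        int8: ~70 GB
#   70B model @ 4-bit quant: ~35 GB

Even aggressively quantized, the weights alone exceed an RTX 4090's 24 GB of VRAM, which is why you end up at A100-class cards, or spilling into system RAM and crawling along at a token or so per second.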
 
Not me. I'm ready for computers that one day return an actual file that exists on my computer, and another day give me a file they made up, or a file they replicated from someone else's computer!

It keeps my day interesting.

Ask it for an answer to a mathematical problem: one day it gives me a correct answer, the other day it makes up some stuff and I accept it as correct.

This is exactly what I want from my computer!!

/S
 
It makes me very happy to know that the efforts of technology leaders (imbeciles) to advance AI are failing, and I hope they continue to fail so that all companies abandon it because their financial losses are too great for them to continue pursuing something so stupid and undesirable. Not only am I pleased about that, I am equally pleased that it demonstrates a failure on the part of those companies to influence (manipulate) the consumer mindset about this unwanted and unwelcome technology. They will have to find a new way to advance their nefarious schemes and extortion plots to defraud the human race. I hope their losses are so painful and severe that they are reluctant and fearful of trying to pull another similar stunt in the future.
 