Intel confirms Microsoft Copilot will soon run locally on PCs, next-gen AI PCs require 40 TOPS of NPU performance


Gururu

I am concerned like everyone else, but maybe prematurely. Maybe MS and Intel won't push AI into every app such that the apps are crippled without it. Will AI be fundamental to office functions, web browsing, or gaming? I am really hoping not, because AI would be the final straw in having a company embedded into our lives. Despite whatever will be said, wherever AI is used, it will be an extension of the company behind it, which will reap whatever information it wants from the AI.
 

NinoPino

Clippy ran locally, everybody hated it.
Clippy was a joke, born when modems were 56k.

Cortana ran locally, everybody hated it.
Cortana continuously makes internet connections; you can see it if you are behind a proxy.

.... Microsoft really needs to get rid of whoever has been pushing this same stupid idea for the last 30 years.
This is the first time in history that a company has tried to integrate such an AI system into its OS.

Also, doesn't it take, like, "all the RAM you have" to run a LLM locally?
RAM often sits there unused, just like graphics memory or disk space.
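
For a rough sense of scale on the RAM question, here's a back-of-the-envelope sketch (assuming model weights dominate memory, with ~20% added for KV cache and runtime overhead; the model sizes are hypothetical, since Microsoft hasn't said what a local Copilot would run):

```python
# Back-of-the-envelope RAM estimate for a locally run LLM.
# Assumptions: weights dominate memory; ~20% overhead for KV cache and
# runtime; model sizes below are hypothetical, not Copilot's actual spec.

def model_ram_gb(params_billion: float, bytes_per_param: float,
                 overhead: float = 1.2) -> float:
    """Approximate resident memory in GiB."""
    return params_billion * 1e9 * bytes_per_param * overhead / 2**30

for params, quant, bpp in [(7, "fp16", 2.0), (7, "int4", 0.5), (3, "int4", 0.5)]:
    print(f"{params}B @ {quant}: ~{model_ram_gb(params, bpp):.1f} GB")

# 7B @ fp16: ~15.6 GB; 7B @ int4: ~3.9 GB; 3B @ int4: ~1.7 GB --
# far from "all the RAM you have" once the model is quantized.
```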
 

NinoPino

The one thing I don't want is a bloody Copilot key. It took Samsung and other phone manufacturers way too long to finally remove the hated dedicated Assistant key from their devices. Microsoft doesn't need to slap a new key on a keyboard, especially not when an easily assignable secondary function key or the existing Windows + C is perfectly fine.
Amen.
 

ihatewindowss

Clippy ran locally, everybody hated it.
Cortana ran locally, everybody hated it.

.... Microsoft really needs to get rid of whoever has been pushing this same stupid idea for the last 30 years.

Also, doesn't it take, like, "all the RAM you have" to run a LLM locally?
It's ridiculous that they are on the third round of this bloatware. Oh, you want a search bar? What's a search bar? You mean a search bar that opens OUR browser and OUR assistant to tell you how to use a search bar? And you can't easily rip them out of your computer?

They did it again with Cortana, and everyone disables it. The whole point is to add an LLM that monitors the token distribution of everything you write (keylogging, essentially) so the feds can mass-query it and see what people are writing without accessing their files, and to force an accelerator chip for an experimental generative-model field that is going to change a lot more over the next two decades. You should be able to buy AI accelerators on USB if you need an LLM. These companies are really going to garbage, forcing open-sourced solutions built on open Python libraries into mass production. You can download all these things yourself if you need them. Besides, LLMs are actually useless for real production; for boilerplating things they are OK.
 

ihatewindowss

That way the CPU and GPU aren't mega taxed all the time just to have HAL tell you it's not gonna open the garage door.

They didn't read 1984 or 2001. It's frightening that the "most advanced field" is run by authoritarian lunatics who can't be bothered to draw comparisons to the fictional works that directly describe the problems they are having.
 

t3t4

Well, if it's going to run locally, then it had better not be some crippled version of AI. When I ask how to build a nuclear bomb, I expect step-by-step instructions! ;)
 

usertests

So the next-gen Lunar Lake chips, with triple the NPU performance, will fall 25% short of the 40 TOPS goal. LOL. Unless Copilot can do one helluva lot more running locally, this is pointless hardware and software. Everything I've tried with it is either wrong, incomplete, or slower than doing it myself. Long way to go for both Intel and Microsoft.
If Microsoft allows combined TOPS to meet the 40 TOPS threshold, then it's no issue.

Now I'm thinking of Hawk Point, which hits a combined 39 TOPS (16 TOPS from the XDNA1 NPU + 23 from the rest of the chip), just short of that.
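
For anyone checking the math, a quick sketch (the 16 + 23 split is taken from the post above; treating "combined" TOPS as a plain sum across NPU, CPU, and GPU is an assumption, since Microsoft hasn't published how it counts):

```python
# Quick check of combined TOPS against the reported 40 TOPS bar.
# Summing NPU + CPU + GPU throughput is an assumed counting rule.
REQUIRED_TOPS = 40

hawk_point = {"XDNA1 NPU": 16, "CPU + GPU": 23}
combined = sum(hawk_point.values())  # 39 TOPS

if combined >= REQUIRED_TOPS:
    print(f"Hawk Point: {combined} TOPS, meets the bar")
else:
    print(f"Hawk Point: {combined} TOPS, misses by {REQUIRED_TOPS - combined}")
```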
 
Mar 29, 2024
Thank you for sharing this informative gem! It's a pleasure to learn from someone as well-informed as you.
 

JTWrenn

I get why you don't want to run that on a laptop, but hopefully on desktops they don't limit it to the NPU only. Also, which TOPS: int4 or int8? Or something else?

This feels a little rushed and half-baked.
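
The int4-versus-int8 question matters more than it looks: vendors often rate the same NPU at roughly double the TOPS for int4 as for int8, so the precision behind the 40 TOPS bar changes which chips qualify. A toy illustration (the 2x ratio is a common rule of thumb, not a confirmed Intel or Microsoft spec):

```python
# Same silicon, different headline numbers depending on quoted precision.
# The 2x int4-vs-int8 ratio is a rule of thumb, not a confirmed spec.
int8_tops = 40
int4_tops = 2 * int8_tops  # throughput often doubles at half the bit width

print(f"int8 rating: {int8_tops} TOPS")
print(f"int4 rating: {int4_tops} TOPS")
# A chip marketed at 40 TOPS in int4 would land near 20 TOPS in int8,
# so whether the bar is int8 or int4 decides who qualifies.
```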