News The next Cortana: Copilot on Windows is no reason to buy a new PC

Note: As with all of our op-eds, the opinions expressed here belong to the writer alone and not Tom's Hardware as a team.


No, this is pretty much the opinion of almost anyone who knows anything about tech. "AI" as it stands can be a useful productivity tool, and it does save some time, but it's a far cry from being a world-changing sales driver for probably 99.99% of the population, and there's no reason to alter your upgrade plans because of it. Eventually, "AI" may reach the point at the consumer level where it can take a 480p video and perfectly upscale it to 4K, generate interactive content, and do the other things we associate with actual AI from sci-fi shows and movies. But as it is, unless you're rocking something pre-pandemic, the word "upgrade" shouldn't be on your mind unless you went low end.
 
I have tried multiple AIs now, and these things are common to all of them:
- Ask it for a good way to get rich, and it will refuse to answer and tell you to go see an investment specialist.
- Ask it to help with a medical issue, and it will spew out the same things a search engine would show you, then tell you to go see a doctor.

So I guess in the end, it's good for reminding you to get milk. It's not intelligent at all; at best it's an interactive search engine.
 
I have tried multiple AIs now, and these things are common to all of them:
- Ask it for a good way to get rich, and it will refuse to answer and tell you to go see an investment specialist.
- Ask it to help with a medical issue, and it will spew out the same things a search engine would show you, then tell you to go see a doctor.

So I guess in the end, it's good for reminding you to get milk. It's not intelligent at all; at best it's an interactive search engine.
The neutering of LLMs for "safety" reasons is artificial. But even if they were unchained, they still probably wouldn't give useful answers to these questions, since they would be drawing from a stew of good, OK, bad, irrelevant, and outdated advice available in their training sets.

Maybe something "intelligent" will arise as parameter sizes explode and different approaches are taken, but nobody should count on it.
 
The AI could be useful for solving problems with the OS, like when you can't remember how to use a command-line tool.

I used ChatGPT to write entire PowerShell scripts.

But the AI easily makes mistakes, writing completely wrong commands and using the wrong parameters. It's even worse than Linux, where every time you have a problem, somebody on the Internet has posted a command line to "solve it" without explaining what it does, so you may end up breaking more stuff instead of fixing it.
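The practical takeaway for me is to never run generated commands blindly. As a rough sketch of one way to force a review step (the allowlist, the cmdlet names, and the use of pwsh on the PATH are all assumptions for illustration, not anything ChatGPT or Copilot provides):

```python
import shlex
import subprocess

# Hypothetical allowlist: read-only cmdlets we are comfortable letting an
# AI-generated snippet invoke without a human looking at it first.
ALLOWED_CMDLETS = {"Get-ChildItem", "Get-Content", "Measure-Object", "Select-String"}

def run_generated(command: str) -> subprocess.CompletedProcess:
    """Run a single AI-generated PowerShell command, but only if the cmdlet it
    starts with is on the allowlist; otherwise force a manual review."""
    first_token = shlex.split(command)[0]
    if first_token not in ALLOWED_CMDLETS:
        raise ValueError(f"Refusing to run unreviewed command: {first_token!r}")
    # -NoProfile avoids surprises from profile scripts; requires pwsh on PATH.
    return subprocess.run(
        ["pwsh", "-NoProfile", "-Command", command],
        capture_output=True, text=True,
    )

if __name__ == "__main__":
    result = run_generated("Get-ChildItem -Recurse | Measure-Object")
    print(result.stdout)
```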

Anyways, I do not want an AI running feral in my OS. I uninstalled Cortana, and broke the parts that cannot be uninstalled so they don't even run, and I will never allow an AI from Microsoft to run on my computer with admin permissions and access to everything I do.
If I were willing to run such an AI, I would make sure it is not the one provided by the OS manufacturer.
 
I agree that AI LLMs are not world-changing in their current form. I also agree Windows Copilot is a joke as a reason to upgrade your PC. I don't need AI to use Windows any more than I need AI to drive my car. In its current rough-around-the-edges state, it's not a good application of the technology for speeding things up unless you're a user who is scared to death of a keyboard/mouse. Frankly, I am also aghast at how much money is being poured into companies that simply append "AI" to whatever product they are selling to make themselves 2x more valuable to investors. It's a great example of how capital allocation in our markets is terribly inefficient.

However, I can also tell you this:
  • It helped me write a killer resume that got me an amazing job after being laid off.
  • It helped my wife develop a business plan that blew the doors off her employer.
  • I have used LLMs almost daily to spice up presentations and to do preliminary brainstorming on concepts.
  • I have a curious mind, and I turn to LLMs at least a few times a week to answer simple "how does this work" type questions that search engines absolutely suck at.
  • I ask complex questions that draw from different data sets and have found a couple of answers that have blown my mind (once confirmed, of course! see more below).
  • For anything AI-generated that I use, I don't take anything the internet says as gospel, much less an LLM or a search engine. Interpreting and validating data is still as important as ever, if not more so.
So, to the many cynical folks here on Tom's, please keep downvoting and dismissing AI LLMs because you haven't adapted your workflow to use them optimally. This works to the advantage of everyone else who has found them useful and, at admittedly rare times, to have downright mind-blowing utility.

To be fair, I know the cynicism is in large part due to the breathless hyperbole and gold rush mentality surrounding the word AI at present (like "cloud" was a decade or so ago... irritated me to no end). We're technologists, and we have a finely tuned defense mechanism that is conditioned to be cynical of the next big thing. Just keep in mind, it's also the same feedback the web, video conferencing, web search, e-commerce, social media, cloud computing, YouTube, and streaming ALL received when those things first rolled out to way too much early hype with little substance.

The difference here is that LLMs today provide a meaningful competitive edge for knowledge workers and content creators who are learning to wield them. For productivity/enterprise purposes, they also have some immediate use cases (although the ROI is still debatable). I agree the promise of LLMs and AI in general is still a long way from being realized the way people talk about it, as if it were already here today, but I 100% believe it will be there in 10-20 years, and I wouldn't have said the same of all the other things I listed above (even with some hindsight... seriously, I still don't get YouTube).
 
I don’t think there’s a huge market of people who, for example, prefer local background blurring to the blurring that’s built into Google Meet or Teams or Zoom. Sadly for Windows, even the other promising uses of local AI are not features of Copilot and are more likely to be part of third-party applications.
The market is companies like Zoom, and MS itself with Teams, which can offload costly processing to the user. That means this will mean mostly nothing for most users now, but in a few years you'll need it for those Zoom and Teams features.

So they try to get people used to it today, and maybe to appreciate an application for it here or there, so that they'll have the upgraded computer in the future and won't notice that Zoom/Teams is now offloading the work to their machine.

It's today's version of JavaScript and VBScript.
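For a sense of what that offloaded work looks like, here is a rough sketch of local background blur on a single frame, using MediaPipe's selfie segmentation and OpenCV. This is only an illustration of the kind of per-frame compute involved; whatever model Teams or Zoom actually ship is not public, and the file names here are made up.

```python
import cv2
import mediapipe as mp
import numpy as np

# Person/background segmentation model that runs entirely on the local machine.
segmenter = mp.solutions.selfie_segmentation.SelfieSegmentation(model_selection=1)

frame = cv2.imread("frame.png")                   # one captured video frame (BGR)
rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)      # MediaPipe expects RGB input
mask = segmenter.process(rgb).segmentation_mask   # float mask, ~1.0 where a person is

blurred = cv2.GaussianBlur(frame, (55, 55), 0)    # heavily blurred copy of the frame
person = np.stack((mask,) * 3, axis=-1) > 0.5     # 3-channel boolean person mask
output = np.where(person, frame, blurred)         # keep the person, blur the rest

cv2.imwrite("frame_blurred_background.png", output)
```

Run something like that 30 times a second and you can see why vendors would rather the NPU in your laptop pay for it than their servers.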
 
My biggest complaint is not with Copilot so much as with how it's implemented. MS is fixated on side panels rather than separate programs. I have a 16:9 27'' monitor, and interacting with side panels feels very annoying. I can't imagine how much worse it would feel on a super-wide 21:9 >34'' monitor. This also isn't the first time MS tried side panels and failed.
  • In Vista they tried widgets slotted into a side panel. That failed, and they ended up bringing them out as movable icons in W7.
  • In W8 they tried Charms as a side panel. That failed, and they quickly ditched it (it didn't help that most people did not have touch screens but rather track pads, which were terribly inconsistent at swiping from the side).
  • In W10 MS created the Action Center, which I've rarely found usable other than for the occasional activation of Focus Assist, which I inevitably forget to turn off.
  • In W10 and 11 there is also the News and Interests toolbar, which annoyingly pops out as a panel. I've seen many mainstream users confused about why a massive panel jumps out at them after they inevitably (and accidentally) move their mouse over the icon.
Now it's Copilot. Side panels don't work. Nobody wants them. It should be a separate program, not a panel.
 
  • It helped me write a killer resume that got me an amazing job after being laid off.
It's antisocial to reward people for using AI in transactional, nominally zero-sum, interpersonal interactions such as hiring/firing. It's antisocial in general for interpersonal interaction except to facilitate communication (translation, medical issues, teaching, et cetera).

It's really pretty bad in resumes and interviews though. Resumes and hiring already make presentation, and more importantly self-selling, part of the measuring stick, and this just increases it. For sales and presentation oriented jobs this is fine (until a person needs to sell or present on the fly), but for other jobs it's measuring the wrong thing and doesn't guarantee the best fit between job-seeker and employer.
seriously, I still don't get YouTube).
YouTube is basically a source of millions of microchannels. It offers standard, prettified sources of entertainment/news/learning in much greater abundance than television, and it gives broad reach to barebones entertainment/news/learning. It is the acme of public-access and infotainment television, far greater than could have been imagined way back when.

Your other points are quite decent.
 
A solution in search of a problem.
Oh, Microsoft has a problem that this solves; don't discount that. There's no search happening.

We just recently learned that Microsoft set the baseline for an AI PC at 16GB of memory. So the problem it solves is: need more revenue! Sell more computers! More licenses sold!

Microsoft has a bank account problem. You're going to make that account bigger by buying more stuff.

There are no illusions. Microsoft knows exactly what it's doing. Now, who here doesn't need a new computer? Meh, buy one anyway. 'Cause "AI".
 
I feel like this is all a gimmick at the consumer level, because it will definitely not give you an upper hand, much less put you on par with any corporation or government exploiting it, and I find it peculiar that everyone essentially released a product all at once.
That aside, Copilot is trash, and it seems silly to upgrade kit for it, as compulsive as some of us are about that. The suggestions are useless, and its biggest fault is that it's programmed to think with a bias toward the contemporary quotas these corporations have committed themselves to, Google being a prime example. If you want something very particular, it will push MS's B.S. foremost and obfuscate anything else. How are people supposed to trust something that is going to algorithmically lie to your face?
 
For general queries, like asking for a summary of a long passage or write-up, the likes of ChatGPT and Copilot may help give us a quick understanding. But to be honest, how accurate is the summary? Unless you read the write-up yourself, it is something you may not be able to validate. My concern is around the algorithm and source black boxes, which are opaque to end users. Over-reliance on such technology will just make us lazy, to the point that we can't tell if it's fact or fiction. For now, if you are using ChatGPT for coding, for example, you may still have the knowledge to identify errors, but that may not be the case in the mid to long term.
 
So, to the many cynical folks here on Tom's, please keep downvoting and dismissing AI LLMs because you haven't adapted your workflow to use them optimally.
- I write code for a living.
- I know how LLMs work.

With that out of the way, the current crop of LLMs can't write proper code to help me out no matter how good my prompt is -- there's always some detail that they miss or, worse, misinterpret.

If the request is complex enough and you ask them to include a fix for what they botched on the first try, chances are the new code will be totally different, even if I use an LLM with a 16K-token context window -- in other words, the attention span is worse than that of a sleepy toddler.

The worst part?

- It will argue with you when it is obviously incorrect, and it will do so in the most annoying fashion.
- If what you are asking it to do was not in the training dataset, it will invent stuff that looks like it might work instead of outright saying "I don't know".
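On the 16K-token point above: it's easy to check whether a request even fits in that window before blaming the model. A minimal sketch, assuming the tiktoken package is installed, that cl100k_base is a close-enough tokenizer for the model in question, and with a made-up file name:

```python
import tiktoken

CONTEXT_WINDOW = 16_384  # tokens, for the hypothetical 16K model discussed above

# cl100k_base is the encoding used by several recent OpenAI chat models;
# treat it as an approximation if you are targeting something else.
enc = tiktoken.get_encoding("cl100k_base")

with open("prompt_plus_code.txt", encoding="utf-8") as f:  # hypothetical file
    request = f.read()

n_tokens = len(enc.encode(request))
print(f"{n_tokens} tokens of a {CONTEXT_WINDOW}-token window")
if n_tokens > CONTEXT_WINDOW:
    print("The model can't even see the whole request, let alone keep it straight.")
```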
To be fair, I know the cynicism is in large part due to the breathless hyperbole and gold rush mentality surrounding the word AI at present (like "cloud" was a decade or so ago... irritated me to no end).
Not only because of that, but because the word "AI" is used to describe code that is basically doing statistical modelling.
Just keep in mind, it's also the same feedback the web, video conferencing, web search, e-commerce, social media, cloud computing, YouTube, and streaming ALL received when those things first rolled out to way too much early hype with little substance.
That's not really true -- every single one of those things was genuinely useful from the start and not only for those who created them.
The difference here is that LLMs today provide a meaningful competitive edge for knowledge workers and content creators who are learning to wield them.
In much the same way that colonizing countries, which plundered the rest of the planet for a couple of centuries, now have a meaningful competitive edge in capital, technology, and industry.

Current "AI" is just more of the same entitled colonizer mindset plundering.

Microsoft didn't ask me for consent before training GitHub Copilot on my code. Google added its chatbot exclusion for robots.txt after it had already indexed my website. The LAION-5B training set might have included photos I created without my consent. Reddit sold my posts for AI training without my consent, etc., etc.

Add to that the fact that current LLMs have an inherent pro-Western moral and cultural "values" bias, because of the data they were trained on and because of who trained them, and you realize that "free LLMs for everyone" are a form of Trojan horse for less developed societies, given away for free in order to indoctrinate them and make them part of the Borg collective.

So even if we disregard the current technical shortcomings, which may be addressed in the timeframe you are proposing, I'd really like to see the legal and moral issues addressed much sooner.

My inner cynic says that the likelihood of getting a fair resolution to copyright infringement complaints, considering the lopsided power distribution between AI model creators and users, is about the same as Ghana getting its looted gold back from London's British Museum and Victoria & Albert Museum.
 
It's antisocial to reward people for using AI in transactional, nominally zero-sum, interpersonal interactions such as hiring/firing. It's antisocial in general for interpersonal interaction except to facilitate communication (translation, medical issues, teaching, et cetera).

It's really pretty bad in resumes and interviews though. Resumes and hiring already make presentation, and more importantly self-selling, part of the measuring stick, and this just increases it. For sales and presentation oriented jobs this is fine (until a person needs to sell or present on the fly), but for other jobs it's measuring the wrong thing and doesn't guarantee the best fit between job-seeker and employer.

YouTube is basically a source of millions of microchannels. It offers standard, prettified sources of entertainment/news/learning in much greater abundance than television, and it gives broad reach to barebones entertainment/news/learning. It is the acme of public-access and infotainment television, far greater than could have been imagined way back when.

Your other points are quite decent.
We also have no idea if the poster got their job because the AI helped with their resume. As someone who has reviewed thousands of resumes over the course of my career, I can tell you that every manager is looking for particular things in a resume and the best resume is the one which highlights your ability to do that particular job.

And what does "helped" mean? If someone has an AI which reviews your copy to look for errors and inconsistencies, great. If you point it to your LinkedIn page and say "write me a resume," you might be disappointed.
 
The only problem here is that Copilot on Windows doesn’t do anything mainstream consumers or business users actually need. And it’s not clear what problems it solves, now or anytime soon.
Exactly my thoughts on this. Copilot appears to add complexity, lower system performance from the user's perspective, and worsen results. I can't identify a way it improves my computing experience. Copilot's habit of providing incorrect or irrelevant information also makes it untrustworthy. When it writes code, Microsoft's AI loves to implement vulnerabilities. I really don't want this drain on my resources around, as it seems like a liability rather than a helper.
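To be clear about the kind of vulnerability I mean, the classic pattern is generated code that splices user input straight into a query. This is a generic illustration of that pattern, not actual Copilot output:

```python
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # The trap generated code often falls into: user input concatenated
    # straight into the SQL string, which is an injection vulnerability.
    return conn.execute(
        f"SELECT id, name FROM users WHERE name = '{username}'"
    ).fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # Parameterized query: the driver handles quoting, so a malicious
    # username can't change the shape of the statement.
    return conn.execute(
        "SELECT id, name FROM users WHERE name = ?", (username,)
    ).fetchall()
```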

The one thing I was using is the image creator. I'm not sure how they do it, but Microsoft's image creator tends to deliver superior results compared to Stable Diffusion and is leaps and bounds better than DALL-E. However, Microsoft's image creator seems to think generating character designs is offensive now. I see that mopey dog all the time, making the tool far less useful.
 
Part of the problem: I am content (and have been for years) with how I use my computers. I have everything set up exactly how I want it: all my browser shortcuts, all my streaming logins, my gaming services, and my remote/work software. I don't surf, because everything I use and enjoy is already a shortcut. I don't develop or create anything digitally. It's very rare that I need to research something on the web or solve a problem that I can't just quickly type into a search bar (most of the time it's to make sure I am spelling something correctly). For my line of work (in the IT field) and my play time (gaming or streaming), there is zero need for Copilot, just like there was zero need for Cortana or anything before it.

I haven't even used Siri in years, except to try it out at the beginning, say "that's pretty neat"... and then never use it again. I did the same thing with ChatGPT: had it write code in BASIC for fun to animate a bouncing ball. Haven't really used it since. Maybe I'm wrong; it just doesn't seem like so-called AI is for the masses.

Copilot feels like Microsoft's latest way of getting us to try something new, different, and pretty just to sell us on their products. It feels like their think tank is coming up with ideas just to justify change. Perhaps it's an attempt to lure the Apple/macOS crowd to their side. True AI gives a specific human answer or reaction to something (the best it can come up with), whether it's right or wrong. I am already sold on Windows anyway; it's my OS of choice whether it has so-called AI or not. I wish they weren't trying to sell me on something I already know and love to use. I understand there are certain applications in life where it could be useful, and for certain people. But if Copilot does not act like true AI... can we really call it AI??
 
I have tried multiple AIs now, and these things are common to all of them:
- Ask it for a good way to get rich, and it will refuse to answer and tell you to go see an investment specialist.
- Ask it to help with a medical issue, and it will spew out the same things a search engine would show you, then tell you to go see a doctor.

So I guess in the end, it's good for reminding you to get milk. It's not intelligent at all; at best it's an interactive search engine.
Yes, as it's currently available in Copilot, it is a glorified search engine, but it DOES do a good job of that, provided you ask a well-phrased question. Its replies ARE thorough and DO leave out most of the irrelevant answers. But no ONE and no THING is going to tell you how to get rich quick or give you a quick cure for cancer. Ask for a list of topics for a lecture to a specific group of people on a specific subject, and in seconds you WILL get a GOOD list to use as the basis for that lecture.
 
The article was a hoot, and I was verbally cheering at the author's opinions, nay, factual observations. But what sent a chilling wave down my spine was the looming threat of adding a Copilot key to keyboards. Every couple of years I get a new Dell laptop issued to me by my employer, and I'm always curious to see which keys they managed to remove to save money. I remember a few years ago losing the "pause/break" key (which is commonly needed with some software I use), so now I have to use a 3-key combination to emulate it. The latest laptop pressed "reject" on the "page up" and "page down" keys and combined them with the "cursor up" and "cursor down" keys, which themselves were reduced to half-sized keys. Why add a useless Copilot key in the midst of an extinction-level event on keyboards? By the way, after 30 years as a gamer and software engineer, I have yet to use the "Windows" key, which already wastes a very valuable location on just about every keyboard manufactured today.