News Windows Copilot will add GPU support in a future release — Nvidia details the advantages of high performance GPUs for AI workloads and more

Admin


KnightShadey

Reputable
Sep 16, 2020
135
78
4,670
The first Windows Copilot implementations have snubbed Nvidia, but that will change with a future runtime.

So says nVidia 🙄 , but then they immediately removed any reference to Copilot+ from their material and keep referring to 'a form of Copilot'. 🧐 Which could be anything (any of the 7 flavours of Copilot, including a version of Copilot Pro, for all we know), but definitely/obviously NOT the Copilot+ that is in the current spotlight and only available to Qualcomm, AMD, and Intel. This is more nV NEEDING that spotlight for investor interest/dollars from folks who don't know AI from AR. 🤑

It's the Computex equivalent of a paper launch at CES, meant to have people delay purchases while nVidia plays catch-up. That's why they held their paper launch the day before AMD's: if it were tonight, people would have much tougher questions about this supposedly coming Copilot-for-nV solution. Up until last night they could ride on the coat-tails of AMD & Intel's 'coming soon' Copilot+ platforms, which will work with or without nVidia hardware, but not vice versa.

Yes, they rock the server and GPU AI space, and have the most/bestest software tools for developers out there, but that is not what Copilot+ is. And nVidia sees its stranglehold on the tools of the trade slipping a small bit as these other players carve out a chunk, so they make noise as if they are also involved in one of the big consumer growth areas they missed (still dwarfed by corporate AI revenue... for now, but what leads what in the sector, users or tools?).
They will maintain leadership for a long while, but any cracks of sunlight worry those same investors... "where is nV in this Copilot stuff?", same as those asking the same silly questions 2 weeks ago about AMD & Intel when Qualcomm launched. Doom & gloom for them because they weren't in the Day 1 announcement... which also prompted nV to launch their 'basic' reply about Copilot PCs, because they can't not be in the news when AI is mentioned.

Admittedly the Strix platform (especially if Halo is anywhere near expectations) seems like a great platform to build from WITH a powerful nV GPU (although the 4070 is just 'OK', and needs more than 8GB to be anything other than a helper to Copilot). That would let you run CPU/iGPU + NPU + GPU AI tasks simultaneously (according to AMD's and nV's comments) while giving you the ability to develop for all 3 levels of AI power, and also allow for multi-tasking: AI security running in a low-level background, productivity AI crunching a report/presentation/etc., and the powerful GPU available for the occasional AI-generated content, all on device. Granted, the memory requirements are more in line with portable workstation / desktop-replacement class laptops than with the mid-range of yesterday's 13-16" thin-and-light-ish offerings (not ultra-portable, but not really HX other than in name). 🤔
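To make the three-tier split above concrete, here's a toy Python sketch of the idea: route each AI task to the least powerful tier that can handle it, keeping the dGPU free for heavy generative work. All tier names and TOPS figures here are hypothetical placeholders for illustration, not real Copilot+ APIs or published specs.

```python
# Toy sketch of the CPU/iGPU + NPU + dGPU split described above.
# Tier names and TOPS figures are hypothetical placeholders.

TIERS = [
    ("npu", 45),    # always-on, low-power tier (background AI security)
    ("igpu", 100),  # mid tier (productivity AI crunching a report)
    ("dgpu", 320),  # high-power tier (occasional generative workloads)
]

def dispatch(required_tops: int) -> str:
    """Route a task to the least powerful tier that meets its TOPS need."""
    for name, tops in TIERS:
        if tops >= required_tops:
            return name
    raise ValueError(f"no tier can supply {required_tops} TOPS on-device")

# Three concurrent task classes land on three different tiers:
assert dispatch(10) == "npu"    # background security scan
assert dispatch(80) == "igpu"   # productivity summarisation
assert dispatch(200) == "dgpu"  # image generation
```

The point of the "least powerful tier first" ordering is exactly the multi-tasking scenario described: cheap background work never ties up the dGPU.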

For tasks THAT basic, which Strix, Lunar Lake and SDX Elite are going to be doing via Copilot+, why would M$ want to muddy the waters with unknown CPU? + ? GPU = Copilot Asterix* / Windows CoPilotRT2? It seems counter to the point of standardization and of focusing development within a specific framework. M$ should view WinRT as a perfect example of that: don't make Windows adapt to the subsets, make the subset adapt to Windows. Now ARM adapts to Windows, not vice versa; the same goes for CoPilotRT*, which would waste resources on what ends up being a temporary nV roadbump that poisons the well. Especially if nV is aiming to make its own Copilot+-level ARM CPU/APU by the time any of this collaboration could bear fruit.

Making Copilot Asterix for nVidia also seems like mucking with your luxury car's infotainment system because it's not available in the V12 track version of your vehicle. They are two different ecosystems that shouldn't be held up by each other. To me it's similar to the M1-M4 situation for Apple: you don't need to totally rework the iPad or laptop environments to make them work, but it would be nice to ADD the full features as a secondary focus while remembering the primary role of each. An iPad running iPadOS that can boot into macOS is far better than ruining the iPad by making it full macOS, or ruining iOS by adding terrible multi-tasking and calling it some showy name... err... well, perhaps lesson learned? 🤔

nV GPUs already have full access to models on both desktops and laptops... but no one is buying a thin-and-light RTX 4050/RX 7500 to do that serious work, rather than just get their feet wet. So why would M$ change the environment now for weak laptops, when they didn't do it for much more powerful desktops?

As for the discussion of power and nVidia's prior generations, it highlights one of their challenges: look at how the RTX 4050's AI throughput is similar to the RTX 2080 Ti's... then compare the price & power consumption. Sure, nV's argument is that they are currently orders of magnitude better than the first Copilot+ PCs, but the AMD CPU+NPU is already at ~100 TOPS even before Strix Halo adds more CPU cores, more NPU AI tiles and double the memory bandwidth. That's 1st-gen Copilot+ hardware, so why worry about nV (or AMD) GPUs for this specific portion, rather than make it so the CPU+NPU Copilot+ can work with ANY GPU to offload heavy workloads?

More resources put into scheduling etc. in Windows would likely produce better... well... results than wasting resources on a Copilot*Asterix for nVidia. Yes, make AI GPU development tools (from all 3 IHVs) work better on Windows, similar to Linux, and make GPU TOPS available to more apps on Windows, BUT leave Copilot+ to its specific role. That's a far better use of Windows dev time, and better for users as a whole, than making an RT version of Copilot for nVidia.

That all Nvidia can say is that they are 'developing/working with M$ on an API' seems pretty late to get something out this year that's anything more than tacked-on window(s) dressing.

Sure, the GPUs can crunch the workloads, but as has been mentioned... so could the previous GTX and RX series GPUs, and those didn't get any love from M$ when they had the chance. Adding it now (or, more realistically, just saying they will add it, then dropping CopilotRT* 'development' when their own ARM CPU comes out next year) seems more like a pure PR paper effort than anything that could replicate Copilot+ in the area it would be best at: low-level/low-power, always-on, fluffy 'convenience AI' akin to the return of Clippy... but perhaps useful this time. 📎 😜

nVidia just needs to be comfortable with being good at 8 out of 10 things and not try to spoil the other 1 or 2 just because no one is talking about them for 10 seconds. 🤨
 
^Summary? :unsure::grinning:

Here's a quick summary by ChatGPT (Powered by Nvidia):

The text critiques Nvidia's recent announcements and strategies around AI and Copilot+. Here’s a summary:

  1. Nvidia's Announcement and Strategy:
    • Nvidia initially mentioned "Copilot+" but quickly shifted to referring to "a form of Copilot," creating ambiguity about their specific offering.
    • Copilot+ is currently exclusive to Qualcomm, AMD, and Intel, not Nvidia.
    • Nvidia's move appears to be aimed at maintaining investor interest and market presence amidst growing AI competition.
  2. Market Position and Comparisons:
    • The announcement is likened to a "paper launch," intended to delay consumer purchases while Nvidia catches up with competitors.
    • Nvidia is seen leveraging the hype around AMD and Intel’s Copilot+ platforms, which can operate independently of Nvidia hardware.
  3. Nvidia’s Current Strengths and Concerns:
    • Nvidia dominates the server and GPU AI market with superior developer tools, but Copilot+ represents a consumer growth area they are missing.
    • Investors are concerned about Nvidia's absence in the Copilot+ arena, prompting a response from Nvidia to stay relevant in AI discussions.
  4. Strix Platform and Future Prospects:
    • AMD's Strix platform, combined with powerful Nvidia GPUs, could support diverse AI tasks.
    • The platform requires substantial memory, making it more suitable for high-end laptops rather than mid-range models.
  5. Microsoft’s Role and Standardization:
    • There's skepticism about Microsoft adapting to Nvidia’s requirements for a specific Copilot version.
    • The preference is for a standardized development framework, avoiding temporary or niche adaptations.
  6. Long-term Implications:
    • Nvidia’s efforts may seem more like a PR move rather than a substantive contribution to Copilot+.
    • Nvidia should focus on its core strengths and not try to overshadow competitors in every AI segment.
In essence, Nvidia is seen as trying to maintain relevance and investor confidence amidst AI advancements by other companies, but their strategy and announcements might be more about perception than practical, immediate contributions to the AI landscape.
 

KnightShadey

Here's a quick summary by ChatGPT (Powered by Nvidia): ...
LOL! Not really quick, nor much compression in that summary (including stating it's a summary twice). 😜
For you young'uns, the scientific-notation... errr... summary trick: read the opening statement/hypothesis and the conclusion, skipping the middle that represents the supporting data. 🤔😉

So says nVidia 🙄 , but then they immediately removed any reference to Copilot+ from their material and keep referring to 'a form of Copilot'. 🧐 Which could be anything (any of the 7 flavours of Copilot, including a version of Copilot Pro, for all we know), but definitely/obviously NOT the Copilot+ that is in the current spotlight and only available to Qualcomm, AMD, and Intel. This is more nV NEEDING that spotlight for investor interest/dollars from folks who don't know AI from AR.
. . .
nVidia just needs to be comfortable with being good at 8 out of 10 things and not try to spoil the other 1 or 2 just because no one is talking about them for 10 seconds. 🤨"
 
...LOL! Not really quick or much compression on that summary (including stating it's a summary twice)...
The first one was me; the GPT text starts at the second paragraph.


For you youngins...
I'll take it as a compliment as I'm not that young. Here on the site I'm 13 years your senior.

But yeah, this was all in good sport. Your text was quite long for a forum comment, so in the good spirit of AI bullshit, and in answer to the previous request, I fed it to ChatGPT.
 

KnightShadey

I'll take it as a compliment as I'm not that young. Here on the site I'm 13 years your senior.

Not really; I was here much earlier than that, just not under this name (I no longer have access to that email). Back then long posts were more common, even evolving into buyer's guides. 😉

Much of the old guard is gone (including Cleeve moving on to AMD), but a few familiar faces remain.

But yeah, this was all in good sport. Your text was quite long for a forum comment, so in the good spirit of AI bullshit, and in answer to the previous request, I fed it to ChatGPT.

That's how it was received, just poking fun in return. 😜

According to the Genius AI in my microwave, when I asked about nV and Copilot, it declared it was beep Beep BEEP, likely due to the foul-language filter. 🤪
Still better than your appliance trying to kill you I guess. 🤔🤣
 