The first Windows Copilot implementations have snubbed Nvidia, but that will change with a future runtime.
So says nVidia 🙄 , but then they immediately removed any reference to Copilot+ from their material and kept referring to 'a form of Copilot'. 🧐 Which could be anything (one of the 7 flavours of Copilot, including a version of Copilot Pro for all we know), but definitely/obviously NOT the Copilot+ that is in the current spotlight and only available to Qualcomm, AMD, and intel. This is more about nV NEEDING that spotlight for investor interest/dollars from folks who don't know Ai from AR. 🤑
It's the Computex equivalent of a paper launch at CES, meant to have people delay purchases while nVidia plays catch-up. That's why they held their paper launch the day before AMD's: if it were tonight, people would be asking much tougher questions about this supposed coming Copilot solution for nV. Up until last night they could ride on the coat-tails of AMD & intel's 'coming soon' Copilot+ platforms, which will work with or without nVidia hardware, but not vice versa.
Yes, they rock the server and GPU Ai space, and have the most/bestest software tools for developers out there, but that is not what Copilot+ is. And nVidia sees their stranglehold on the tools of the trade slipping a small bit, with these other players carving out a chunk, so they make noise as if they are also involved in one of the big consumer growth areas they missed (still dwarfed by corporate Ai revenue... for now, but what leads what in this sector, users or tools?).
They will maintain leadership for a long while, but any cracks of sunlight worry those same investors... "where is nV in this Copilot stuff?", just like the silly questions asked 2 weeks ago about AMD & intel when Qualcomm launched. Doom & gloom for them because they weren't in the Day 1 announcement... which also prompted nV to launch their 'Basic' reply about Copilot PCs, because they can't not be in the news when Ai is mentioned.
Admittedly the Strix platform (especially if Halo is anywhere near expectations) seems like a great base to build from WITH a powerful nV GPU (although the 4070 is just 'OK' and needs more than 8GB to be anything other than a helper to Copilot). That would let you run CPU/iGPU + NPU + GPU Ai tasks simultaneously (according to AMD and nV's comments), give you the ability to develop for all 3 levels of Ai power, and allow for multi-tasking (like Ai security running in the low-level background, productivity Ai crunching a report/presentation/etc., and the powerful GPU available for the occasional Ai-generated content, all on device). Granted, the memory requirements are more in line with portable workstation / desktop replacement class laptops than the mid-range of yesterday's 13-16" thin&light-ish offerings (not ultra portable, but not really HX other than in name). 🤔
For tasks THAT basic, which Strix, Lunar Lake and SDX-Elite are going to be doing via Copilot+, why would M$ want to muddy the waters with unknown CPU? + ? GPU = Copilot Asterix* / Windows CoPilotRT2? It seems counter to the point of standardization and of focusing development within a specific framework. M$ should view WinRT as a perfect example of that: don't make Windows adapt to the subsets, make the subsets adapt to Windows. Now ARM adapts to Windows, not vice versa, and the same should go for CoPilotRT*, rather than wasting resources on what ends up being a temporary nV roadbump that poisons the well. Especially if nV is aiming to make its own Copilot+ level ARM CPU/APU by the time any of this collaboration could bear fruit.
Making Copilot Asterix for nVidia also seems like mucking with your luxury car's infotainment system because it's not available in the V12 track version of your vehicle. Two different ecosystems that shouldn't be held up by each other. To me it's similar to the M1-4 situation for Apple: you don't need to totally rework the iPad or laptop environments to make them work, and it would be nice to ADD the full features as a secondary focus, but remember the primary role of each. An iPad running iPadOS that can boot into macOS is far better than ruining the iPad by making it full macOS, or by bolting terrible multi-tasking onto iPadOS and calling it some showy name... err... well, perhaps lesson learned? 🤔
nV GPUs already have full access to models on both desktops and laptops... but no one is buying a thin-and-light RTX 4050/RX7500 to do that serious work rather than just get their feet wet. So why would M$ change the environment now for weak laptops, when they didn't do it for much more powerful desktops?
As for the discussion of power and nVidia's prior generations, it highlights one of their challenges: look at how the RTX 4050's Ai throughput is similar to the RTX 2080 Ti's... then compare the price & power consumption. Sure, nV argues that they are currently orders of magnitude better than the first Copilot+ PCs, but the AMD CPU+NPU is already at ~100 TOPS, even before Strix Halo arrives with more CPU cores, more NPU Ai tiles and double the memory bandwidth. That's 1st-gen Copilot+ hardware, so why worry about nV (or AMD) GPUs for this specific portion rather than make it so the CPU+NPU Copilot+ can work with ANY GPU to offload heavy workloads?
More resources put into scheduling etc. in Windows would likely produce better... well... results than wasting resources on a Copilot*Asterix for nVidia. Yes, make Ai GPU development tools (from all 3 IHVs) work better on Windows, similar to Linux, and make GPU TOPS available to more apps on Windows, BUT leave Copilot+ to its specific role. That's a far better use of Windows dev time, and better for users as a whole, than making an RT version of Copilot for nVidia.
That all nVidia can say is that they are 'developing/working with M$ on an API' seems pretty late to get anything for this year that's more than tacked-on window(s) dressing.
Sure, the GPUs can crunch the workloads, but as has been mentioned... so could the previous GTX and RX series GPUs, and those didn't get any love from M$ when they had the chance. Adding it now (or, more realistically, just saying they will add it, then dropping CopilotRT* 'development' when their own ARM CPU comes out next year) seems more like a pure PR paper effort than anything that could replicate Copilot+ in the area it would be best at: low-level/low-power, always-on, fluffy 'convenience-Ai' akin to the return of Clippy... but perhaps useful this time. 📎 😜
nVidia just needs to be comfortable with being good at 8 out of 10 things and not try to spoil the other 1 or 2 just because no one is talking about them for 10 seconds. 🤨