News Nvidia and MediaTek may unveil jointly developed 'N1' Arm chips for Windows PCs at Computex

I don't have a lot of faith in MediaTek Windows drivers after what they did with... well, just about every Wi-Fi/BT card they've offered.
I'd use an AX210, if only the RZ608 were removable from my mobo.
 
I feel like I still don't understand why.

Nvidia already makes their own ARM CPUs, using the same reference Arm core designs that MediaTek has access to. Nvidia already has OEM connections and deals with foundries that can support a high-volume product. Nvidia's drivers have been going through a rough patch, but MediaTek's drivers seem to be consistently considered poor. Nvidia's brand has clout, whereas MediaTek has always been a second-tier player.

The N1X "up to" spec of 10+10 Cortex-X925/A725 cores with Blackwell graphics is the same as GB10. Are GB10 and N1X the same chip? If not, why are they making two chips that are so, SO similar with one in-house and one as a joint development? (Would it make sense to have two different dies with the same CPU but different iGPUs?) If they are the same chip, why the obfuscation and the delay? Is it just Windows drivers not being ready?
 
I feel like I still don't understand why.

Nvidia already makes their own ARM CPUs, using the same reference ARM core designs that MediaTek has access to. Nvidia already has OEM connections and deals with foundries that can support a high-volume product.
I know, right? That's what I'm talking about!

As for the hardware specs... I'm going to sit tight and wait for more firm info. However, I think there's got to be some connection between GB10 and whatever Nvidia+Mediatek have been cooking up for Windows laptops. I just don't yet see where GB10 fits into the picture, but I'm sure it'll become clear.
 
Folks, all ARM-based Windows has to do is provide a better experience for the common Windows consumer than x86 does. They aren't far off, given that 1) laptops are the most common Windows PC form factor and 2) battery life is noticeably better than on their x86 counterparts.

Wendell from Level1Techs and Steve Burke from Gamers Nexus had a video on this recently. It was infuriating to me at first as an old-school x86 die-hard, but Wendell is right at the end of the day. So, by the time Nvidia enters the market and Windows itself and Windows apps continue to get smoothed out on ARM (or RISC ISAs more broadly), I think we'd all be surprised at the market penetration Nvidia will reach. After all, we're just the goons at home, but by golly, they know how to make $$$! Seriously, who's to stop them? Really: who, why, and how? Deeper integration is only a win for company X, organization Y, and customer Z, so...

As mentioned by others already, it's a surprise that Nvidia hasn't made a splash sooner. Some of us also know that they tried and failed (the ARM acquisition, etc.), but I think they're clever enough to also know that a massive takeover might ring more alarm bells with the antitrust regulators, so... (I did that again, didn't I? lol)

Call me a naive old-timer or fanboi or whatever you want, but Nvidia will destroy the PC ecosystem as we know it once they establish firm ground in the Windows CPU space. I guess that's water under the bridge as I drift over to Linux more and more, for, well, everything, but that's not really a safe haven either, is it!? Like, really, wherever open standards, open source, and the open tech community in general make progress, the market leaders just run away with the stagecoach full of money or gold or what have you.

Quite a tangent on my part, but I insist these things must be said. Hopefully, it continues the spirit of X86S moving forward, even if Intel isn't able to continue contributing to the cause, as even the amazingness of ARM and RISC-V needs real competition -- years and years of foreseeable competition going forward!
 
I wondered why Nvidia didn't just do this all on its own.
While they could, the other part is that Nvidia has been pretty slow at adopting Arm CPU IP, and especially slow at anything below Grace. I think the current Grace CPU is 2 architectures behind (with the latest Jetson being 2 behind that) MediaTek's current Dimensity CPUs. If all they're doing is providing the graphics cores, that allows Nvidia to be hands-off on the design, and I'm guessing the only software obligation would be graphics drivers.

While I think this is likely of minimal impact, it also maximizes profit when it comes to Nvidia's wafer buys, since it won't be their wafers making this SoC.
 
While they could, the other part is that Nvidia has been pretty slow at adopting Arm CPU IP,
Er... what? AGX Thor is allegedly based on Neoverse V3AE (Automotive/Embedded version of V3), which is the server equivalent of Cortex-X925. That's their latest and greatest P-core, and what's featured in GB10. The X925 was only announced 11 months ago. Since Arm is on an annual cadence, the next one should be announced towards the end of this month.

but especially slow at anything below Grace. I think the current Grace CPU is 2 architectures behind
Grace used ARM's latest server V-core that was available at the time, which was V2. It was supposed to launch in 2022, but got delayed. The successor to Grace, Vera, was announced last year to be using an in-house designed microarchitecture. Very exciting!

(with the latest Jetson being 2 behind that)
I think Jetson is really their lower-end embedded systems series, not the SoCs themselves. Thor will eventually make its way to the Jetson line, but it's launching in their Drive series.

MediaTek's current Dimensity CPUs. If all they're doing is providing the graphics cores, that allows Nvidia to be hands-off on the design, and I'm guessing the only software obligation would be graphics drivers.

While I think this is likely of minimal impact, it also maximizes profit when it comes to Nvidia's wafer buys, since it won't be their wafers making this SoC.
Yeah, that's about the only sense I see in it. Nvidia is focusing on the high-value, low-effort aspect. That also lets it keep its hand in the edge inferencing market. You forgot about that part - they're not just providing graphics IP, but also the NVDLA engines, which are responsible for most of the TOPS in the GB10.
 
Hopefully, it continues the spirit of X86S moving forward, even if Intel isn't able to continue contributing to the cause
APX is still going to happen. X86S was mainly just a bid to simplify the silicon and verification, but the real improvements in the cards are from APX and the return of AVX-512 (in the guise of AVX10.2, which will now be 512-bit everywhere).
 
Er... what? AGX Thor is allegedly based on Neoverse V3AE (Automotive/Embedded version of V3), which is the server equivalent of Cortex-X925. That's their latest and greatest P-core, and what's featured in GB10. The X925 was only announced 11 months ago. Since Arm is on an annual cadence, the next one should be announced towards the end of this month.
Did this actually get released? I thought it was supposed to be out some time later this year.

I hadn't seen that they were leading productization of V3AE, which is quite a shift, but it makes sense given the IP design.
I think Jetson is really their lower-end embedded systems series, not the SoCs themselves. Thor will eventually make its way to the Jetson line, but it's launching in their Drive series.
Jetson AGX has been a thing as long as Drive AGX has, as far as I'm aware, and each Jetson generation has shared CPU IP. I'd be surprised if this one were any different, but who knows what their current strategy is.
You forgot about that part - they're not just providing graphics IP, but also the NVDLA engines, which are responsible for most of the TOPS in the GB10.
I'm curious how similar the client SoC will be to the GB10. I certainly assume the CPU cores will be the same.
 
We don't want an SoC made by MediaTek....

We want a powerful Nvidia ARM CPU and compliant ATX motherboards to build non-x86 gaming PCs!
 
Did this actually get released? I thought it was supposed to be out some time later this year.
Vehicles using it have been demo'd, so AGX Thor is clearly out there and in use by their partners.

Here's someone claiming a talk at this year's GTC included a statement that Jetson Thor was slated for a June release timeframe:

I'm curious how similar the client SoC will be to the GB10. I certainly assume the CPU cores will be the same.
A laptop SoC without an NPU would be considered non-viable for anything more than the most entry-level tier in 2025. Any SoC from MediaTek and Nvidia will include NVDLA cores.
 
A laptop SoC without an NPU would be considered non-viable for anything more than the most entry-level tier in 2025. Any SoC from MediaTek and Nvidia will include NVDLA cores.
I think that will depend entirely on die size, because MediaTek already has an NPU that's better than anything in laptop SoCs. I'm fairly confident that, when it comes to a client part, they'll use whatever minimizes die size.
 
APX is still going to happen. X86S was mainly just a bid to simplify the silicon and verification, but the real improvements in the cards are from APX and the return of AVX-512 (in the guise of AVX10.2, which will now be 512-bit everywhere).
New forward-looking instructions for x86 are great and all, but there's still a lot of technical debt that needs to be addressed at some point; I believe more perf would be gained by netting back those transistors for purposes like this (APX) and other general CPU functions.
 
New forward-looking instructions for x86 are great and all, but there's still a lot of technical debt that needs to be addressed at some point; I believe more perf would be gained by netting back those transistors
It's a good question and one I can't answer. I think the abstraction layer provided by translation to micro-ops should probably enable them to optimize for the normal cases encountered in 64-bit operating systems and runtimes. The legacy stuff they wanted to drop with X86S has probably already been badly disadvantaged and could involve heavy amounts of microcode.
 
I wondered why Nvidia didn't just do this all on its own. However, I guess if Mediatek is going to shoulder most of the Windows support burden, then that would make sense.
Good question.

I guess it's mostly because no laptop would work without all those other IP blocks that aren't CPU, GPU, or media engine. MediaTek has everything to round out a full offering.

Perhaps most importantly, Wi-Fi and mobile networks. You can't get them from Qualcomm when you're going after Qualcomm, Intel sold its modem business to Apple, who botched it, and pretty near the only other source is MediaTek.

And then MediaTek very much wants to eat Qualcomm's cake: they're really hungry to succeed in the consumer market, they speak the same language, and they aren't a direct threat to Nvidia. I guess that's enough for now.

And Nvidia could always buy them later, should things turn out uncomfortable.

Not duplicating work against an adversary when someone else can do it for you sounds very much like Sun Tzu to me.
 
It's a good question and one I can't answer. I think the abstraction layer provided by translation to micro-ops should probably enable them to optimize for the normal cases encountered in 64-bit operating systems and runtimes. The legacy stuff they wanted to drop with X86S has probably already been badly disadvantaged and could involve heavy amounts of microcode.
I always find those discussions about "technical debt" around ARM vs x86 a bit funny.

It's not as if ARM was a clean-sheet design done yesterday; it's barely seven years younger (1985 vs. 1978). And it's got plenty of ballast of its own, even if it may be less fixated on backward compatibility.

Yes, one might argue that the 8086 was actually the 3rd rebrew after the 4004, 8008, and 8080, but there have been ARM Thumb "atavistic retrogrades" that IMHO rain a bit on ARM's 'clean sheet' architecture parade -- not quite to 4004 levels, but somewhere around the 8080, perhaps without the BCD support.

I'm really waiting for those RISC-V guys to stick it to ARM, but an off-the-shelf superscalar CPU design just isn't that much about the ISA you choose; it's just plenty hard to do on a limited budget.
 
It's not as if ARM was a clean-sheet design done yesterday; it's barely seven years younger (1985 vs. 1978).
You really should be comparing x86-64 vs. AArch64, which is 1999 vs. 2011. I also think AMD wasn't as ambitious with 64-bit x86, partly because they wanted to make it easier to support all the legacy x86 instructions. Going to 64-bit, the only real changes they made were in register width and number.

By contrast, when ARM went to 64-bit, they did that and more. They got rid of predication of non-branches, added a 3rd operand to most instructions, which is something Intel's APX only now contemplates. AArch64 also got rid of some instructions.

So, to me, AArch64 looks pretty close to a clean sheet redesign, while x86-64 most certainly is not.
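To make the operand-count point concrete, here's a minimal C sketch (the function name is made up for illustration). The assembly in the comments shows the typical shape of the codegen for each ISA; it's illustrative, not the guaranteed output of any particular compiler:

```c
#include <stdint.h>

/* Compute c = a + b where both a and b must stay live afterwards.
 *
 * Classic x86-64 ADD is destructive (dst = dst + src), so preserving
 * `a` typically costs an extra register-to-register move:
 *     mov  rcx, rdi      ; copy a to a scratch register
 *     add  rcx, rsi      ; rcx = a + b, rdi/rsi preserved
 *
 * AArch64 encodes a separate destination directly, in one instruction
 * (the same non-destructive form APX's "new data destination" adds to x86):
 *     add  x2, x0, x1    ; x2 = a + b, x0/x1 untouched
 */
int64_t add_keep_sources(int64_t a, int64_t b) {
    return a + b;
}
```

The C is trivial on purpose: the difference only shows up in the generated machine code, where the two-operand ISA spends an extra instruction (and decode/rename slot) whenever a source must outlive the operation.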

but there have been ARM Thumb "atavistic retrogrades" that IMHO rain a bit on ARM's 'clean sheet' architecture parade,
Only if they still supported it, which the most recent cores don't.

In fact, now that they've dropped all the legacy AArch32 stuff, ARM's new cores don't even need a mOP cache! Imagine that - even if x86S happened and they could somehow also drop a bunch of other 32-bit legacy, there's no way a modern x86 core could be competitive without their micro-op caches!

not quite to 4004 levels, but somewhat 8080, perhaps without the BCD support.
Pfft. Forget about BCD, let's talk about x87!
 