News: Nvidia Client Arm CPUs Are Not an Immediate Threat, Claims Intel CEO

It seems very strange to me that some people think Intel could never figure out how to create, implement, and then retail-package an ARM processor and compete in a market with other ARM processors. Or RISC-V (etc.), for that matter.

I have confidence that Intel could switch gears and do this.
That's assuming Nvidia will license the new chips out. If they refuse and Intel fights back, it might land Nvidia in the anticompetitive or monopoly camp and force Intel to create its own iteration (like the low-performing Atom line).
 
Question: would a legacy-stripped x86 CPU not be as efficient as ARM? I think it would be smart for AMD and Intel to partner on this one and offer x86 64-bit-only CPUs, where the legacy stuff would be emulated.

I know Intel has hinted at this, but if a joint effort cannot be done, maybe it is something they could try to push in parallel on their own.
 
Your claim was that Nvidia doesn't want low-margin products. Why does it matter whether they're used in Chromebooks, consoles, or robots?
They don't want low-margin products, and my point in asking about client devices is that that's where low-margin products tend to lie. Every Tegra after the X1 has been much higher margin, and they've not appeared in any client devices. The price differential on the Jetsons (the only device regular folks can directly buy them in, other than a car) should give you a clue as to the margin shift here.

Cheapest example: $369 gets you a Tegra SoC with 40% of a 3050, six A78 cores, and 8GB of LPDDR5 (128-bit) in module form, whereas board form is $499.

Long story short: what evidence do you have that Nvidia is still interested in high-volume, low-margin parts?
 
In a way, he is taking Nvidia seriously: he's throwing shade to make people look away. In fact, the proper response if Intel didn't take them seriously would be "Good luck to them." The worst response would be "they are competition." The latter scares investors.
You make a good point, though. Investors probably expect a serious answer, and being too glib about a situation warranting a more serious answer might be viewed negatively.

However, the real place to look for Intel's true feelings on the news will be in their SEC disclosures, which are required to detail the foreseeable risks to the business.
 
In fact, Intel does have its own ARM-based designs. But they're so heavily invested in the x64 ecosystem that they don't want to be a "me too" in ARM.
Uh, not custom cores, though. You mean just the embedded ARM cores in the Altera FPGAs and perhaps Habana's AI accelerators? All are using off-the-shelf IP from ARM - no custom cores, as far as I know.
 
Question: would a legacy-stripped x86 CPU not be as efficient as ARM?
It's unclear what "legacy-stripped" even means. You can't do a whole lot to truly clean up the x86 ISA without scrapping it and starting over.

...not that Intel isn't trying to do what it can, via X86S and APX.
 
Every Tegra after the X1 has been much higher margin, and they've not appeared in any client devices.
How do you know what their margins are? Most get built into devices, and we can't know what price their manufacturers negotiate with Nvidia.

The price differential on the Jetsons (the only device regular folks can directly buy them in other than a car) should give you a clue as to the margin shift here.
Jetson is basically like a developer kit. Once you've prototyped a design, anyone making devices at scale would probably go on to design a custom board. Even if they do build around the module spec, you can negotiate the price of anything if the volume is big enough.

Long story short: what evidence do you have that Nvidia is still interested in high-volume, low-margin parts?
No more evidence than you have that they're not interested in high-volume, low-margin parts.
 
I don't know if I would have any use for it, but the Nvidia ARM situation is what interests me most.
I imagine it wouldn't be something home users could use.
What we really need is a Windows replacement. And don't tell me Linux.
 
I imagine it wouldn't be something home users could use.
Why not?

What we really need is a Windows replacement. And don't tell me Linux.
You mean a replacement for the Windows OS, or a replacement for the hardware which can run it? Regarding the former, ChromeOS and macOS are the only serious alternatives that are truly "consumer-friendly". As for the latter, Qualcomm's exclusivity with Microsoft ends in 2024, hence the news.
 
Question: would a legacy-stripped x86 CPU not be as efficient as ARM? I think it would be smart for AMD and Intel to partner on this one and offer x86 64-bit-only CPUs, where the legacy stuff would be emulated.

I know Intel has hinted at this, but if a joint effort cannot be done, maybe it is something they could try to push in parallel on their own.
That's what Itanium was: a 64-bit-only, brand spanking new ISA. Nobody cared and nobody wanted to adopt it, and I think Intel, and everybody else, learned from that.
Intel is not going to screw with their cores unless there is a huge customer that is going to buy humongous numbers of units.
 
Question: would a legacy-stripped x86 CPU not be as efficient as ARM? I think it would be smart for AMD and Intel to partner on this one and offer x86 64-bit-only CPUs, where the legacy stuff would be emulated.

I know Intel has hinted at this, but if a joint effort cannot be done, maybe it is something they could try to push in parallel on their own.

A very large percentage of transistors is dedicated to backward compatibility, namely addressing modes. And optimizing the decoder, dispatcher, and scheduler for CISC variable-length instructions is much more difficult.

The irony is that once decoding is done, Intel breaks CISC instructions into very ARM-like simple micro-ops. It's been this way since the first Pentium 4 with HT.


Think of it like cars: how much harder is it to get the extra 10 MPH when you're already doing 190 MPH? It takes a lot more internal complexity and power to overcome that last bit.
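The variable-length decoding point above can be sketched in a few lines. This is a toy illustration with an invented encoding (nothing like real x86 opcode formats): with variable-length instructions, you can't know where instruction N+1 starts until you've at least partially decoded instruction N, so finding boundaries is inherently serial; with a fixed-width ISA, every boundary is known up front.

```python
# Hypothetical toy ISA for illustration only: the first byte's high
# nibble encodes the instruction's total length (1..16 bytes).
def toy_length(first_byte: int) -> int:
    return (first_byte >> 4) + 1

def variable_boundaries(stream: bytes) -> list[int]:
    """Serial scan: each boundary depends on decoding the previous one."""
    offsets, pc = [], 0
    while pc < len(stream):
        offsets.append(pc)
        pc += toy_length(stream[pc])
    return offsets

def fixed_boundaries(stream: bytes, width: int = 4) -> list[int]:
    """Fixed-width (ARM-like): all boundaries known without decoding."""
    return list(range(0, len(stream), width))

stream = bytes([0x10, 0xAA, 0x00, 0x20, 0xBB, 0xCC])
print(variable_boundaries(stream))  # → [0, 2, 3]
print(fixed_boundaries(stream))     # → [0, 4]
```

Real wide x86 decoders work around this serial dependency with length-prediction and marker bits in the instruction cache, which is exactly the kind of extra complexity the post is describing.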
 
That's what Itanium was: a 64-bit-only, brand spanking new ISA. Nobody cared and nobody wanted to adopt it, and I think Intel, and everybody else, learned from that.
Intel is not going to screw with their cores unless there is a huge customer that is going to buy humongous numbers of units.
Itanium's fault was not only a lacking ecosystem; it was CISC on steroids. It shifted the responsibility of optimization from the CPU core to the compiler due to the complexity. And as it turned out, software optimization on Itanium was a pretty hard thing to do. Itanium's promises couldn't be realized.

Emulating x86 and x64 is also a nightmare. Linus is a genius, but even his brains couldn't solve the optimization issues while he worked at Transmeta getting its VLIW cores to execute x86. Nvidia will face similar issues if they aim for the everyday user, so this is not their strategy.

That said, Microsoft's Visual Studio is the best IDE on the planet, bar none. And .NET Core and the Python tooling are simply amazing, being able to flip between architectures and redeploy for Docker distros on different platforms. This works wonders for, say, web servers; not so much for Windows desktop users and gamers. And I believe Nvidia is aiming for the web server market, as it's incredibly lucrative.

Everybody else is plebeians and tuppence to Nvidia now. They couldn't care less. And given their stranglehold on the market, I think it's time antitrust actions were considered: break up the AI, gaming, and server businesses.
 
It shifted the responsibility of optimization from the CPU core to the compiler due to the complexity. And as it turned out, software optimization on Itanium was a pretty hard thing to do. Itanium's promises couldn't be realized.
That's partly a misconception. Nothing about the ISA prevented speculative execution or even out-of-order execution; however, Intel's plan was to rely mostly on compile-time optimization - at least, initially. If Intel had put anywhere close to as much effort into optimizing IA64 as x86, they could've gotten it to run a lot faster. They just didn't care, because AMD came along and extended x86 to 64-bit, so they could no longer walk away from that market.

Emulating x86 and x64 is also a nightmare. Linus is a genius, but even his brains couldn't solve the optimization issues while he worked at Transmeta getting its VLIW cores to execute x86. Nvidia will face similar issues if they aim for the everyday user, so this is not their strategy.
It's funny you say that, because Nvidia's Denver cores were natively VLIW and merely emulated the ARM ISA. There were rumors (maybe more) that they also planned to emulate x86 on them, but couldn't get an x86 license. And yes, Denver was actually designed for Tegra SoCs in phones, tablets, and laptops - client SoCs for everyday users. Google actually used it for their first Pixel-branded laptop.

Finally, you're also wrong to put the onus on Nvidia to worry about emulation. Windows/ARM already emulates x86, quite successfully. The only thing Nvidia has to do is design conformant ARM cores, and Microsoft takes care of the rest. All of the x86 programs they used for benchmarking the ThinkPad X13s were run under Windows' emulation.


And yes, I know those benchmarks aren't great, but the point is that the emulation works. Also, that CPU uses old ARM X1 and A78 cores, so it's not really comparable to what we'd expect to be on the market in 2025.
 
That's partly a misconception. Nothing about the ISA prevented speculative execution or even out-of-order execution, however Intel's plan was to rely mostly on compile-time optimization - at least, initially. If Intel had put anywhere close to as much effort into optimizing IA64 as x86, they could've gotten it to run a lot faster. They just didn't care, because AMD came along and extended x86 to 64-bit, so they could no longer walk away from that market.
Absolutely this, and another chunk of Intel's problem is that they were taking over the server market with x86. If that hadn't been happening, Itanium might have stood a chance, but they couldn't afford to blow their own momentum. It's one of those things where I wonder what could have been if Itanium had launched when it was originally supposed to (if I'm remembering right, it was around 3-4 years late).
 
Itanium's fault was not only a lacking ecosystem; it was CISC on steroids. It shifted the responsibility of optimization from the CPU core to the compiler due to the complexity. And as it turned out, software optimization on Itanium was a pretty hard thing to do. Itanium's promises couldn't be realized.

Emulating x86 and x64 is also a nightmare. Linus is a genius, but even his brains couldn't solve the optimization issues while he worked at Transmeta getting its VLIW cores to execute x86. Nvidia will face similar issues if they aim for the everyday user, so this is not their strategy.

That said, Microsoft's Visual Studio is the best IDE on the planet, bar none. And .NET Core and the Python tooling are simply amazing, being able to flip between architectures and redeploy for Docker distros on different platforms. This works wonders for, say, web servers; not so much for Windows desktop users and gamers. And I believe Nvidia is aiming for the web server market, as it's incredibly lucrative.

Everybody else is plebeians and tuppence to Nvidia now. They couldn't care less. And given their stranglehold on the market, I think it's time antitrust actions were considered: break up the AI, gaming, and server businesses.
I think an Nvidia/ARM machine will be a .NET-only PC, similar to the ARM laptops Microsoft is selling now. I can foresee a lot of new app development happening on .NET Core rather than the old .NET Framework, so CPU compatibility should be less of a problem for much mainstream software. The problem is that a lot of legacy software, games, and engineering applications will likely still need x86, so I don't see those chips taking off.

X86S would solve the above problem by running everything that currently runs on x86-64, maybe emulating the old stuff. I would sort of agree with most of the comments, though, that we're likely talking about a 5% or so benefit, for a potential loss of perfect backwards compatibility.
 
I think an Nvidia/ARM machine will be a .NET-only PC, similar to the ARM laptops Microsoft is selling now. I can foresee a lot of new app development happening on .NET Core rather than the old .NET Framework, so CPU compatibility should be less of a problem for much mainstream software. The problem is that a lot of legacy software, games, and engineering applications will likely still need x86, so I don't see those chips taking off.

X86S would solve the above problem by running everything that currently runs on x86-64, maybe emulating the old stuff. I would sort of agree with most of the comments, though, that we're likely talking about a 5% or so benefit, for a potential loss of perfect backwards compatibility.
Intel seems to think that x86-64 and more modern versions of x86 have been around long enough that the number of people still running code that hasn't been migrated to something emulatable is small enough that they can trim some things out of the ISA. And frankly, they are probably right. There are almost certainly very few programs from the pre-Pentium era still around that haven't already been forced into emulation.

And if they can reclaim the transistor budget, which is probably the most important part, and pick up some performance gains, it seems like a good deal for everyone, especially because the changes at a structural level would likely enable better energy efficiency.
 
Intel seems to think that x86-64 and more modern versions of x86 have been around long enough that the number of people still running code that hasn't been migrated to something emulatable is small enough that they can trim some things out of the ISA. And frankly, they are probably right. There are almost certainly very few programs from the pre-Pentium era still around that haven't already been forced into emulation.

And if they can reclaim the transistor budget, which is probably the most important part, and pick up some performance gains, it seems like a good deal for everyone, especially because the changes at a structural level would likely enable better energy efficiency.
I wonder if The 8-Bit Guy knows how much legacy stuff is still around. He restores old systems for clients.

At my last job we had FORTRAN code from 1967 still being compiled. It was outsourced to a non-friendly country, which is possibly one of the dumbest moves they could have made.

I know some COBOL and Prolog systems for banks are still on old systems. Heck, even the Pentagon only got off 8" floppy drives about 5-6 years ago.
 
There is no x86 license in the first place ...
There most certainly is a license for x86, and Intel controls the keys entirely, which is why there are only three out there right now. They also effectively have a right of refusal over the use of said licenses, which is why there aren't more. This is why the Chinese attempts at x86 CPUs have all licensed the chip design from another company (AMD and Via both did this). The status of Via's license is unknown due to the dissolution of Centaur (where they got it), but the assumption is that they still have it. SiS had one from an acquisition, which they sold off, but the company they sold it to only releases very low-power 32-bit CPUs. On top of Intel controlling x86, AMD controls the 64-bit instructions used in x86-64, so anyone wanting to make a compatible chip has to get AMD onboard as well.
 
There most certainly is a license for x86
No, there most certainly is not!

There are licenses for designs of cores, be they x86, ARM, or whatever else.
And Nvidia already owns designs of x86 cores it could use; the thing is, they are so old that they are useless, and neither Intel nor AMD is stupid enough to give anybody else access to the current designs that are actually useful.
  • ALi (x86 products went to Nvidia through the ULi sale)
  • Nvidia (M6117C - 386SX embedded microcontroller)
Also open source....

Open source x86 cores:

  • ao486, an open source FPGA implementation of the 486SX (currently targets the Terasic Altera DE2-115)
  • S80186, an open source 80186-compatible FPGA implementation
  • Zet, an open source 80186-compatible FPGA implementation targeting the Xilinx ML403 and Altera DE1
 
And nvidia already owns designs of x86 cores they could use, the thing is they are so old that they are useless...
They bought embedded chip designs, not a license; you might want to read your source...
Also open source....
Those are FPGA implementations, not actual chips...
No, there most certainly is not!
Yeah... about that... I'm going to go lazy, since apparently we're using Wikipedia for sourcing, so...
Nevertheless, of those, only Intel, AMD, VIA Technologies, and DM&P Electronics hold x86 architectural licenses, and from these, only the first two actively produce modern 64-bit designs, leading to what has been called a "duopoly" of Intel and AMD in x86 processors.

Intel has an architectural license just like Arm does, but they absolutely will not issue it anymore.
 
They bought embedded chip designs not a license you might want to read your source...
Yes, they bought x86 CPU core designs and are legally able to produce them; that's what I said.
Maybe there is a language barrier or something, but what do you think an x86 license is, or would be?
FPGA implementations not actual chips...
They still would not be able to do that if they were stepping on actual licenses.