News Nvidia's Arm Acquisition Expected to Fail, Nvidia Would Lose $1.25B Deposit

I don't see why nVidia doesn't create their own ISA for CPUs, or just, you know, continue using ARM or RISC-V. They can't fool the people scrutinizing their real intentions with a magnifying glass, so they may as well admit they've bitten off more than they can chew.

Plus, whatever they develop, they still have a lot of money to just shove it at marketing and fool people into using it. Like CUDA. Android can run on pretty much any underlying ISA as long as they make the runtime work on it, so that's not too hard to do; it just takes time. So, I'm not sure why nVidia is trying so hard to grab ARM, other than to get an unfair advantage over the competition.

Regards.
 
So, I'm not sure why nVidia is trying so hard to grab ARM

You can steer the direction to your liking when you have the helm. Just look at CUDA: Nvidia can develop the API freely without any other company complaining that it doesn't suit their architecture. There are reasons why Apple went with its own Metal API and Intel with its oneAPI instead of doubling down on supporting OpenCL.
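To make that concrete, here's a minimal, illustrative sketch of my own (nothing beyond the standard CUDA runtime API is assumed): the __global__ qualifier, the <<<...>>> launch syntax and the cudaMalloc/cudaMemcpy calls below only compile with Nvidia's nvcc and only run on Nvidia GPUs, which is exactly the single-vendor control being described.

```cuda
// Minimal CUDA sketch: doubles every element of an array on the GPU.
// Everything vendor-specific is marked; none of it works on non-Nvidia hardware.
#include <cstdio>
#include <cuda_runtime.h>

// __global__ is a CUDA-specific qualifier: this function runs on the GPU.
__global__ void scale(float *data, float factor, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // CUDA built-in thread indices
    if (i < n) data[i] *= factor;
}

int main() {
    const int n = 1024;
    float host[n];
    for (int i = 0; i < n; ++i) host[i] = 1.0f;

    float *dev = nullptr;
    cudaMalloc(&dev, n * sizeof(float));                               // vendor runtime call
    cudaMemcpy(dev, host, n * sizeof(float), cudaMemcpyHostToDevice);  // vendor runtime call

    scale<<<(n + 255) / 256, 256>>>(dev, 2.0f, n);                     // CUDA-only launch syntax

    cudaMemcpy(host, dev, n * sizeof(float), cudaMemcpyDeviceToHost);
    cudaFree(dev);

    printf("host[0] = %f\n", host[0]);  // expect 2.000000
    return 0;
}
```

Porting that to anything else means rewriting the kernel, the launch, and the memory calls; that switching cost is the whole point of controlling the API.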
 
You can steer the direction to your liking when you have the helm. Just look at CUDA: Nvidia can develop the API freely without any other company complaining that it doesn't suit their architecture. There are reasons why Apple went with its own Metal API and Intel with its oneAPI instead of doubling down on supporting OpenCL.
Leaving your snark aside, I am pretty clear on what you mention, hence why I said I don't understand, and the only reason that comes to mind is to get an unfair advantage.

Regards.
 
The Arm acquisition is over.

They're already focusing on a new CPU development path in Israel; they're expanding there.

It was in the news a few days ago.
 
Plus, whatever they develop, they still have a lot of money to just shove it at marketing and fool people into using it.
Creating a new architecture in an attempt to secure the entire market for yourself didn't work so well for Intel when it tried exactly that 20 years ago.

I believe most of the market has gotten fed up with single-vendor proprietary standards. Nvidia spent a small fortune attempting to force people to pay a $100-200 G-Sync tax for variable refresh and got crushed when VESA introduced Adaptive-Sync to the DisplayPort spec, allowing AMD and Intel to provide variable refresh for practically free on dozens of monitors within two years, forcing Nvidia to adopt it as well.

While it may still be possible to do your own thing, succeeding as a single-vendor shop where viable cross-vendor options already exist is getting tougher. Even Intel decided to develop most of its Xe image enhancements to be compatible across vendors to help drive adoption instead of pushing its own exclusive hardware-locked thing.
 
I'm hoping this merger fails hard. In the words of Linus Torvalds, "F*** you, Nvidia." Hopefully everyone sees through the crap they've been trying to pull; a basic search into their history will show why they shouldn't be allowed to acquire ARM. Frankly, their attempted merger has just pushed more people toward developing RISC-V chips, since Nvidia has a history of not liking to share its toys or play nice with others.
 
Invalid is quite correct. NVIDIA has a big uphill battle if they try to force people to adopt a new chip architecture.

Arm is QUITE efficient. Developing NEW IP that is just as efficient will be difficult without stomping on patents. Atom was a huge money-losing venture for Intel.
 
Creating a new architecture in an attempt to secure the entire market for yourself didn't work so well for Intel when it tried exactly that 20 years ago.

I believe most of the market has gotten fed up with single-vendor proprietary standards. Nvidia spent a small fortune attempting to force people to pay a $100-200 G-Sync tax for variable refresh and got crushed when VESA introduced Adaptive-Sync to the DisplayPort spec, allowing AMD and Intel to provide variable refresh for practically free on dozens of monitors within two years, forcing Nvidia to adopt it as well.

While it may still be possible to do your own thing, succeeding as a single-vendor shop where viable cross-vendor options already exist is getting tougher. Even Intel decided to develop most of its Xe image enhancements to be compatible across vendors to help drive adoption instead of pushing its own exclusive hardware-locked thing.
I don't completely disagree, but there's one important difference here: Intel tried to hook the server market with the promise of "cross-x86" compatibility. That was their biggest mistake there.

To elaborate a bit more: Intel only does x86 (really) and has no experience or real incentive to move away from it, no matter what they wanted you to believe back then. nVidia, on the other hand, provides solutions which are ISA/CPU independent, as long as the platform implements whatever API the GPU uses over PCIe and nVidia, obviously, provides the drivers. So, unlike Intel, which only provided the CPU and had to make everything work by itself while HP tried to sell the stuff, nVidia can provide vertical elements Intel could not back then. That's why they have their A100s working with either ARM or x86 (IIRC). And remember CUDA: like it or not, nVidia does have a strong hold there. Imagine they get ARM and suddenly their CUDA-capable cards only work with ARM CPUs because they drop x86 support. What would all the people using CUDA do then? Move all their software that's been working for years to ARM, along with all the underlying infrastructure and dev kits associated with it? What if nVidia can bundle everything for you there for a "modest" fee? There are several other very plausible scenarios where nVidia would just squeeze you until you can't breathe. This last bit is also why I think people blindly relying on CUDA are, well, in danger at all times.

All in all, the cost of acquiring ARM must certainly be cheaper than developing everything from scratch, and going from scratch would not give them any unfair advantage, so I'm guessing they won't bother anyway, heh.

Regards.
 
To elaborate a bit more: Intel only does x86 (really) and has no experience or real incentive to move away from it, no matter what they wanted you to believe back then.
Intel wanted the mainstream on IA before 64-bit became a necessity on the desktop and was pushing hard to make that happen, at one point selling IA development kits at absurdly low prices to help speed the process along. Then AMD launched the 64-bit Athlon/Opteron lineup, the bulk of Intel's prospective IA clients flocked to that, and Intel was forced to launch its own x86-64 chips, effectively dooming IA.

x86 support on Itanium was only intended as a transition convenience, so clients wouldn't need a separate machine to run legacy x86 stuff on until they got around to porting their code; it was never about performance. Intel did try to scale IA's x86 performance in later CPUs due to intense criticism. By that point, though, IA was much slower than Opterons even when running native code, and it went downhill from there, with only the "big-tin" clients who needed IA's more robust RAS credentials sticking around.
 
Leaving your snark aside, I am pretty clear on what you mention, hence why I said I don't understand, and the only reason that comes to mind is to get an unfair advantage.

Regards.

Because Nvidia is also at a disadvantage in x86. AMD and Intel can create specific tech for their x86 CPUs that works in tandem with their own GPUs but won't work with anyone else's GPUs, and things like this are very crucial in the HPC space. It's one of the reasons Nvidia spent almost $7 billion to acquire Mellanox: so they wouldn't be at a disadvantage when they wanted a faster interconnect for GPU-to-x86-CPU communication (Intel has something like Omni-Path, while AMD has Infinity Fabric). That's probably why the Frontier supercomputer uses a complete AMD system instead of the usual pairing of whatever CPU with Nvidia GPUs. Intel and AMD could make pairing their CPUs with Nvidia GPUs make less sense going forward, so Nvidia needs to secure a CPU that can compete with AMD/Intel x86 processors ASAP. If they can own ARM, they can fund ARM in that direction faster. Like it or not, the server/HPC segment is still dominated by x86. Nvidia, being unable to create its own x86 CPU that caters specifically to its GPU designs, is already at a disadvantage.
 
Because Nvidia is also at a disadvantage in x86. AMD and Intel can create specific tech for their x86 CPUs that works in tandem with their own GPUs but won't work with anyone else's GPUs, and things like this are very crucial in the HPC space. It's one of the reasons Nvidia spent almost $7 billion to acquire Mellanox: so they wouldn't be at a disadvantage when they wanted a faster interconnect for GPU-to-x86-CPU communication (Intel has something like Omni-Path, while AMD has Infinity Fabric). That's probably why the Frontier supercomputer uses a complete AMD system instead of the usual pairing of whatever CPU with Nvidia GPUs. Intel and AMD could make pairing their CPUs with Nvidia GPUs make less sense going forward, so Nvidia needs to secure a CPU that can compete with AMD/Intel x86 processors ASAP. If they can own ARM, they can fund ARM in that direction faster. Like it or not, the server/HPC segment is still dominated by x86. Nvidia, being unable to create its own x86 CPU that caters specifically to its GPU designs, is already at a disadvantage.
But you do realize that ARM licenses all their tech, right? It's not like nVidia is suddenly cut off from ARM's IP and can't develop whatever they want unless they buy them. Same with some AMD and Intel tech; they can perfectly well enter into cross-licensing deals. Although nVidia has burned bridges before, they should have the incentive to do so; or so you'd think, if the massive ego they hold doesn't get in the way. Nothing prevents nVidia from:
  1. Developing their own IPs for whatever they want, as they have so far.
  2. Buying companies whose IP isn't used by many other competitors in their critical designs and/or that have an influential market share.
  3. Using consortium standards.
  4. Just (cross-)licensing what they need.
I don't know why you want to paint nVidia as the "poor guy" in all this.

EDIT:
Intel wanted the mainstream on IA before 64-bit became a necessity on the desktop and was pushing hard to make that happen, at one point selling IA development kits at absurdly low prices to help speed the process along. Then AMD launched the 64-bit Athlon/Opteron lineup, the bulk of Intel's prospective IA clients flocked to that, and Intel was forced to launch its own x86-64 chips, effectively dooming IA.

x86 support on Itanium was only intended as a transition convenience, so clients wouldn't need a separate machine to run legacy x86 stuff on until they got around to porting their code; it was never about performance. Intel did try to scale IA's x86 performance in later CPUs due to intense criticism. By that point, though, IA was much slower than Opterons even when running native code, and it went downhill from there, with only the "big-tin" clients who needed IA's more robust RAS credentials sticking around.
Not to keep the hard tangent going, but that's basically what Intel could not work around: effectively migrating people off the x86 ISA, because they don't have many verticals in industries that require fully integrated machines/servers. They're now working on that, and Arc is the proper beginning of it. Back to nVidia: they do have those verticals, which don't really require owning ARM. Not in a significant way, I'd say.

Regards.
 
But you do realize that ARM licenses all their tech, right? It's not like nVidia is suddenly cut off from ARM's IP and can't develop whatever they want unless they buy them.

What Nvidia wants is for ARM to focus more on the server segment. ARM might do well with their licensing business, but not so well that they can aggressively attack x86; they simply don't have that kind of money to make such rapid progress. One of the reasons SoftBank put ARM up for sale was that the company did not generate much profit, and ARM also disagreed with SoftBank's initial suggestion to raise the licensing fees.

Same with some AMD and Intel tech; they can perfectly well enter into cross-licensing deals. Although nVidia has burned bridges before,

If anything, Intel is afraid of Nvidia making its own x86 CPU (Jensen himself was an AMD CPU engineer before he moved to LSI Logic and later co-founded Nvidia in 1993). The one that burned the bridge is Intel, not Nvidia. Just look at the lawsuit settlement the two agreed to in 2011. Originally it was about the 2004 cross-licensing deal, where Intel said it did not allow Nvidia to continue making Intel chipsets past a certain generation. Then the settlement came out with Nvidia "permanently barred" from the right to make x86 CPUs.


I don't know why you want to paint nVidia as the "poor guy" in all this.

I did not say Nvidia is the "poor guy" here, just stating the kind of disadvantage Nvidia has to face with x86 and how they want to secure their own future if they can have full control of ARM. Licensing and cross-licensing still come with certain limitations, and after the issues they had with Intel before, they know that owning the tech is much better, if possible, than having to license it. Remember when Nvidia was still in the mobile SoC business? Rather than licensing the modem/baseband from Qualcomm, they ended up buying Icera. Just look at what happened with Samsung's Exynos. One of the original Exynos chips did license a Qualcomm modem. Then, to force people to use Snapdragon, Qualcomm stopped licensing its modems to other SoC makers except Apple. That marked the bitter period Samsung had to go through when they could not license modems/basebands from Qualcomm, where the United States flagship models always had to use Snapdragon SoCs instead of their own Exynos.
 
What Nvidia wants is for ARM to focus more on the server segment. ARM might do well with their licensing business, but not so well that they can aggressively attack x86; they simply don't have that kind of money to make such rapid progress. One of the reasons SoftBank put ARM up for sale was that the company did not generate much profit, and ARM also disagreed with SoftBank's initial suggestion to raise the licensing fees.
That's a bad excuse, though. As I said, ARM licenses everything they produce, and they do have ISAs capable of handling servers. While I know what you mean, there's nothing preventing nVidia from just asking ARM to do it. Also, they can add whatever they want via extensions between major releases of the ISA, so...

If anything, Intel is afraid of Nvidia making its own x86 CPU (Jensen himself was an AMD CPU engineer before he moved to LSI Logic and later co-founded Nvidia in 1993). The one that burned the bridge is Intel, not Nvidia. Just look at the lawsuit settlement the two agreed to in 2011. Originally it was about the 2004 cross-licensing deal, where Intel said it did not allow Nvidia to continue making Intel chipsets past a certain generation. Then the settlement came out with Nvidia "permanently barred" from the right to make x86 CPUs.
I can't remember how those terms went exactly, but I do remember there was a clash of egos at the core of it. Do you really think nVidia asked Intel nicely, and vice versa? And I'm sure Intel doesn't want any other company out there making x86; they didn't even want AMD, but IBM forced them to.

I did not say Nvidia is the "poor guy" here, just stating the kind of disadvantage Nvidia has to face with x86 and how they want to secure their own future if they can have full control of ARM. Licensing and cross-licensing still come with certain limitations, and after the issues they had with Intel before, they know that owning the tech is much better, if possible, than having to license it. Remember when Nvidia was still in the mobile SoC business? Rather than licensing the modem/baseband from Qualcomm, they ended up buying Icera. Just look at what happened with Samsung's Exynos. One of the original Exynos chips did license a Qualcomm modem. Then, to force people to use Snapdragon, Qualcomm stopped licensing its modems to other SoC makers except Apple. That marked the bitter period Samsung had to go through when they could not license modems/basebands from Qualcomm, where the United States flagship models always had to use Snapdragon SoCs instead of their own Exynos.
It's the same disadvantage Intel and AMD have without access to some very good IP nVidia holds for HPC and GPUs in general. AMD got ATI back then, so there's that. The difference is, ATI wasn't supplying an ISA or components to any other major players, and AMD was quick to unload the licensed stuff to Qualcomm as part of the deal, since Qualcomm did raise an eyebrow back then. That is also what prompted the exit of Ruiz, for not keeping those and making low-power designs for Qualcomm and others. If you look at this, nVidia could do the same: give Apple, Qualcomm, Samsung and others the rights to just use the ISAs perpetually, or sell them what they want in terms of IP. But they won't, and we all know why. It's not even in the conversation; it never was near the table, not even on the same planet, universe or dimension to consider XD

Regards.
 
And I'm sure Intel doesn't want any other company out there making x86; they didn't even want AMD, but IBM forced them to.
That's folklore.
Back then anybody who could figure out how to make an 8088 could and did make them.
https://en.wikipedia.org/wiki/Intel_8086#Derivatives_and_clones
Compatible—and, in many cases, enhanced—versions were manufactured by Fujitsu,[22] Harris/Intersil, OKI, Siemens, Texas Instruments, NEC, Mitsubishi, and AMD. For example, the NEC V20 and NEC V30 pair were hardware-compatible with the 8088 and 8086 even though NEC made original Intel clones μPD8088D and μPD8086D respectively, but incorporated the instruction set of the 80186 along with some (but not all) of the 80186 speed enhancements, providing a drop-in capability to upgrade both instruction set and processing speed without manufacturers having to modify their designs. Such relatively simple and low-power 8086-compatible processors in CMOS are still used in embedded systems.

The electronics industry of the Soviet Union was able to replicate the 8086 through both industrial espionage and reverse engineering[citation needed]. The resulting chip, K1810VM86, was binary and pin-compatible with the 8086.
The IBM deal was for Intel to have a second way to fab their version of the CPU, so Intel went into a contract with AMD, which was basically just an agreement to fab something for the other company. AMD became to Intel what TSMC is to AMD today: just an external fab.
https://en.wikipedia.org/wiki/Advanced_Micro_Devices#Technology_exchange_agreement_with_Intel
Intel had introduced the first x86 microprocessors in 1978.[50] In 1981, IBM created its PC, and wanted Intel's x86 processors, but only under the condition that Intel also provide a second-source manufacturer for its patented x86 microprocessors.[11] Intel and AMD entered into a 10-year technology exchange agreement, first signed in October 1981[45][51] and formally executed in February 1982.[34] The terms of the agreement were that each company could acquire the right to become a second-source manufacturer of semiconductor products developed by the other; that is, each party could "earn" the right to manufacture and sell a product developed by the other, if agreed to, by exchanging the manufacturing rights to a product of equivalent technical complexity.
 
That's folklore.
Back then anybody who could figure out how to make an 8088 could and did make them.
https://en.wikipedia.org/wiki/Intel_8086#Derivatives_and_clones

The IBM deal was for Intel to have a second way to fab their version of the CPU, so Intel went into a contract with AMD, which was basically just an agreement to fab something for the other company. AMD became to Intel what TSMC is to AMD today: just an external fab.
https://en.wikipedia.org/wiki/Advanced_Micro_Devices#Technology_exchange_agreement_with_Intel
I don't read anything in those citations that denies the "IBM made Intel share x86 with AMD" claim.

Regards.
 
I don't read anything in those citations that denies the "IBM made Intel share x86 with AMD" claim.

Regards.
It couldn't be clearer.
IBM only needed Intel to provide a second-source manufacturer for Intel's patented x86 microprocessors.
There was no sharing of x86; anybody could make x86 CPUs. It was a "you can produce and sell my tech if we agree" arrangement.
The terms of the agreement were that each company could acquire the right to become a second-source manufacturer of semiconductor products developed by the other;
 
It couldn't be clearer.
IBM only needed Intel to provide a second-source manufacturer for Intel's patented x86 microprocessors.
There was no sharing of x86; anybody could make x86 CPUs. It was a "you can produce and sell my tech if we agree" arrangement.
I don't know if I'm being too obtuse, but this:

"The terms of the agreement were that each company could acquire the right to become a second-source manufacturer of semiconductor products developed by the other"

Doesn't that mean "cross licensing"? And IBM forced Intel to do it?

Am I missing something here? XD

Regards.
 
I don't know if I'm being too obtuse, but this:

"The terms of the agreement were that each company could acquire the right to become a second-source manufacturer of semiconductor products developed by the other"

Doesn't that mean "cross licensing"? And IBM forced Intel to do it?

Am I missing something here? XD

Regards.
Did TSMC get a license to make TSMC-branded Ryzen chips just because they manufacture Ryzen chips for AMD?
Producing something for someone does not transfer any licenses to you.
 
I don't know if I'm being too obtuse, but this:

"The terms of the agreement were that each company could acquire the right to become a second-source manufacturer of semiconductor products developed by the other"

Doesn't that mean "cross licensing"? And IBM forced Intel to do it?

Am I missing something here? XD

Regards.
There was a time when Intel couldn't meet demand and contracted AMD for extra fab capacity, just like AMD is contracting TSMC now, with TSMC being just a manufacturer, not a licensee.

IIRC, where the licensing battle started is with the DoD or NSA having a rule banning single-vendor critical components, which meant Intel had to license x86 to at least one other vendor if it wanted x86 to be considered for critical infrastructure.
 
Did TSMC get a license to make TSMC-branded Ryzen chips just because they manufacture Ryzen chips for AMD?
Producing something for someone does not transfer any licenses to you.
There was a time when Intel couldn't meet demand and contracted AMD for extra fab capacity, just like AMD is contracting TSMC now, with TSMC being just a manufacturer, not a licensee.

IIRC, where the licensing battle started is with the DoD or NSA having a rule banning single-vendor critical components, which meant Intel had to license x86 to at least one other vendor if it wanted x86 to be considered for critical infrastructure.
Well, the deal was not only to fab; it was something like cross-licensing: "Intel and AMD entered into a 10-year technology exchange agreement, first signed in October 1981". That is not just a "right to manufacture" but an "exchange of technology". If that is not cross-licensing, I don't know what is.

And this: "However, in the event of a bankruptcy or takeover of AMD, the cross-licensing agreement would be effectively cancelled". Does that sound familiar?

Also, during the years that agreement was valid, they developed new chips that were not terrible using the licensed technology.

So, again, I don't see how it's not IBM that pushed Intel to allow AMD to make x86 chips.

EDIT: Forgot the link... https://en.wikipedia.org/wiki/Advanced_Micro_Devices#IBM_PC_and_the_x86_architecture

Regards.
 
Well, the deal was not only to fab; it was something like cross-licensing: "Intel and AMD entered into a 10-year technology exchange agreement, first signed in October 1981". That is not just a "right to manufacture" but an "exchange of technology". If that is not cross-licensing, I don't know what is.

And this: "However, in the event of a bankruptcy or takeover of AMD, the cross-licensing agreement would be effectively cancelled". Does that sound familiar?

Also, during the years that agreement was valid, they developed new chips that were not terrible using the licensed technology.

So, again, I don't see how it's not IBM that pushed Intel to allow AMD to make x86 chips.

EDIT: Forgot the link... https://en.wikipedia.org/wiki/Advanced_Micro_Devices#IBM_PC_and_the_x86_architecture

Regards.
How did you manage to read just that one part without reading the rest?
Intel and AMD had a cross-license deal since 1976, long before the IBM deal became a thing in 1982; it got renewed due to that deal, but it wasn't anything new.

In fact, that portion you quoted was a main issue when AMD had to sell their fabs.

The 1982 agreement also extended the 1976 AMD–Intel cross-licensing agreement through 1995.[33][34] The agreement included the right to invoke arbitration of disagreements, and after five years the right of either party to end the agreement with one year's notice.[33] The main result of the 1982 agreement was that AMD became a second-source manufacturer of Intel's x86 microprocessors and related chips, and Intel provided AMD with database tapes for its 8086, 80186, and 80286 chips.[34] However, in the event of a bankruptcy or takeover of AMD, the cross-licensing agreement would be effectively cancelled.
 
How did you manage to read just that one part without reading the rest?
Intel and AMD had a cross-license deal since 1976, long before the IBM deal became a thing in 1982; it got renewed due to that deal, but it wasn't anything new.

In fact, that portion you quoted was a main issue when AMD had to sell their fabs.
Because the first x86 CPU came in 1978, and Intel got away with not letting AMD license it. Then IBM forced them to.

That's all there is to it.

I'll stop here; this seems unproductive xD

Regards.
 
Because the first x86 CPU came in 1978, and Intel got away with not letting AMD license it.
Only, they did.

By the time Intel released their first 8-bit microprocessor (the 8008) in 1974, AMD was a public company with a portfolio of over 200 products -- a quarter of which were their own designs, including RAM chips, logic counters, and bit shifters. The following year saw a raft of new models: their own Am2900 integrated circuit (IC) family and the 2 MHz 8-bit Am9080, a reverse-engineered copy of Intel's successor to the 8008. The former was a collection of components that are now fully integrated in CPUs and GPUs, but 35 years ago, arithmetic logic units and memory controllers were all separate chips.

The blatant plagiarism of Intel's design might seem to be somewhat shocking by today's standards, but it was par for the course in the fledgling days of microchips. The CPU clone was eventually renamed as the 8080A, after AMD and Intel signed a cross-licensing agreement in 1976. You'd imagine this would cost a pretty penny or two, but it was just $325,000 ($1.65 million in today's dollars).

The deal allowed AMD and Intel to flood the market with ridiculously profitable chips, retailing at just over $350 or twice that for 'military' purchases. The 8085 (3 MHz) processor followed in 1977, and was soon joined by the 8086 (8 MHz). 1979 also saw production begin at AMD's Austin, Texas facility.

When IBM began moving from mainframe systems into so-called personal computers (PCs) in 1982, the outfit decided to outsource parts rather than develop processors in-house. Intel's 8086, the first ever x86 processor, was chosen with the express stipulation that AMD acted as a secondary source to guarantee a constant supply for IBM's PC/AT.
 