Plausible: Nvidia Working on x86 CPU

Page 2 - Tom's Hardware community discussion.
Status
Not open for further replies.
[citation][nom]dragabain[/nom]My two cents (I'm no expert and I don't work in IT) is that Nvidia is working on a hardware decoder to turn x86 commands into something its Graphics cards can handle. That way they can throw 2, 3 or 4 cards in a computer and not need a processor. Also this makes it so they don't have to ask the community to recompile their programs for their arch.[/citation]

I suppose that may be possible on future hardware, but GPUs really aren't built for the kind of instructions you would run on an x86 CPU. Even if they did that, it would be rather inefficient. Still, I suppose the overhead would be small on a GPGPU optimized for such a task. At the least they could build the Ion platform without Intel at all ^_^.
 
Guest
What I expect is that the GPU and CPU will merge into a single entity sometime in the future. Instead of putting multiple expensive cards in a PC, you could put in multiple of these futuristic CPUs.
 

mgrant1957

Distinguished
Feb 10, 2009
I think this is a smart move for Nvidia. Intel has already stated that Larrabee is going to be a multi-core x86-based chip. Think of a chip with 20-80 Pentium-class cores running at 2GHz that can all execute x86 instructions. If Larrabee takes off and developers start writing x86-based video drivers for it, that would leave out AMD/ATI and Nvidia, as their chips could not execute those instructions. This is a ploy by Intel to steal Nvidia's and ATI/AMD's thunder in the GPGPU arena, since AMD/ATI and Nvidia each have their own proprietary APIs for the GPGPU functionality of their respective chips. It would make CUDA and Badaboom obsolete and force Nvidia to redesign all their chips to execute x86 code so that Larrabee-optimized code could run on Nvidia's future chips, and Nvidia has a very long way to go to catch up to Intel's experience in optimizing compilers for x86 microcode. AMD/ATI already have experience developing and optimizing for the x86 architecture.
 

that_aznpride101

Distinguished
Aug 13, 2005
If the rumor is true, Nvidia is going to face many issues getting into the CPU industry. One of the high entry barriers is that consumers want a CPU company to have a proven track record before buying its products. AMD and Intel both have that already; a newcomer to the CPU market like Nvidia starts at a big disadvantage.
 
Guest
At what point did the Inquirer become a credible news source worthy of being quoted? There is so much misinformation and so many completely wrong stories on that site that I'm stunned Tom's would consider this worth linking.

Recall this is the site that gave us "Reverse Hyperthreading", a claim that AMD was "dancing in the aisles" back in April '07 with a 3.0+GHz K10 chip, a supposedly flawed Intel 45nm process that led to them developing two separate 45nm processes... and I could go on and on. So now you pick up an article based on conjecture, written by someone who has a personal axe to grind with Nvidia, and call it news?
 

stasdm

Distinguished
Feb 10, 2009
Most probably nVIDIA is doing its own "Larrabug", though I don't think they can succeed: they could not even produce properly working chipsets, and had to give up before the X58.

It would not be a great surprise if AMD is also creating its own "Larrahopper", but there the chances are much greater.
 

LightWeightX

Distinguished
Dec 1, 2008
I keep hearing this type of rumor about NVIDIA, though I have yet to see any hard evidence. When/if a product comes to market I'll check it out; until then it doesn't much matter.
 

hannibal

Distinguished
Competing with Intel in the mainstream CPU area could be deadly... but mobile parts may be possible. They already have very good mobile GPU chips, so a CPU for handheld devices is plausible. You don't need all those instruction-set extensions there, but it still isn't easy...
 

kansur0

Distinguished
Mar 14, 2006
Let's assume the rumors are true and nVidia is making an x86-compatible CPU. Don't you think nVidia would research and plan, plan, plan ahead of time (even longer than the stated two-year development time) before attempting a design that is going to be a fairly big undertaking?

Let's assume that the original x86 patents have expired. Let's also assume the architecture could be changed to become more like a GPU core while still running x86 code. Now imagine hundreds of tiny x86 cores that handle the basic functionality of Windows. I know there is the issue of MMX and SSE, but what if you could replace all of that with CUDA or OpenCL? You would completely avoid paying licensing fees for that technology. Very advantageous.

Have a look at this article:
http://www.anandtech.com/cpuchipsets/showdoc.aspx?i=3073&p=2

AMD has created a set of SSE5 instructions. The article discusses the weak points of the x86 architecture and how SSE extensions over the years have addressed them with more robust instruction sets. This leads me to believe that such instructions can be created by anyone who holds an x86 license.

Let's assume that you CAN create your own instruction-set extensions in order to utilize the x86 architecture and thus interface with modern operating systems. Given that, it seems pretty clear that thousands of tiny x86 processors on a chip (the scale may be off) could be a very, very powerful force on the CPU landscape. Now combine this chip with the raw number-crunching power of the GPU and you have a computing monster on your hands.

Disclaimer: Don't flame me for experimental thought. These are only ideas based on articles I read here. I am not a programmer. I am not an engineer. I am just a fan of technology.
 

bounty

Distinguished
Mar 23, 2006
Also, Nvidia may be banking on Intel stepping into the GPU arena and stepping on their patents. If Intel steps on their patents enough, they can just start making x86 parts. Then if Intel sues, they countersue. Mutually assured destruction, kind of like Palm and Apple at the moment.

Also, with AMD's falling market share, Nvidia may get some "monopoly" leverage. Back in the day, you could only use AT&T/Ma Bell leased phones, but regulators forced them to allow aftermarket phones. Microsoft was forced to open up its APIs a bit as well. So who knows. It's all rumors (Charlie... the Inquirer...) at this point anyway.
 

hellwig

Distinguished
May 29, 2008
[citation][nom]exit2dos[/nom]While you're right that Inellectual Property (In this case, it falls under Industrial Property) patents expire after 20 years, many new patents are filed with each uarch. nVidia may be able to use instructions from the orignal 8086/8088 line - but each jump in technology has granted new patented CPU instructions and technology. Here is a list of a few of the patents such as DSP, Multiscalar, Multiprocessor arrangements, Bus commands, etc:http://www.wipo.int/tools/en/gsear [...] [/citation]
Ah ha, OK. So the comment about needing an x86 license would be because they need a license for the extensions: SSE, MMX, 3DNow! (AMD), etc. I mean, I knew there was no way the original x86 architecture (or microcode or whatnot) was still covered under any patent. Depending on what they need it for, I don't see why NVIDIA couldn't create a 386 clone (which had the 32-bit extensions) and simply update the architecture on their own to get whatever performance they need from it. Their own GPUs are far better at multimedia anyway, so why would NVIDIA need any of those extensions? In any case, it's all rumor at this point, so I guess the "why" is the big question here.
 

danimal_the_animal

Distinguished
Jun 11, 2006
[citation][nom]is this a joke[/nom]At what point did the Inquirer become a credible news source that is worthy of being quoted - there is so much misinformation and completely wrong stories on that site, that I'm stunned Tom's would consider this worth linking.Recall this is the sight that gave us "Reverse Hyperthreading", a claim that AMD was "dancing in the aisles" back in Apr 07 with a 3.0+GHz K10 chip, a flawed Intel 45nm process that lead to them developing 2 separate 45nm processes... and I can go on and on. so now you pick up an article based on conjecture, written by someone who has a personal axe to grind with Nvidia and call it news?[/citation]

Tom's didn't link it....A USER DID!
 

effee

Distinguished
Feb 10, 2009
What is the point? It seems like reinventing the wheel. Intel and AMD already have very advanced designs based on the x86 architecture, and VIA also makes x86 chips. How many CPUs of a given species can the market absorb, especially in such a seriously depressed economy? It would have to be some special chip, I'd say.
 

Dave K

Distinguished
Jan 13, 2009
Intel is going to have 32nm parts out this year... NVidia is at least a year or two behind Intel technology-wise (they're at what... 45nm?), and that's a HUGE disadvantage even if they have a competitive processor design.

I don't think NVidia has a chance of entering the general-purpose CPU market competitively... they'd have to invest billions in fabs and chip development just to catch up, all at a time when most companies are seriously tightening their belts.

As someone said earlier... maybe a mobile part or a niche market, but the i7 has nothing to fear from anything NVidia is going to come up with in a reasonable time frame.
 

WheelsOfConfusion

Distinguished
Aug 18, 2008
[citation]...And @Valkin, every program in use to date uses the x86 code,[/citation]
Um... what? You mean everything written for Microsoft systems from MS-DOS up, excluding Windows CE and Mobile? There are still PowerPC-based Apple machines out there that are perfectly usable and still run OS X 10.x, but software support for them in certain applications (like iLife) is waning.

"...and if we all used linux it would still be an issue, as it only runs on x86 compatible hardware."
You couldn't be more wrong. Linux has been ported to run on x86, MIPS, ARM, PowerPC, SPARC, 68k, and too many embedded processors to name... You can put it on both of the latest PlayStations, both Xboxes, the GameCube, Wii, DS, PSP, freakin' iPods...
 

Dave K

Distinguished
Jan 13, 2009
You may see that sort of convergence someday... but in all likelihood it'll be Intel doing it (maybe working with someone else... maybe with a new gpu chipset of their design).

According to Tom's, Intel is dropping $7 billion in the US this year getting its 32nm fabs up; NVidia would be at a serious disadvantage going up against that kind of firepower. I know they seem to be taking pot shots at the big guy... but IMO they'd be better served by protecting their home turf from the likes of ATI and keeping their eyes out for disruptive technologies.
 

mavroxur

Distinguished
[citation][nom]cpu@cpucom[/nom]nVidia will be VIA's high-end CPU collaboration.The 64 bit extension to x86, sometimes IA-64 or AMD64 or EM64T or "x86-64" is still part of x86. [/citation]



IA-64 isn't an extension of x86; it's a completely different architecture (Itanium). The 64-bit extension to x86 is AMD64/EM64T, i.e. x86-64.
 

bsharp

Distinguished
Feb 12, 2009
Anyone ever heard of CUDA?
If NVIDIA makes an x86 arch chip it will most likely be awesome.
I can picture it now: quad 240-core chips doing AI.
 

Dave K

Distinguished
Jan 13, 2009
CUDA is not a general-purpose tool... it's a specific library that plays to GPU strengths to accelerate highly parallel functions.

It says absolutely nothing about NVidia's ability to create a general-purpose processor - though if they decided to get into the DSP market they might be able to give it a go.
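A minimal sketch of that distinction, in plain Python rather than real CUDA (the function names here are invented for illustration): the kind of function CUDA accelerates applies one operation independently to every element of a large array, which says nothing about serial, branchy general-purpose work.

```python
# Hypothetical illustration, not NVIDIA's actual API: a SAXPY-style
# computation (y = a*x + y) is "embarrassingly parallel" -- each output
# element depends only on its own inputs, so a GPU can assign one
# thread per element.

def scalar_saxpy(a, xs, ys):
    # CPU-style: one element after another, in order.
    return [a * x + y for x, y in zip(xs, ys)]

def kernel(a, x, y):
    # What a single GPU thread would compute for its one element.
    return a * x + y

def parallel_saxpy(a, xs, ys):
    # Conceptually one kernel invocation per element; because the
    # elements are independent, any execution order gives the same
    # answer -- that independence is what the hardware exploits.
    return [kernel(a, x, y) for x, y in zip(xs, ys)]

print(parallel_saxpy(2.0, [1.0, 2.0, 3.0], [10.0, 20.0, 30.0]))
# -> [12.0, 24.0, 36.0]
```

A workload full of data-dependent branches has no such independence to exploit, which is why a fast GPU library implies little about general-purpose CPU design.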
 
[citation][nom]bsharp[/nom]Any one ever heard of CUDA?If NVIDIA make's an X86 arc chip it will most likely be awesome. I can picture it now running quad 240 core chip's doing AI.[/citation]

Different chip/design/application - what may be good at rendering might not be good as a "processor" for Windows, apps, etc.

It's like our brains - they're complex and powerful, yet off the top of our head can we calculate 3424344.433332 x 233 + 12? Nope, but a calculator can. That doesn't mean the calculator can do anything else like walk, balance, see, smell, feel, think, or act - not even remotely possible!

As well, there are the legalities of coming into the x86 market - I don't think it's possible. But it is possible to design a cheap and effective CPU, then use some form of modified Linux to work with it to make a cheap netbook and other applications (set-top boxes with basic internet, handheld devices, etc.).

Nvidia has to do something about those loss figures they posted recently...
 
Guest
Nobody actually executes the x86 architecture directly anymore.

Both Intel and AMD use decoders to break x86 (a complex, long-winded instruction set) down into smaller, easier-to-manage internal operations, and that's where the CPU actually does its work.

Intel tried to sue AMD back in the day because AMD's processors, the K6 and the like, did this conversion and were just as fast as Intel's offerings. When Intel lost, it began doing the same thing. Processors haven't actually "processed" raw x86 internally in years, but everything is still written against the x86 instruction set, and each manufacturer has its own proprietary internal format that it translates into so the processor can do its work.

No doubt Nvidia would take the same approach, rather than implement the real x86 instruction set directly, which isn't very efficient, apparently.
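A toy sketch of that decode step, with invented encodings (the actual Intel and AMD micro-op formats are proprietary and far more involved): a complex x86-style instruction with a memory operand gets cracked into a simple load micro-op plus a register-only operation.

```python
def crack(instr):
    """Split one x86-style instruction into RISC-like micro-ops.
    The (op, dst, src) tuples here are made up for illustration."""
    op, dst, src = instr
    if src.startswith("["):
        # Memory operand: issue a separate load micro-op first,
        # then operate purely on registers -- roughly what modern
        # x86 decoders do internally.
        addr = src.strip("[]")
        return [("LOAD", "tmp0", addr), (op, dst, "tmp0")]
    # Register operand: already simple enough to pass through.
    return [(op, dst, src)]

# ADD EAX, [EBX] becomes a load plus a register-register add.
print(crack(("ADD", "EAX", "[EBX]")))
# -> [('LOAD', 'tmp0', 'EBX'), ('ADD', 'EAX', 'tmp0')]
```

The point of the original comment stands either way: the externally visible instruction set and the internal execution format are decoupled, so a newcomer could in principle build its own internal machine behind an x86 front end.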
 