Nvidia Files Patent For Hierarchical Graphics Processors

Status
Not open for further replies.
G

Guest

Guest
/\ /\ Sorry, I wish it were the fastest, but Crysis, Metro 2033, and compute disagree. The 7970 is underclocked!
 

sixdegree

Distinguished
Apr 22, 2011
157
1
18,680
AMD should patent an "innovative power regulator in the graphics processor" while Intel patents "data throttling through PCI Express to the graphics processor".
 

xingmaodr

Honorable
Apr 30, 2012
3
0
10,510
Awesome

 

ojas

Distinguished
Feb 25, 2011
2,924
0
20,810
I think they've patented a method of increasing GPU efficiency, rather than "patented a GPU". I don't know much about GPUs at an architectural level, but that's what I got out of the patent as described in the article.
 

Soda-88

Distinguished
Jun 8, 2011
1,086
0
19,460
[citation][nom]Frankythetoe[/nom]/\ /\ Sorry, I wish it were the fastest, but Crysis, Metro 2033, and compute disagree. The 7970 is underclocked![/citation]
The 7970's die size is 352mm^2 whereas the GTX 680's is 294mm^2 (that's smaller than the 560 Ti). nVidia didn't have to release its flagship (GK110) because the 680 (which should've been called the 660) was trading blows with AMD's flagship card. Obviously, nVidia wasn't going to pass up this opportunity to gobble as much cash as possible with the '680'.
 

sykozis

Distinguished
Dec 17, 2008
1,759
5
19,865
[citation][nom]Pherule[/nom]The patent trolls have struck again...[/citation]
How is nVidia a patent troll? nVidia has been on the receiving end of the trolling as many times as Samsung has. nVidia is simply setting up for an entirely new architecture and properly filing for patents related to its work. This is actually what the patent system was set up for. If AMD or Intel want to follow suit, nVidia will license the patent to them.
 
[citation][nom]Soda-88[/nom]The 7970's die size is 352mm^2 whereas the GTX 680's is 294mm^2 (that's smaller than the 560 Ti). nVidia didn't have to release its flagship (GK110) because the 680 (which should've been called the 660) was trading blows with AMD's flagship card. Obviously, nVidia wasn't going to pass up this opportunity to gobble as much cash as possible with the '680'.[/citation]

... The 7970 doesn't have a die size and neither does the 680. The GPUs inside each graphics card, the Tahiti and GK104, have a die size because they have dies. The 7970 and 680 are the entire card, not just the GPU. I still don't understand why some people refer to graphics cards as if they are GPUs. It's like calling an entire computer a CPU.

Furthermore, the Tahiti in the 7970 is a compute-oriented chip whereas the GK104 in the GTX 680 is a gaming-oriented chip. That the Tahiti even comes as close as it does is quite a feat for AMD, considering that it isn't a gaming-oriented chip like the GK104. Compare the two in compute performance: the Tahiti wins by over 50% in single-precision compute and is nearly 6 times faster than the GK104 in double-precision compute. This is like Fermi versus Cayman, except not as bad. The GF110 from the GTX 580 has a roughly 530mm^2 die and the Cayman from the Radeon 6970 has a roughly 374mm^2 die, yet the GF110 only somewhat outperforms it in gaming. Compute, on the other hand, shows the GF110 vastly outperforming the 6970.

This is the same, except AMD doesn't want to make a huge 500mm^2+ die to beat Nvidia, especially with the Tahiti being right behind the GK104 (680 versus 7970) despite the Tahiti not being a gaming-oriented chip.

Considering the deplorable yields of the GK104, the GK100 would have had FAR worse yields (it is supposed to have a 550mm^2 die) due to it being even larger. Nvidia can't even keep the 680s in stock, and it would have been FAR worse if they had been GK100 dies instead of GK104s. Also, what's the point of having the GK100 if the GK104 is already this fast? What, do you want to run triple 1080p displays on a single GPU for some ridiculous reason? There's no need for it, because what we have today is already more than fast enough for current games. Four 680s or 7970s can handle triple 2560x1600 or triple 1080p 3D right now in ANY and ALL games. The only resolutions higher than that belong to monitors that would cost more than the rest of the entire computer system. There's no point in making a faster single-GPU card right now, especially considering how badly that would affect already horrible supply.

Why should the 680 be called the 660 just because it has the GK104 instead of the GK100 (and yes, it's GK100, not GK110, so get it right)? The GK104, even before it was assigned to the 680, would have been the 670 or 660 Ti, not just the 660 (the 660 would probably have had a cut-down version of the GK104 instead of the full chip). Furthermore, the GK104-equipped GTX 680 performs on par with the GTX 590 for gaming, just as it should if it is to follow the trend of Nvidia's top card on a new architecture plus a die shrink roughly equaling the previous dual-GPU card (for example, the GTX 480 and the GTX 295) in performance.

So, the GK104 managed to do what it needed to do with the GTX 680, the GK100 would have had even worse yield problems than the GK104 does, and the GK100 would have had more performance than is reasonably usable, so Nvidia made the best possible decision for itself in this situation by using the GK104 in the GTX 680.

Also, Tahiti itself has more than the 2048 cores that the 7970 has access to, so not only could it have been made smaller, but the full version could be released by AMD at any time, closing the gap between AMD's top single-GPU card (presumably the 7980, if AMD does this) and the 680. By then, Nvidia might be able to get GK100s going in a GTX 680 Ti or GTX 685, and the gap will widen again once the greater performance of even faster GPUs actually matters. Then it will be more reminiscent of the GTX 580 versus the Radeon 6970, except this time the compute advantage switches from Nvidia to AMD.
 

_Cubase_

Distinguished
Jun 18, 2009
363
0
18,780
Nvidia seems to have invented a unique method of utilizing existing hardware, and wants to patent it before developing and integrating it. Fair enough.
A patent troll finds a commonplace method and tries to patent it without actually having to invent or implement anything.
 

kronos_cornelius

Distinguished
Nov 4, 2009
365
1
18,780
Nvidia is definitely no patent troll... To accuse the company of such is to confuse the meaning of the word.

I don't think I get the patent from this short description. Given that they have general processing cores (CUDA) and graphics cores (regular GL), is this patent only for graphics rendering, or for general processing?
 

antilycus

Distinguished
Jun 1, 2006
933
0
18,990
AMD/ATI copies (always has, always will); NVIDIA invents. There is an extremely large difference. If you don't know, just look at their drivers. AMD's SUCK. SUCK, SUCK, SUCK on the most ultimate level. NVIDIA's simply don't. (nt4.dll is one nasty driver error that NVIDIA should be shot for, but it's still super tiny compared to the bazillion driver errors that go with the bloated Catalyst drivers.)

Seriously, I adore AMD... especially their processors, and I love what they are doing with their GPUs, but they are always following, never leading, when it comes to GPU technology. Once they start engineering instead of stealing (nVidia's tech), then we can talk. But until then... you are simply biased if you don't see that NVIDIA is a better graphics card company than ATI/AMD.
 

markem

Honorable
May 1, 2012
37
0
10,530
Wow, so nVidia wants to follow Apple (the biggest patent troll). Maybe nVidia should learn something from current events (a.k.a. the Samsung vs. Apple lawsuits), especially if they haven't even created the tech yet. Or maybe Intel and AMD should start applying for patents; then we'll see how far CPUs and GPUs progress...

nVidia wants to kill progress and competition just as Apple currently is (in the end, the troll always loses).
 
[citation][nom]markem[/nom]Wow, so nVidia wants to follow Apple (the biggest patent troll). Maybe nVidia should learn something from current events (a.k.a. the Samsung vs. Apple lawsuits), especially if they haven't even created the tech yet. Or maybe Intel and AMD should start applying for patents; then we'll see how far CPUs and GPUs progress... nVidia wants to kill progress and competition just as Apple currently is (in the end, the troll always loses).[/citation]

How is this trolling? This is Nvidia patenting a new basis for their next architecture. This is no worse than Nvidia patenting Fermi and Kepler.

[citation][nom]kronos_cornelius[/nom]Nvidia is definitely no patent troll... To accuse the company of such is to confuse the meaning of the word. I don't think I get the patent from this short description. Given that they have general processing cores (CUDA) and graphics cores (regular GL), is this patent only for graphics rendering, or for general processing?[/citation]

This should be usable for both gaming and compute workloads. Compute often scales across the cores far better than gaming already does, so if anything, it's probably aimed more at gaming than at compute.

I'll try to explain this a little. If I make a GPU with 1 core, then it won't have multi-core scaling issues. If I make a GPU with 64 cores, then it probably still won't have scaling problems. However, every time you add more cores, the distance between each core and the other hardware in the GPU (measured in the transistors and wiring between them) increases. Eventually, this leads to scaling problems. I.e., if you have a GPU with 1024 cores arranged in one monolithic block, the cores furthest away from whatever they need to talk to (such as the memory controllers, other hardware, or even the cores on the other side of the chip) have to wait a long time to talk to those components. That latency reduces scaling (and it's only one of several causes of the scaling loss).
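To put rough numbers on that (this is just my own back-of-the-envelope model, not anything from the patent), imagine the cores on a square grid with one shared resource, say a memory controller, at the center of the die. The average distance a core's traffic has to travel grows steadily with the core count:

[code]
# Back-of-the-envelope model (mine, not from the patent): cores on a square
# grid, with one shared resource (say, a memory controller) at the die center.
# Average Manhattan distance from a core to that resource grows with core count,
# which is roughly why communication latency gets worse as cores are added.

def avg_distance_to_center(side):
    cx = cy = (side - 1) / 2                    # shared resource at the grid center
    dists = [abs(x - cx) + abs(y - cy)          # Manhattan distance for each core
             for x in range(side) for y in range(side)]
    return sum(dists) / len(dists)

for side in (8, 16, 32, 64):                    # 64, 256, 1024, 4096 cores
    print(f"{side * side:5d} cores -> avg distance {avg_distance_to_center(side):5.1f}")
[/code]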

One thing that AMD and Nvidia have done to reduce this problem is to group the shader cores into blocks (e.g., Kepler has blocks of 192 shaders and Fermi had blocks of 64 shaders; I don't remember Cayman's and GCN's block sizes, but they were probably something like that). By using multiple smaller blocks instead of a single large block, scaling improves somewhat because each core mostly only needs to talk to other pieces of its own block instead of hardware on the other side of the GPU.
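Extending the toy model above (again, my own illustration, not an actual floorplan from either company), giving each block its own local hub cuts the average distance a core's traffic travels compared with one die-spanning hub:

[code]
# Toy comparison (my own numbers, not a real floorplan): average Manhattan
# distance travelled on a 32x32 grid of 1024 cores, either to a single hub at
# the die center or to a local hub inside the core's own 8x8 block.

def avg_distance(side, block_side):
    """block_side == side models one monolithic block; a smaller block_side
    models carving the die into blocks that each have a local hub."""
    total = 0.0
    for x in range(side):
        for y in range(side):
            hub_x = (x // block_side) * block_side + (block_side - 1) / 2
            hub_y = (y // block_side) * block_side + (block_side - 1) / 2
            total += abs(x - hub_x) + abs(y - hub_y)
    return total / (side * side)

print("monolithic (one central hub):", avg_distance(32, 32))   # 16.0
print("blocked (8x8 groups)        :", avg_distance(32, 8))    # 4.0
[/code]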

However, with so many blocks, or very large blocks, becoming necessary (Kepler's 192-core blocks are very large for such blocks, yet the GK104 still needed 8 of them, each almost as large by transistor count as one of the lower-end Fermi GPUs), this is no longer enough to keep scaling up. A hierarchical structure can improve scaling further. For example, look at Bulldozer. Sure, it's called a modular architecture, but think about it: instead of cores, it has modules. That's like adding an additional hierarchical level to the CPU rather than just throwing more cores into the same level. It's not really the same as what Nvidia is trying to do (it's more comparable to the change from a single block of cores to multiple blocks), but just look at how it affected scaling. Under Windows 8, an OS optimized for this, Bulldozer shows incredible performance scaling with many cores (even if each core is slower than a Phenom II core, which was already pretty slow for this day and age compared to a Nehalem core, let alone a Sandy or Ivy Bridge core).

What this style of computing is supposed to be like (as far as I can tell) is that it will change the way the cores and hardware talk to each other and process the data. It will make the scheduling process into a more hierarchical structure at the hardware level. I.e., instead of each block being a semi-independent component that does more or less all of the processing in the schedule for the data that goes into it (a portion of the screen's pixels), each block will do all of the processing for a specific part of the schedule. One block will do something like the first part of the work, then shuffle the data to the next block for it to do the next part of the work, and so on. This could help improve scaling because it should reduce the amount of data that gets shuffled all over each block and the GPU, and make the data shuffling more uniform, flowing from certain points to others (instead of going from all over the GPU to somewhere else all over the GPU, data will go from one block to the next block in the line, providing a more ordered and focused path from start to finish).
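If that reading is right, the data flow would look something like a pipeline of blocks, each owning one stage of the schedule and handing its output to its neighbour. A purely hypothetical sketch (the stage names and the math inside them are made up; this is only my interpretation, not anything from the patent):

[code]
# Hypothetical blocks-as-pipeline-stages sketch (stage names and the work they
# do are invented stand-ins; this is only my reading of the patent).

def stage_a(batch):  return [x * 2 for x in batch]    # first part of the schedule
def stage_b(batch):  return [x + 1 for x in batch]    # next part
def stage_c(batch):  return [x ** 2 for x in batch]   # final part

# Each "block" owns exactly one stage; data always moves to the next block in
# line instead of bouncing all over the die.
pipeline = [stage_a, stage_b, stage_c]

def run_pipeline(batches):
    for batch in batches:
        for block in pipeline:          # ordered block-to-block hand-off
            batch = block(batch)
        yield batch

print(list(run_pipeline([[1, 2, 3], [4, 5, 6]])))
[/code]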

So, this would improve parallelism (pretty much multi-core scaling, i.e., how well the load of each step in the schedule splits across multiple more or less identical parts, such as splitting a job across multiple shader cores), because everything that needs to talk would be right next to each other, and the data transfers could be better optimized for because they would be more uniform and predictable. For example, a bus such as the extremely fast ring buses that Intel uses (another notable example would be the Cell chip in the PS3, if I remember correctly) could be put to use.
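A ring bus fits that kind of uniform traffic because every block only ever hands data to its immediate neighbour, so each transfer is one short, predictable hop. A tiny toy illustration (mine; real ring buses like Intel's are far more involved):

[code]
# Toy ring hand-off (my own illustration; real ring buses are far more complex):
# each block only talks to its immediate neighbour, so every transfer is one
# short, predictable hop instead of a die-spanning trip.
from collections import deque

def ring_step(packets):
    ring = deque(packets)
    ring.rotate(1)          # every block passes its packet to the next block
    return list(ring)

print(ring_step(["block0-data", "block1-data", "block2-data", "block3-data"]))
[/code]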

However, I'm pretty much grasping at straws at this point, so if an engineer who knows more about this could come here and clear things up for us, I think I speak for most of the commenters on this article when I say we'd be grateful.
 
G

Guest

Guest
antilycus: AMD/ATI copies (always has, always will); NVIDIA invents. There is an extremely large difference. If you don't know, just look at their drivers. AMD's SUCK. SUCK, SUCK, SUCK on the most ultimate level. NVIDIA's simply don't. (nt4.dll is one nasty driver error that NVIDIA should be shot for, but it's still super tiny compared to the bazillion driver errors that go with the bloated Catalyst drivers.)

Seriously, I adore AMD... especially their processors, and I love what they are doing with their GPUs, but they are always following, never leading, when it comes to GPU technology. Once they start engineering instead of stealing (nVidia's tech), then we can talk. But until then... you are simply biased if you don't see that NVIDIA is a better graphics card company than ATI/AMD.

A bit of information on how top companies stay on top: first, be the only worthwhile thing in said area. Then market it as the only one in said area. Time passes... your company is basically the standard now. Make it very hard for competition to rise up against you by setting up "pitfalls" they have to get past to catch up. Time is the biggest factor. Time gives you the revenue to funnel into R&D when a competitor peeks its head out.

Can someone tell me the last time you saw a commercial for an AMD/ATI processor, or a game that starts with an AMD logo telling you it's the shiznit?

 

alextheblue

Distinguished
[citation][nom]antilycus[/nom]Seriously, I adore AMD...[/citation]
Really, where? Where you claim that they only copy and follow Nvidia? Where you bash their drivers while ignoring the fact that they've come a long way and most people have no issues anymore? Or where you ignore Nvidia driver issues like "Whoops, we turned fan control off for some of you, have fun lighting your GPU on fire"? Yeah, you forgot about that one, didn't you? I think it was 196.75. It was a WHQL release, too.

Guess what? They both screw up. Don't put Nvidia up on such a pedestal. Neither one of them has a flawless driver record, but they're both better than Intel in the graphics driver department. Regarding "copying": you have no idea what goes into designing a modern GPU or how long the process takes. If all they did was copy, they'd be generations behind. Not to mention that they sometimes beat Nvidia to market with a new architecture, and vice versa.
 

alphaalphaalpha

Honorable
Mar 7, 2012
90
0
10,640
[citation][nom]dreadlokz[/nom]One day patents will die and the tech will be free! This will be the first day after the apocalipse =)[/citation]

No patents at all would be a worse system than keeping the patent system in its current form, although only somewhat worse. That would destroy the right to intellectual property, meaning that anyone could use anything they wanted to. If I made a new architecture and some GPUs, I would be pretty angry if some jerk tried to use my architecture without my permission, especially if they tried to ruin my business with it. I would rather have a system of laws protecting my invention, even if it's a semi-broken one.
 