Why the nVidia 7xx0 series cannot do Folding@Home!

ElMoIsEviL

Distinguished
I've heard a lot of people claiming that all we have to do is wait for the nVIDIA 7xx0 series to be adopted by Folding@Home. Unfortunately, that isn't going to happen, for several reasons.

1. Bandwidth: Internal bandwidth on the ATi x1x00 series, especially the higher-end models starting with the x1800, is pegged at 512-bit. Folding@Home makes heavy use of available bandwidth, and the nVIDIA 7xx0 series is limited to half that at 256-bit.

2. Shader Power: ATi's x1x00 series can sustain higher shader op/s (especially the x19x0 series), mainly thanks to its 48 shader units with 2 ALUs each (96 ALUs) vs. nVIDIA's 24 shader units with 1.5 ALUs each (36 ALUs). It's also worth noting that ATi's VPUs are heavily threaded, which helps with the shader looping that Folding@Home relies on. This makes the x19x0 series almost 3 times faster.

3. Dedicated Branching Units: Folding@Home makes HEAVY use of shader loops and dynamic branching calls. ATi's x1x00 series have a dedicated branching unit, which lets them process the code in a single pass rather than the 4 PASSES needed on the nVIDIA 7xx0 series. 4 times faster.
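For what it's worth, the ratios in points 1-3 are easy to sanity-check. This little back-of-the-envelope script just recomputes the claimed factors from the numbers quoted above (these are the claimed specs, not measured benchmarks):

```python
# Sanity-check the speedup factors claimed in points 1-3.
# All figures are the poster's claimed specs, not benchmarks.

# 1. Memory interface width ("512bit vs 256bit")
ati_bus_bits = 512                 # claimed X1800/X1900 internal width
nv_bus_bits = 256                  # GeForce 7-series
bandwidth_ratio = ati_bus_bits / nv_bus_bits

# 2. Shader ALU counts (48 units x 2 ALUs vs 24 units x 1.5 ALUs)
ati_alus = 48 * 2                  # 96 ALUs
nv_alus = int(24 * 1.5)            # 36 ALUs
alu_ratio = ati_alus / nv_alus     # ~2.67, the "almost 3 times" figure

# 3. Dynamic branching (1 pass vs 4 passes)
pass_ratio = 4 / 1

print(bandwidth_ratio, alu_ratio, pass_ratio)
```

Running it confirms the 2x, ~2.67x, and 4x figures the post is built on; whether those theoretical ratios translate into real folding throughput is a separate question.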

Put all three together and you can see why it's not feasible or useful to use nVIDIA GPUs as multi-purpose processors. Heck, even in physics calculations they're VERY slow compared to ATi's x1x00 series. It's also worth noting that Folding@Home doesn't support multi-GPU or multi-CPU, which renders the 7950GX2 useless for this kind of work.

But don't worry, there's light at the end of the tunnel. G80 will enable nVIDIA to catch up to ATi and even surpass it... that is, until R600 is released.

So don't be angry that Folding@Home doesn't support your nice nVIDIA card. Simply save up for a G80.
 

quantumsheep

Distinguished
Dec 10, 2005

So theoretically it COULD be used, it'd just be a heck of a lot slower than using an ATI card?

There's also the problem that they couldn't get the nVidia card to accept the code.
 

ElMoIsEviL

Distinguished

Yes, they would have to rewrite the program to use nVIDIA GPUs, but performance-wise it would be about the same as using a CPU, so it's not really worth the R&D investment.
 

quantumsheep

Distinguished
Dec 10, 2005

Well, it could still be useful if you want to do other things with your CPU, like surf the internet.
 

korsen

Distinguished
Jul 20, 2006
To say that nVidia isn't capable of Folding@Home is assholish. Despite me having a 6800GT (it would be a 1900XTX, but DX10 is just around the corner :p), Folding@Home is research. It's not supposed to have a bias against anything; getting any kind of help it can should be paramount.

Is anyone else dreaming of a GPU co-processor for Torrenza?
Intel and their 80-core FPU chip look like caca compared to a GPU. Gimme a quad-core GPU co-processor and welcome me to 2012.

EDIT: an nVidia 7xx0 series card can't be compared to CPU performance. You're making a comparison between tens vs. hundreds of Gflops.
 

ElMoIsEviL

Distinguished

Hey, sometimes the facts hurt. That's especially true if you're an nVIDIA fan. I've always moved between ATi and nVIDIA depending on who had the best product (I do the same with CPUs).

Although the nVIDIA 7xx0 series is capable of several more Gflops, that matters less than how many Gflops the 7xx0 can sustain at higher precision. You see, scientific research needs to be precise, and as I've tried to explain, nVIDIA's 7xx0 hardware cannot sustain a looping high-precision shader effectively.

Technologically speaking, nVIDIA's 7xx0 series is in most regards a generation behind ATi's x1x00 series (except in dual-graphics configurations).

The lack of available internal bandwidth, and the fact that nVIDIA's 7xx0 hardware can only sustain a shader 64K in length (vs. ATi's effectively unlimited length), also contribute.

Get my drift... the 7xx0 is about on par with a Core 2, hence not worth the effort of building an entire engine to support nVIDIA hardware. The problem lies with nVIDIA's low 256-bit internal bandwidth, low high-precision shader op/s output, lack of a dedicated branching unit, and short shader length, forcing 4 passes for each single ATi pass.
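To illustrate what "forcing 4 passes" means in practice: when a looping shader exceeds the hardware's length limit, it has to be split into multiple render passes, writing intermediate results out and reading them back each time. A toy sketch with made-up instruction counts (the 200k workload and 50k per-pass limit below are hypothetical, chosen only to reproduce the 4:1 ratio being claimed):

```python
import math

def passes_needed(shader_instructions, max_per_pass):
    """How many render passes a long shader must be split into
    when the hardware caps shader length at max_per_pass."""
    return math.ceil(shader_instructions / max_per_pass)

# Hypothetical folding kernel of 200k instructions.
work = 200_000
long_shader_passes = passes_needed(work, work)     # no practical limit -> 1 pass
short_shader_passes = passes_needed(work, 50_000)  # tight limit -> 4 passes
```

Each extra pass costs a round trip through memory, which is also why the bandwidth point above compounds with the shader-length point.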

In the end you've got something performing maybe 20% faster than a Core 2 Extreme X6800. Totally not worth it.

G80 will more than likely rectify these shortcomings.
 

quantumsheep

Distinguished
Dec 10, 2005

What he said.
 

flasher702

Distinguished
Jul 7, 2006
"Cannot" and "would be a lot slower if implemented" are two very different statements... And saying that "only" a 2.2x-or-more increase in processing power is "Totally not worth it" is indeed presumptuous.

The Folding@Home project is academic in nature, meaning that R&D expenditures can be justified simply for the sake of doing R&D, which is probably how they justified the expenditure for the ATI cards. Seriously, even if every single rig currently running Folding@Home with an idle x1x00 card updated to the new client and maxed out its GPU for the project, how many more Gflops would the project get? Answer: a negligible amount. It's a distributed project designed to run on millions of relatively low-powered machines across the world, and many of them don't even have graphics cards. There are no statistics published for what graphics cards Folding@Home systems are running, but look at this: http://fah-web.stanford.edu/cgi-bin/main.py?qtype=cpustats — 20% of the active systems working on the project aren't even running WinXP (the current "gaming" OS). Over 13% of the active systems are running Pentium 3 or older CPUs (I estimate it at 25% or more by applying the ratio of P2 and P3 to P4 processors to the Athlon processor count). Only 50% (estimated using the same method) of the systems are running CPU types, with Windows, that suggest they are *maybe* gaming machines.
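The estimation trick described above (reusing the Intel old-to-new ratio to split the Athlon count into old and new systems) can be sketched like this. The counts below are placeholders for illustration, not the real Stanford cpustats figures:

```python
def extrapolate_old_share(p3_and_older, p4, athlon_total, total_active):
    """Estimate the overall fraction of 'old' systems by assuming the
    Athlon population splits old/new in the same ratio as Intel's."""
    old_ratio = p3_and_older / (p3_and_older + p4)   # Intel old:total ratio
    old_athlons = athlon_total * old_ratio           # apply it to Athlons
    return (p3_and_older + old_athlons) / total_active

# Placeholder numbers (NOT the real stats): 130k old Intel, 400k P4,
# 200k Athlons of unknown age, 1M active systems.
share = extrapolate_old_share(p3_and_older=130_000, p4=400_000,
                              athlon_total=200_000, total_active=1_000_000)
```

With these made-up inputs the "old system" share comes out noticeably higher than the raw Intel-only percentage, which is the point of the extrapolation.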

I think the Folding@Home project could likely get more Gflops out of the nVidia 7xx0 GPUs that people already have than out of G80-based cards that almost no one will have for at least several more months. I think the decision about whether "2.2x" is not worth it but "5x" is has a lot more to do with ease of coding, the installed user base, and programmer resources than with how many shader units the card has. You make a very complicated proposition sound simple, and you do it with such an air of infallibility, despite contradicting yourself, that it does indeed make you seem like an asshat :p They could have coded it to use an Ageia PhysX card, or any number of other dedicated scientific co-processing cards, and it would have been even faster, but very few systems have those cards.

You could have just titled your thread "Why the nVidia 7xx0 series would be much slower Folding@Home coprocessors", and the body would then also say something along the lines of ~"so they are unlikely to be implemented" rather than ~"it'll never work". 120% more performance over the course of several months for a top-of-the-line system that happens to be running nVidia would be a very significant increase for that system (and for people running slower CPUs, the increase would be even higher). It's not like many people run out and buy new hardware just to process more Folding@Home chunks, but with the G80 coming out soon (and, I believe, a higher installed base of ATI cards right now anyway), perhaps they will wait to code for G80, or maybe they have no near-term plans to code for nVidia GPUs because they aren't getting the programming support they need from nVidia. The future is anything but definite, and especially in a project such as this one there are many more considerations than "omg, what's the fastest thing the hardware companies have come out with today".

If you are on the Folding@Home programming team and know that they have decided not to code for 7xx0 cards, you should say so. If you aren't... you shouldn't act like you know what their plans are; just let the facts speak for themselves ;)
 

flasher702

Distinguished
Jul 7, 2006
Hey sometimes the Facts hurt. This is especially true if you're an nVIDIA fan. I've always moved from ATi to nVIDIA etc varying on who had the best product (I do the same with CPU's).

Says the guy with the ATI logo as his avatar. So you're a fanboy who jumps ship to whoever has the fastest card out at the time? Still a fanboy :p