Nvidia Briefly Makes Mention of Secret GTX 580

[citation][nom]Crashman[/nom]There are two sets of rumors about this card. The specific rumor in this news is that the GF110 is a 512 CUDA core "fixed" version of the GTX 480 on a smaller die process. Now before anyone goes off on a rant: if those rumors are true, this could be one heck of a fast card. Die shrinks usually allow much higher clock speeds, as witnessed on the Radeon 6800 series.[/citation]

But again, the question is whether 32nm can be done and still turn a profit. If it can't, NVIDIA was "rumored" to have hoarded 40nm Fermi chips that had all 512 CUDA cores working, but that doesn't justify calling it GF110 unless they actually changed the GF100. And if they didn't hoard the best GF100s and really do have something new, that doesn't mean it's 32nm just because they modified the GF100.

If you hit a wall in the past, you'd work to improve the code, or clock it higher & improve the cooling. So they can shrink it, improve the software, or OC the heck out of it.
 
[citation][nom]liveonc[/nom]But again, the question is whether 32nm can be done and still turn a profit. If it can't, NVIDIA was "rumored" to have hoarded 40nm Fermi chips that had all 512 CUDA cores working, but that doesn't justify calling it GF110 unless they actually changed the GF100. And if they didn't hoard the best GF100s and really do have something new, that doesn't mean it's 32nm just because they modified the GF100. If you hit a wall in the past, you'd work to improve the code, or clock it higher and improve the cooling. So they can shrink it, improve the software, or OC the heck out of it.[/citation]The "news" was that the name was spotted in Nvidia material. Everything else, including a dual-GPU card and a die-shrunk, fully enabled GPU, is rumor :) I was only addressing his speculation on those rumors by filling in the blanks that were left out of the article.
 
This card has been talked about since... April? Not really a surprise. Or was it an April Fools' Day joke?

And it will supposedly feature 580 CUDA cores (is that even possible?) to go along with its 580 moniker, plus 2560MB of GDDR5 memory on a 384-bit bus.

http://www.overclockersclub.com/news/26376/
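
A quick sanity check on that core count: Fermi SMs hold either 32 CUDA cores (GF100) or 48 (GF104), so any real total has to be a whole multiple of one of those. A minimal check, using only the known SM sizes:

```python
# 580 doesn't divide evenly by either known Fermi SM size, which is why
# the "580 CUDA cores" figure looks more like an echo of the card's name
# (or a typo for 512 = 16 x 32) than a real configuration.
for cores_per_sm in (32, 48):       # GF100-style and GF104-style SMs
    sms = 580 / cores_per_sm
    print(f"{cores_per_sm} cores/SM -> {sms:.3f} SMs")
# 32 cores/SM -> 18.125 SMs
# 48 cores/SM -> 12.083 SMs
```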
 
[citation][nom]WarraWarra[/nom]Would be nice to see something new from NV as a lot of hardened ATI followers might be thinking "AMD twice" and might just switch to NV.[/citation]
Yeah, ATI just basically came out with 5850/5870-level performance for about $240, so they've got to be so pissed... right...

[citation][nom]liveonc[/nom]GF110 sounds like a GF100 that's 32nm instead of 40nm. The GeForce 3xx wasn't the Fermi replacement of the GeForce 2xx; that first came as the GeForce 4xx.[/citation]
I really wouldn't doubt that rumors start about it being a 32nm chip, since it's all smoke and mirrors until they release it anyway. Which should be... what... only 3 months after ATI releases this entire lineup this time?

It's rather disappointing that Nvidia dropped the ball with the 400 series, but they started making a comeback (kinda) with the 460. On that note, apparently they ARE releasing a dual-460 card with SLI support. I really don't know if that's going to help much, as, if it's priced well, it'll kill 470 and 480 sales.
 
They should just strap an i7 930 OC'ed to 4GHz onto an Nvidia board w/4GB of GDDR5... Sorry, I thought I was still sleeping. Going back to bed, please wake me when the GTX 7xx series is released.
 
[citation][nom]schizofrog[/nom]512 CUDA cores? Not much of a jump in the number of cores for a new high end piece. Is there any info on a change in the architecture over the GTX480?[/citation]

The speculation is that this is what happens when you take the 480 and build it with the process used for the 450/460. Originally it was supposed to be called a 485, but since this is from Nvidia's own website, it's pretty clear the 465, 470 and 480 will become the 500 series and will be built with the new process used for the 450/460.
 
512-core Fermi chips have been available for a while; they're just thousands of dollars and called Tesla cards.

Releasing a cheaper card with lower profit margins for gamers will take time as the manufacturing process matures. The GTX 580 will almost assuredly have 512 cores with a GPU clock of 800MHz or so. It's also a GPU that can use 2GB of memory, so not shipping with that would be a mistake.

All this seems reasonable given the shrink and the heat these will probably require. I don't think it's too far-fetched to think Nvidia will squeak out a few before Christmas, for reviews if nothing else, with widespread availability by March.

They have to do something; the Cayman XT is by rights going to be a true 2GB card with absolutely massive performance and the features to match. Nvidia has no choice but to get something out to save face in the benchmarks this holiday season.

The renaming of AMD's latest midrange card is really no reason to get angry. Look at the price in the Crysis benchmarks alone compared to the GTX 480 in SLI: a little over 5 months after it was released, we have a part that costs less than half, requires half the power, and performs within 5% of the most powerful dual-GPU combo available. Kudos, AMD; the 6870 is a true success, as these cards will be under 200 dollars each by Christmas.

I think AMD has tessellation down now, and with more types of AA available than ever, it's a great time to upgrade.

If the Cayman XT is in fact a 2GB card with 7000MHz GDDR5 and 1600 or more shader processors, I really don't think Nvidia will be able to do anything; even a hand-selected super Fermi won't give them the crown this fall.

I am frothing at the mouth for the Cayman XTs and GTX 580s, so I can finally see two refined and great DX11 cards worth the money they will probably cost. These look like maybe, just maybe, they will be the cards to buy until DX12 arrives.
 
Since Nvidia had to disable cores on the GTX480 in order to stay within the current power constraints on motherboards and keep heat manageable, I wonder how they are going to tackle it with the GTX580 unless they significantly reduce die size!
 
People forget that the GF104 has one SM disabled. If we enable that SM, then a GTX460 goes into GTX470 territory (the 470 will still be better, but not by much). The GTX580 must be a dual GF104 card with all 16 SMs enabled.

48 CUDA cores per SM means 768 CUDA cores total. A dual GTX470 would have 896 CUDA cores, but the lower power envelope of the GTX460 will allow for higher clocks.
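
Spelled out, the arithmetic looks like this (the SM counts are the published GF104/GF100 specs):

```python
# Core-count arithmetic from the post above, using the published specs:
# GF104 has 8 SMs of 48 cores each (the GTX 460 ships with 7 enabled),
# and the GTX 470 has 448 cores enabled on its GF100.
CORES_PER_SM = 48
SMS_PER_GF104 = 8

full_gf104 = SMS_PER_GF104 * CORES_PER_SM   # 384 cores per fully enabled chip
dual_full_gf104 = 2 * full_gf104            # 768 cores across 16 SMs
dual_gtx470 = 2 * 448                       # 896 cores in a GTX 470 SLI pair

print(dual_full_gf104, dual_gtx470)         # 768 896
```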

So imagine a dual-GPU card with two fully enabled GF104 chips that has the performance of a GTX470 SLI setup.
 
I feel Antilles being two Cayman XT GPUs is just marketing hype, or it's going to be something different in order to be manageable. The 5970 was just an exercise, in my opinion.

Similarly with a dual GTX 470 part: the power required by these supposed new dual-GPU cards is pretty astounding. The problem with the GTX 470/480 has been power from the start, and I do expect this to get better, but better fast enough for this year seems unlikely to me. Nice in theory, though.

Maybe someone will know how close cards are to reaching the maximum bandwidth of a PCI Express 2.0 slot, and when CrossFire and SLI will be mandatory for those not wishing to upgrade their entire system.
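
On the bandwidth question, the slot's ceiling at least is easy to pin down (a back-of-envelope figure, not a measurement of any card):

```python
# Theoretical ceiling of a PCIe 2.0 x16 slot, back-of-envelope only;
# how close any given card gets depends entirely on the workload.
RAW_GT_PER_S = 5.0      # PCIe 2.0 signaling rate per lane
ENCODING = 8 / 10       # 8b/10b line coding overhead
LANES = 16

usable_gbit_per_lane = RAW_GT_PER_S * ENCODING        # 4 Gbit/s per lane
total_gb_per_s = usable_gbit_per_lane / 8 * LANES     # gigabytes, per direction
print(f"{total_gb_per_s:.0f} GB/s each direction")    # -> 8 GB/s
```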

Does having the two GPUs on the same PCB typically scale better than two SLI'd or CrossFired cards?

Although overclocks for the 470/480 have been pretty decent on overvolted cards, the heat is at the limit, plain and simple.

Another note is that we still haven't seen a retail gaming card with a fully functional GF100 GPU. This has to be because they are still that hard to make.

Although Nvidia could be binning the lowest-power 460/470 GPUs, which would make nice overclockers, similar to what AMD had to do for the 5970.

Maybe someone with extensive knowledge of wafer manufacturing would have a better guess on the potential GTX 580 being either a high-clocked, fully functional GF100-ish part, or a cut-up, binned dual-GPU part.
 


Must it? What if it's a single GPU card using the GF110?
 
[citation][nom]Quote[/nom]Nvidia briefly listed the GeForce GTX 580 on its system requirements page for 3D Vision, so it's most definitely real[/citation]
That's the standard? Lol.
 
So they mentioned the number 580 somewhere?

My god! I would have thought they would stop at the 400 series!

200, 300, 400... WHAT, 500?!
 
[citation][nom]mousemonkey[/nom]Must it? What if it's a single GPU card using the GF110?[/citation]

It doesn't really make sense. A fabled GF110 would be maybe 5-20% better than a GF100 (GTX480), depending on the application, but would always require ~25% more power, so it would not be received well. nVidia learned this. Power consumption is not linear with added performance: you need a lot more power to get just a little extra performance.
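
That non-linearity is just the usual dynamic power relationship; a minimal sketch, using the textbook model rather than any real GF100/GF110 figures:

```python
# First-order CMOS dynamic power model: P ~ C * V^2 * f. This is the
# textbook approximation, not Nvidia's actual numbers, but it shows why
# power climbs much faster than clock speed once voltage has to rise too.
def relative_power(freq_scale: float, volt_scale: float) -> float:
    return volt_scale ** 2 * freq_scale

# Hypothetical: a 10% clock bump that needs a 10% voltage bump.
print(f"{relative_power(1.10, 1.10):.2f}x power")  # ~1.33x power for ~1.10x speed
```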

Also, the GF110 would be even larger than a GF100, so yields would be lower and production costs that much higher. The GF104 is already in production and yields are fine. Also, the price is acceptable for its market segment; it's quite popular, even.

A GTX460 has a TDP of 150-160W. Historically, dual-GPU cards have less than double the TDP of a single-GPU card using the same GPU. So a dual GTX460 card would sit at a very good 260-300W TDP, which is where you'd want it to be, because dual-GPU cards have had this TDP before and no one complained.
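
To show where that 260-300W range could come from, here's a rough sketch; the scaling factors are assumptions on my part, not published figures:

```python
# Back-of-envelope for the dual-GTX460 TDP claim. The scaling factors are
# assumptions: dual-GPU boards historically use binned chips at lower
# clocks/voltages, so they land well under 2x the single-card TDP.
SINGLE_TDP_W = (150, 160)    # GTX 460 TDP range, watts
SCALE = (0.85, 0.95)         # assumed fraction of a straight doubling

low = 2 * SINGLE_TDP_W[0] * SCALE[0]    # 255 W
high = 2 * SINGLE_TDP_W[1] * SCALE[1]   # 304 W
print(f"~{low:.0f}-{high:.0f} W")       # roughly the 260-300 W quoted above
```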

Also, why would you spend time and money on the development of a new chip, even if it's similar to GF10x, when you have exactly what you need on hand at a lower production cost? You just use GF104 chips when you need them and adjust production according to market demand.
 
[citation][nom]Sihastru[/nom]It doesn't really make sense. A fabled GF110 would be maybe 5-20% better than a GF100 (GTX480), depending on the application, but would always require ~25% more power, so it would not be received well. nVidia learned this. Power consumption is not linear with added performance: you need a lot more power to get just a little extra performance. Also, the GF110 would be even larger than a GF100, so yields would be lower and production costs that much higher. The GF104 is already in production and yields are fine. Also, the price is acceptable for its market segment; it's quite popular, even. A GTX460 has a TDP of 150-160W. Historically, dual-GPU cards have less than double the TDP of a single-GPU card using the same GPU. So a dual GTX460 card would sit at a very good 260-300W TDP, which is where you'd want it to be, because dual-GPU cards have had this TDP before and no one complained. Also, why would you spend time and money on the development of a new chip, even if it's similar to GF10x, when you have exactly what you need on hand at a lower production cost? You just use GF104 chips when you need them and adjust production according to market demand.[/citation]Let me repeat the missing piece of the GF110 rumor: the die shrink.
 

I saw this the other day and it has the GF110 down as a 40nm part, so what is it that you know, Crash, and can you share any of it with us yet?
[Image: nvidia_2011_gpu_roadmap.jpg]
 