GT300 series, real specs revealed.

512-bit buses are overkill. The reason Nvidia generally has a small performance lead is that they use 50% more transistors on their GPUs. The R800 will have a lot more than the R700 had.

Anyway, expect the exact same as last year, with the 5870 X2 holding a huge lead until Nvidia figures out a way to glue two G300s together without blacking out whole cities.
 



At what resolution is a 512-bit memory bus "worth it"? I think 512-bit would definitely be worth it at 1920x1200, right? A wider memory interface should always make things faster, especially at higher resolutions with more stream processors (like 512, for instance).

Also, how much more performance can we expect out of the GTX 380 than the GTX 280? Maybe double? I wanna know if I should get a 5870X2 or a GTX 380. I'm kinda leaning towards the 380 because it'll most likely draw less power and Nvidia has better driver support.
 
You are mistaken, Uber. Memory bandwidth is a function of bus width and speed (in fact it is the product of the two). Right now Nvidia uses double the bus width; ATI effectively uses double the speed (GDDR5, not GDDR3). There is no difference between the two methods from a performance view.

When bandwidth needs to increase beyond what a simple clock bump can provide, both companies will use a larger bus width. Both will probably use GDDR5 in the next generation of cards; not sure what the bus widths will be.
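The bus-times-speed point above is easy to check with quick math. A minimal sketch (the formula is standard; the example card specs are approximate figures from memory, so treat them as illustrative):

```python
# Memory bandwidth (GB/s) = bus width (bits) / 8 * effective data rate (GT/s)
def bandwidth_gb_s(bus_bits: int, effective_rate_gt_s: float) -> float:
    """Bytes moved per second across the memory interface."""
    return bus_bits / 8 * effective_rate_gt_s

# GTX 280: wide 512-bit bus, slower GDDR3 at ~2.214 GT/s effective (approx.)
gtx_280 = bandwidth_gb_s(512, 2.214)   # ~141.7 GB/s

# HD 4870: narrower 256-bit bus, faster GDDR5 at ~3.6 GT/s effective (approx.)
hd_4870 = bandwidth_gb_s(256, 3.6)     # ~115.2 GB/s

print(f"GTX 280: {gtx_280:.1f} GB/s, HD 4870: {hd_4870:.1f} GB/s")
```

Either knob gets you to the same place: double the bus or double the effective rate, and bandwidth doubles.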

And Uber... no one can say for sure. You won't be able to pick a card (5870X2 or 380) until they are released; no one can guess which will be better or why. Driver support is relatively a moot point right now, as neither is particularly better at the moment (I regret saying this, as I'm sure it will incur the wrath of both sides demanding to be called the best...), and it's a clean slate as far as DX11 goes. They may both F it up 😀
 


Be that as it may, it was just an overclocked 8800 GTX, when 8800 GTXs could easily be set at Ultra speeds. So, still competitive? Yes. Ever worth paying >$500 when you could get a cheaper GTX and overclock it to the same speeds? No way. The Ultra was the sucker's card.
 
Understand that if nVidia goes to GDDR5, they'll most likely drop their 512-bit bus too. GDDR5 may be smoking fast by the time these cards are released. Having both a huge wide bus and super-fast memory is redundant; one or the other is enough. The wider buses cost a lot more money, and I haven't seen any advantage to a wider bus vs faster memory. Currently there's almost no way to test one against the other, but looking at the 4870 vs the 4850, the 4870 is often faster in games than the difference in core clock alone would suggest, meaning the GDDR5 is boosting the 4870's performance where the 4850 can't get it done with its GDDR3.
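The 4870-vs-4850 argument can be put in numbers. A rough sketch using approximate launch specs from memory (the exact clocks may be slightly off, but the gap is the point): the core clock difference is small, while the bandwidth difference from GDDR5 is huge.

```python
# Approximate launch specs (from memory; illustrative, not exact)
hd_4850 = {"core_mhz": 625, "bus_bits": 256, "mem_gt_s": 1.986}  # GDDR3
hd_4870 = {"core_mhz": 750, "bus_bits": 256, "mem_gt_s": 3.6}    # GDDR5

def bandwidth_gb_s(card):
    # bandwidth (GB/s) = bus width in bytes * effective transfer rate (GT/s)
    return card["bus_bits"] / 8 * card["mem_gt_s"]

core_gain = hd_4870["core_mhz"] / hd_4850["core_mhz"] - 1
bw_gain = bandwidth_gb_s(hd_4870) / bandwidth_gb_s(hd_4850) - 1

print(f"core clock: +{core_gain:.0%}")   # ~+20%
print(f"bandwidth:  +{bw_gain:.0%}")     # ~+81%
```

So when the 4870 beats the 4850 by more than ~20% in a game, the extra memory bandwidth is the likely explanation, which is the claim above.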
 


I thought Nvidia was the one doing the gobbling up? Oh well, seems like you are hollow inside.
I don't buy Nvidia cards, but I appreciate the competition they put up, which brings graphics card prices down. We don't need a monopoly!
 
Yea, we might see great things yet. Nvidia may have some secret project up their sleeves for the GT300 and will give good, if not great, competition for ATI. I do believe this generation will play out much like the last, except maybe ATI won't pull an X2 as soon, because ATI wants a bigger chip and won't be able to compress that into an X2 unless they dumb down the specs and do a die shrink like the 295.

Hopefully there will be similar cards this time around so I can get the most for my life savings! :lol: (J/K...maybe not...lol)

 
That guy who said those things on The Inquirer has a horrible track record!

He even said that the 8800GTX would be bad! If that's not enough proof then look at most of the responses on that page!


So, I do think the GT300 will have 512 shader cores, at least 1536MB of RAM, and a 512-bit GDDR5 bus. Hopefully 2GB 😀 Maybe it'll even have 576 cores for all we know! :ouch: If the GT300 has all the supposed specs, I've crunched the numbers over and over, and the GTX 380 will get 3x the fps in Crysis that my 8800GTX does! Talk about wow! 😀


Even though I'm somewhat of an Nvidia fanboi, I do want ATI to up their specs a bit. I do not want a repeat of the 8800GTX! lol... The prices were off the charts for quite some time. Maybe ATI will have 2000 shader cores! lol..........why do I kid myself...


Any idea when the 5870 will come out? I've heard Q4, but really I think next year Q1 sounds more realistic.
 
God I hope the GT300 is nothing like the above mentioned monster.

A 512-bit bus of GDDR5? A waste of money for no performance gain.

1.5GB-2GB of GDDR5? 1GB of GDDR5 is already a bit of a waste; anything more is likely going to be useless.

Again, nVidia needs to get away from this monolithic GPU thing they have started, it hasn't exactly paid off with the GTX 2xx series.
 



I have to disagree with you. Graphics cards need to progress as much as possible to create more efficient lower-end cards, and a higher performance on average for users.

With bigger and badder high-end graphics cards, it helps (to a degree) to raise efficiency for lower-end models. With more efficient low-end models, the average user can get a better graphics card, and companies can start making better video games. And with better and better video games... well, who wouldn't want to play a game that comes on 5 Blu-ray discs and has graphics that make Crysis look like Pong, on max settings? We all will, eventually. It just takes time for graphics cards (and new consoles) to push the video game industry into making better and better games.

there's my two cents :)
 
I was pointing out that a few of the rumored specifications are useless, and that the large die, low yields, and high production cost of the GTX 2xx series hopefully will not carry over to the GT300.

"Bigger and Badder" does not equal efficiency, just look at the GTX 2xx series compared to the ATI 4xxx series. The 4xxx series gets about the same and sometimes better performance through a much more efficient design.

We don't need to pay more for extra CUDA features or specifications that do nothing but look good to uninformed consumers.

I just hope that ATI and nVidia truly pull out something next generation and not just an incremental improvement.
 
Until nVidia creates a two-tier system, with one top card (or cards) that incorporates GPGPU/CUDA abilities, high DP performance, etc., and a separate killer GPU for gaming, I'm afraid we'll keep seeing these monsters. That in and of itself is one reason why the G200 isn't as efficient as the R700, though even so the R700 holds its own either way.
LRB will be quite good at GPGPU, and nVidia wants to meet it head on, so for now this is what we will see.
 
GDDR5 moves twice the data per clock that GDDR3 does... so multiply by 2.

In the future, please just use Google to look up something as trivial as this. Don't be lazy, and don't needlessly bump a two-month-old thread.