GT300 specs: GTX 380/360, GTS 350/340


jjknoll

Distinguished
Sep 13, 2006
225
0
18,680
This site is generally pretty reliable when it comes to new hardware. Looks like NVidia has some serious firepower coming our way. Hopefully the info is accurate and prices aren't obscene! Based on just the numbers they give, theoretically the GTS 350 and above will outperform the 5870. Of course, these cards aren't available yet and we don't know what real-world performance will actually be... but still something to look at and drool over!


Main page
http://www.techarp.com/showarticle.aspx?artno=88&pgno=0


Nvidia

http://www.techarp.com/article/Desktop_GPU_Comparison/nvidia_4_big.png



For the AMD guys

http://www.techarp.com/article/Desktop_GPU_Comparison/ati_4_big.png


It's hard to imagine: 280+ GB/s of bandwidth from a single-chip card. It says max board power for the 380 will be 225 W. I wonder if that means they snuck in under the limit using two 6-pin connectors, or whether they have to have a 6+8-pin for OC headroom. I wish it were out NOW!
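A rough sketch of the power-budget arithmetic behind that 6-pin vs 6+8-pin question; the connector ceilings are the standard PCIe limits, and the 225 W board figure is the rumoured one from the chart:

```python
# Power-budget sketch: PCIe slot plus PEG connector ceilings vs the rumoured 225 W board power.
SLOT_W = 75        # PCIe x16 slot
SIX_PIN_W = 75     # per 6-pin PEG connector
EIGHT_PIN_W = 150  # per 8-pin PEG connector

board_power = 225  # rumoured max board power for the GTX 380

configs = {
    "2x 6-pin":      SLOT_W + 2 * SIX_PIN_W,            # 225 W ceiling, zero OC headroom
    "6-pin + 8-pin": SLOT_W + SIX_PIN_W + EIGHT_PIN_W,  # 300 W ceiling, 75 W of headroom
}

for name, ceiling in configs.items():
    print(f"{name}: {ceiling} W ceiling, {ceiling - board_power:+} W headroom over a {board_power} W board")
```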
 
Solution


That's a silly statement to make about the RojakPot speculation.

'Everything else being "mostly" equal' ?

WTF are you talking about? Very little is going to be 'equal', and how do you think 320 shaders = 1600 shaders?

There is no way to directly relate the two, especially since the HD 5870 changes their RBE structure and their cache and buffer arrangement, so whether or not they need or can use more bandwidth is another question. But as the HD 2900 showed, raw bandwidth alone means very little.

And until we know how the shader, texture units and especially the...

IzzyCraft

Distinguished
Nov 20, 2008
1,438
0
19,290
By the specs that thing is a beast, but it will also cost a considerable premium, unless Nvidia has truly found magical ways of producing cards. Still, it is a major transition in card design for Nvidia; let's just hope it's a good one. Shame specs alone can never accurately place a card's performance, so I guess we will have to wait till Nov 27 to see if that chart tells no lies.
 

rescawen

Distinguished
Jan 16, 2009
635
0
18,990
I hope ATI will pull off a good 5890 and, for once, take the single-GPU crown.

The facts are, Nvidia will probably only have the advantage of its single GPUs performing better than the HD 5000 series, while consuming more power.

ATI, for example, wins with Eyefinity, low power consumption, and smaller, cooler cards.
 

darkvine

Distinguished
Jun 18, 2009
363
0
18,810


No, the 5870 was only meant to beat the 295 because there was nothing faster out. That is one of the advantages of releasing first: you look oh so much better when measured against the fastest last-gen card. Even if you are better than the new gen as well, you don't look so whip-ass when the race is that much closer.
 

smoggy12345

Distinguished
Aug 11, 2009
267
0
18,790
Aha... if this is true, this is exactly what I expected from Nvidia.

A single GPU to compete with the X2.

This is REAL improvement. The 5870 is a load of balls... 35% faster than a GTX 285??? Pffft.


Now say the 285 = 100%.

Double the 285 = roughly the performance of the new GTX 380 = 200%.

5870 = 135%.

So the X2 = about 240-250% (take some off for dual GPU not scaling 100%).

So the X2 will be roughly 40-50 percentage points ahead of the GTX 380, call it 20-25% faster... there, that's my guess.

I expect the GTX 360 to be about 160% and the GTS 350 to be about 130% (to compete with the 5870).
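A quick back-of-the-envelope check of that guess, treating all the figures above as assumptions rather than benchmarks (the 0.9 dual-GPU scaling factor is a placeholder):

```python
# Sanity check of the guess above; every figure is an assumption, not a benchmark.
gtx285 = 1.00              # baseline
gtx380 = 2.00              # assumed: roughly double a GTX 285
hd5870 = 1.35              # assumed: ~35% over a GTX 285
x2_scaling = 0.90          # assumed: dual-GPU doesn't scale 100%

hd5870_x2 = hd5870 * 2 * x2_scaling          # ~2.43, i.e. ~243% of a GTX 285

print(f"5870 X2 ≈ {hd5870_x2:.0%} of a GTX 285")
print(f"Gap over the GTX 380: {(hd5870_x2 - gtx380) * 100:+.0f} percentage points")
print(f"Relative speedup over the GTX 380: {hd5870_x2 / gtx380 - 1:+.0%}")
```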



Surely Nvidia's X2 version WON'T be like a GTX 380 x2 but more like a GTX 360 x2, like the previous gen... would they still have die size, power and heat problems then?
 

Guest

Guest


Nvidia will also have a GX2 card to compete with that 5870 X2.

http://www.fudzilla.com/content/view/15713/1/
 

rawsteel

Distinguished
Oct 5, 2006
538
0
18,990



You can't calculate like that. You can clearly see that ATI doubled everything and MORE, and they still only get around 50% more than the previous gen. It's not linear and it depends on many other things. So if nVidia doubles everything like ATI, and scaling and everything else stays the same, we should expect the same difference as 285 vs 4890 (which is 5-10%). Of course, nVidia could have hit the mark with some new arch which is generally very good and get better results, like 20-30% better. That's my prediction ;)
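A minimal sketch of that reasoning, with the ~50% yield from doubling and the 5-10% last-gen gap both treated as assumptions:

```python
# Sketch of the argument: doubling the spec sheet bought ATI only ~50% real gain,
# so apply the same efficiency to NVIDIA and the old relative gap roughly carries over.
doubling_gain = 1.5          # assumed: 2x the units -> ~1.5x real performance
gtx285_vs_hd4890 = 1.075     # assumed: GTX 285 ~5-10% ahead of an HD 4890

hd4890 = 1.0
hd5870 = hd4890 * doubling_gain                 # ATI after doubling
gtx380 = gtx285_vs_hd4890 * doubling_gain       # NVIDIA after the same doubling

print(f"Predicted GTX 380 lead over the HD 5870: {gtx380 / hd5870 - 1:+.1%}")
# -> +7.5%, i.e. roughly the same 5-10% gap as last generation
```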
 
Well, as the block diagram blurred into existence, it's now looking more and more like a return to 384-bit memory, so right there 25% of the quoted memory bandwidth is gone. The RBEs (ROPs in that list) need to match up, so I would think 48 is more likely than a return to 24, and so a lot of that info just got truncated.
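The bandwidth arithmetic behind the "25% gone" remark, using a placeholder GDDR5 transfer rate (the actual memory clock is not confirmed):

```python
# Memory bandwidth scales linearly with bus width at a fixed memory clock.
def bandwidth_gb_s(bus_bits: int, effective_mt_s: float) -> float:
    """GB/s = bus width in bytes * effective transfer rate in GT/s."""
    return (bus_bits / 8) * (effective_mt_s / 1000)

gddr5_mt_s = 4400  # assumed 4.4 GT/s effective GDDR5, purely to illustrate

print(f"512-bit: {bandwidth_gb_s(512, gddr5_mt_s):.1f} GB/s")  # 281.6 GB/s, the quoted ballpark
print(f"384-bit: {bandwidth_gb_s(384, gddr5_mt_s):.1f} GB/s")  # 211.2 GB/s, i.e. 25% less
```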

Now, if they are that tight for die space, it's unlikely this thing is going to be very fast initially, and we will likely see slow shader clocks at first.

With all that dedicated CUDA hardware, it's also likely to have a lot of die space that goes unused in games that don't require PhysX, and might get some use in games that use OpenCL, but for current and older games there won't be much of a boost from those transistors.

I suspect you will see a G300/F100 closer in line with the HD 5870, probably with a more sedate 20% boost over the HD 5870. And while people talk about there being a GX2 model on the books, I doubt we'll see it until they shrink the die; if they use a crippled model, it would likely mean high power consumption and something that would essentially tie an X2 with a smaller margin, maybe +/- 10%.

And with these chips being used in the Tesla model, I also have a feeling we'll see the return of the NVIO, so as not to waste die space on things that would be useless to the strictly computational card. That would actually make it a little easier for nV to add last-minute Eyefinity to the design, but I doubt they have the time to tweak an NVIO to take advantage of that.
 

jennyh

Splendid
Exactly, the 5870x2 is already pushing every boundary that exists. Nvidia can't redefine these boundaries, they have to work up to them too.

Think of ATI's advantages for a moment -

1) 2nd-gen GDDR5 vs Nvidia's 1st gen
2) nth-gen tessellator vs Nvidia's 1st gen
3) Much smaller die size, yet the X2 is still as big as it can be.
4) We know that ATI doubled everything and added everything they could; we don't know that Nvidia can do that yet. There is a lot of talk about NV just passing on DX11 this round, and it's not going away.
5) Nvidia has to stick in stuff like CUDA and PhysX, taking up yet more silicon real estate.

While Nvidia might manage all of it, believing they will takes a lot of faith in them. I'll be impressed if they pull it off, though I can imagine how much one of those will cost - more than two 5850s and two new 22" screens for Eyefinity, I'd say.

Ordered my first 5850 today; with luck I might have it on Monday :)
 

kelfen

Distinguished
Apr 27, 2008
690
0
18,990
That is going to be one huge, packed chip. It will be interesting to see how it performs; too bad there are no real rumors on that yet. Once they do start coming in, though, subtract 10% or so and you have the real card (maybe). I wonder how much the emphasis during the design was shifted away from games. That is all they talk about now, but back when the groundwork was laid for this chip, I wonder if they planned for this.
 

jennyh

Splendid
"The graphics processing unit (GPU), first invented by NVIDIA in 1999"

Stopped reading there, tbh. [strike]God I hope this company dies out slowly and takes their idiot fanboys with them.[/strike] Good luck to Nvidia with this; they sure will need it, and more.
 

jennyh

Splendid
Graphics processing units were in existence long before Nvidia were.


Wiki :-

"
1980s

The Commodore Amiga was the first mass-market computer to include a blitter in its video hardware, and IBM's 8514 graphics system was one of the first PC video cards to implement 2D primitives in hardware.


The Amiga was unique, for the time, in that it featured what would now be recognized as a full graphics accelerator, offloading practically all video generation functions to hardware, including line drawing, area fill, block image transfer, and a graphics coprocessor with its own (primitive) instruction set. Prior to this (and quite some time after on most systems) a general purpose CPU had to handle every aspect of drawing the display."

If that isn't a GPU then I don't know what is, and the Amiga wasn't the first by any means.