GT300 specs: GTX 380/360, GTS 350/340

Page 2 - Tom's Hardware community forum

jjknoll

Distinguished
Sep 13, 2006
225
0
18,680
This site is generally pretty reliable with regard to new hardware. Looks like Nvidia has some serious firepower coming our way. Hopefully the info is accurate and prices aren't obscene! Based on the numbers alone, the GTS 350 and above should theoretically outperform the 5870. Of course, these cards aren't available yet and we don't know what real-world performance will actually be... but still something to look at and drool over!


Main page
http://www.techarp.com/showarticle.aspx?artno=88&pgno=0


Nvidia

http://www.techarp.com/article/Desktop_GPU_Comparison/nvidia_4_big.png



For the AMD guys

http://www.techarp.com/article/Desktop_GPU_Comparison/ati_4_big.png


It's hard to imagine: 280+ GB/s of bandwidth from a single-chip card. It says max board power for the 380 will be 225W. I wonder if that means they snuck in under the limit using two 6-pin connectors, or if they have to have a 6+8-pin combo for OC headroom. I wish it was out NOW!
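Both of those figures actually check out with some quick back-of-the-envelope arithmetic. A sketch, assuming the rumoured 512-bit bus and a hypothetical 4.4 Gbps GDDR5 per-pin rate (not a confirmed spec):

```python
# Sanity-check the rumoured GTX 380 figures (assumed values, not confirmed).

def bandwidth_gb_s(bus_width_bits, data_rate_gbps_per_pin):
    """Theoretical memory bandwidth in GB/s: bus width in bytes x per-pin data rate."""
    return bus_width_bits / 8 * data_rate_gbps_per_pin

# Rumoured 512-bit bus with GDDR5 at an assumed 4.4 Gbps per pin:
print(round(bandwidth_gb_s(512, 4.4), 1))  # 281.6 -- the "280+ GB/s" figure

# PCIe power budget: the slot supplies up to 75 W, each 6-pin connector 75 W,
# and an 8-pin connector 150 W.
slot, six_pin, eight_pin = 75, 75, 150
print(slot + 2 * six_pin)              # 225 W -- two 6-pin connectors hit it exactly
print(slot + six_pin + eight_pin)      # 300 W -- a 6+8-pin combo leaves OC headroom
```

So a 225 W board power figure would mean two 6-pin connectors with zero margin, which is why a 6+8-pin layout would hint at overclocking headroom.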
 
If Nvidia manage to double the performance of the GTX 280 (and that's a big if), then Nvidia won't be any better off than they were in the last generation.

Why not? AMD doubled the 4870 with the 5870, why can't Nvidia do the same? Actually, if they went from 240 to 512, they more than doubled the shaders. And by moving to faster GDDR5, they won't have any memory bandwidth problems either. Even if this is nothing more than the G(T)200 doubled, it should still be fast enough to be faster than the 5870. They will have their die-size problems, but that won't take away being faster than the 5870.
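The "more than doubled" claim is easy to verify against the known GTX 280 numbers. A sketch, where the 512-shader count and the 4.4 Gbps GDDR5 rate are rumours/assumptions, not confirmed specs:

```python
# GTX 280 (known specs) vs rumoured GTX 380.
gtx280_shaders = 240
gtx380_shaders = 512                    # rumoured

print(round(gtx380_shaders / gtx280_shaders, 2))   # 2.13 -- more than doubled

# Bandwidth: both use a 512-bit bus, but GDDR5 moves roughly twice as many
# bits per pin as the GTX 280's GDDR3 (2214 MT/s).
gtx280_bw = 512 / 8 * 2.214             # ~141.7 GB/s (GTX 280, GDDR3)
gtx380_bw = 512 / 8 * 4.4               # ~281.6 GB/s (assumed GDDR5 rate)
print(round(gtx380_bw / gtx280_bw, 2))  # 1.99 -- bandwidth roughly doubles too
```

On those assumptions the shader count and the bandwidth both scale by about 2x, which is the whole "G(T)200 doubled" argument in two lines.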

People are expecting miracles of Nvidia, and they are just gonna end up disappointed.

See above.
 

jennyh

Splendid
Why not? AMD doubled the 4870 with the 5870, why can't Nvidia do the same?

Because AMD doubled in 15 months with a die shrink. Nvidia would be doubling in 8 months with a die shrink of a die shrink, a new architecture, no DX10.1 to move on from to DX11, and no tessellator at all before now.


Understand that Nvidia only has 8 months of experience at 55nm; that is why their 40nm is 4 months later than ATI's was... and it's a lot crappier too. Remember the 4770? It's still way better than anything Nvidia have at 40nm.


Actually, if they went from 240 to 512, they more than doubled the shaders. And by moving to faster GDDR5, they won't have any memory bandwidth problems either. Even if this is nothing more than the G(T)200 doubled, it should still be fast enough to be faster than the 5870. They will have their die-size problems, but that won't take away being faster than the 5870.

Nvidia can have a faster single GPU, but they cannot possibly have the fastest overall. The 5870 X2 is a step beyond Nvidia's capability, and it has years of ATI's expertise in all the things I mentioned before.

Nvidia are trying to build it all from scratch and still hold the lead all round. Forget it, it won't happen.
 


Why would it have to be? Surely, if you wish to remain fair (something Nvidia never does), then the GTX 380 should only have to match or beat the 4870 X2, just as the 5870 was supposed to beat the GTX 295. Next-gen single card versus last-gen dual card, no?
 

jennyh

Splendid


I expect it will beat the 4870 X2, which is fine, and no mean feat either. The problem is, there will be a much faster card available (the 5870 X2), which wasn't the case before.

This is new ground for Nvidia; they aren't used to being in such a defensive position. By all accounts the 5870 X2 is almost ready... if Nvidia release the GTX 380 and it's worse than the same-generation 5870 X2, that doesn't bode well for them. It is entirely possible that Nvidia could go all year without having the fastest graphics card available, which, for Nvidia, could be a disaster.

Nvidia sell cards on the basis that they are 'fastest'. Without that, it'll all go to hell, and it doesn't matter if it takes 2 GPUs or not.
 
Well, I for one still only rank single-GPU cards against single-GPU cards, hence why I was taking the pee when people were posting the power figures for the 5870 & 295, giving it all "ooh and ahh, it uses so much less power, that's amazing!!" when, if it's only got one GPU on a smaller process, that was only to be expected really.
 

jennyh

Splendid
Yes ofc.

I mean.....ATI could have released a 500mm2 monster gpu last week, but instead they released a chip that is awesome at just over half the size, has all the features you could want, scales down nicely and can be doubled up to make an extremely powerful x2 version without too much being lost.

Nvidia ran out of room this time. It's really difficult to explain, even though I know myself what I mean. There are a lot of different variables that add up to Nvidia being unable to beat ATI, barring a bunch of unlikely miracles all happening at the same time.
 

jennyh

Splendid
Hehe, I dunno about that but I'll take your word for it.

What I believe is possible is that Nvidia take the single-GPU crown. They might also have a good foundation for GPGPU (is anyone forgetting that?) with this GPU.

If you just look at what Nvidia have to do, from scratch, you would realise that I'm being pretty generous to them. New DX, new tessellator, new process node... it's a helluva lot to ask of them and still expect them to hold the leads they used to.
 
Oh, I don't expect or believe that they will take the lead (unlike some). It would be nice, sure, but it hardly tops my list of 'things that must happen to keep me going'. I'm far more interested in the (hopefully) plethora of cards that may be available to me later next year when I come to replace my gaming rig. By the way, have you got your 5870 yet?
 

jennyh

Splendid
Not yet. I almost bought one from ebuyer today, a powercolor at £299 but without the dirt2 voucher.

I 'borrowed' one of my friend's 4870s in the meantime, doubt she'll notice while playing WoW lol :D

Think I'm gonna wait for the 5850; it makes a lot more sense, especially if I'm going to have to spend more than anticipated on another LCD with DisplayPort. If I get one 5850 it will be a lot easier to get another one later, and I already have the Crossfire mobo so I might as well make use of it. 2x 5850s looks most likely, I just hope I can wait another 2 weeks.
 

The_Blood_Raven

Distinguished
Jan 2, 2008
2,567
0
20,790
Jennyh has a point; nVidia has had to do a lot of R&D, and they had to do it quickly.

That said, I still don't believe the G300 will be the single fastest card for too long. ATI has a decent amount of time to mod the 5870 and come out with a slightly altered architecture for the 5890 (say 1800 or 2000 SPs), or they could simply find a way to pump up the clocks. They could also take this time to do nothing, like nVidia did with the 8800 series. In the end we can only wait and see.

Also, I don't understand why everyone is suddenly all hyped for the G300 (the GTX 380 or equivalent) when the 5870 will likely still be a great bang for the buck, and spending all that money on a slightly faster card doesn't make sense for a lot of the people hyped about nVidia's launch. The GTX 360 (or equivalent) will likely trade blows with the 5870 like it did this round (GTX 260 vs 4870), so why no one is hyped for the GTX 360 is beyond me. I guess it is because the nVidia fanboys can't clearly point to it and say "It'll be fasta!" I don't understand the whole fanboy thing; you get what's best, not what has your favorite brand.

Bring on the GTX 360 and the competitive pricing!
 

Harrisson

Distinguished
Jan 3, 2007
506
0
18,990
I would take this "data" with a grain of salt; there are a lot of inconsistencies. But let's say for argument's sake the fill rate and bandwidth are correct: then the GTS 350 and up indeed have a chance of beating the 5870, and real games will show if it's so. Still, I have some doubts Nvidia improved their AA back-end as much as ATI did (100% outperforming the GTX 295 at 2560 with 8x AA!), so it's very much possible the GTS 350-360 will lose in ultra-high-end configs, but the GTX 380 should win IMO. Still, I see very little Nvidia can do about a 5870 X2; 6+ bln. transistors is a bit too much on one card at 40nm.
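For reference, the theoretical fill-rate figures being compared here are just unit counts multiplied by core clock. A sketch using the HD 5870's known configuration (the GT300 unit counts are still rumours, so only ATI's side is shown):

```python
# Theoretical fill rates = unit count x core clock (HD 5870 reference numbers).
core_clock_ghz = 0.850   # 850 MHz core clock
rops = 32                # render back-ends (pixel output)
tmus = 80                # texture units

pixel_fill = rops * core_clock_ghz    # GPixels/s
texture_fill = tmus * core_clock_ghz  # GTexels/s
print(round(pixel_fill, 1))    # 27.2 GPixels/s
print(round(texture_fill, 1))  # 68.0 GTexels/s
```

The same formula applied to the rumoured GT300 unit counts is where the spec table's fill-rate numbers come from, which is why they are only as trustworthy as the underlying clock and unit-count rumours.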

Let's not forget multiple-display benchmarks will start soon, and I have little doubt Nvidia has no answer to Eyefinity either.
 

The_Blood_Raven

Distinguished
Jan 2, 2008
2,567
0
20,790


Yeah I'm curious about the 5850 myself. I want to get a new 5xxx series card for my cousin for his birthday next month. He has 2 8800 GTS 512s and I want to see if the 5850 will be an upgrade worthy of $260.
 

jennyh

Splendid
Raven makes a good point that I haven't brought up yet.

ATI were held back by TSMC on 40nm, and are still being held back if reports are true. However, just because they couldn't get cards released didn't mean they stopped innovating or progressing.

The 5870 could have been ready and waiting on TSMC getting their *** sorted out. If that is true, ATI are in an incredibly commanding position, with the X2 probably ready already and a 5890 not far off. It could be possible that a 5890 might beat the GTX 380 quite quickly if required.

Also, this 5870 is considerably undervolted at reference. Not a lot has been said about this yet, but expect to see 1 GHz 5870s before the end of October.

I know I like to paint rosy pictures for AMD, but I don't think I'll be too far out in the end.
 

jennyh

Splendid


At $100 cheaper it's certainly much better on price/performance compared to the 5870. Having the Crossfire mobo makes it a lot simpler for me; it's just a case of being as patient as I can be lol :D
 

Ahh, but when did they actually start?



Pretty much.



I'm a bit narked tbh because GTX275 stocks do not seem to be getting replenished nor does their price seem to be dropping, so personally I'm not a happy camper.
 

The_Blood_Raven

Distinguished
Jan 2, 2008
2,567
0
20,790
Everyone says you're a fangirl Jenny; either I don't see it or it is a good time to be an ATI fangirl. OR MAYBE I'M ONE...

noooooooo.jpg


NOOOOOO!!!

a... fanboy, that is. I saw my mistake...
 
Yes, jenny is an AMD fanboy.

If you just look at what Nvidia have to do, from scratch, you would realise that I'm being pretty generous to them. New DX, new tessellator, new process node... it's a helluva lot to ask of them and still expect them to hold the leads they used to.

Some of this is big, others not so much. New DX level? It's not like that came out of left field. They knew it was coming, and what they'd have to add to support it. I'm sure they can also build a tessellator; the big problem they'd have with that is where to put it on the chip. They also don't have to develop the new process node themselves, since TSMC is doing that. They do have to get it all working together, but that should be a "simple" process for Nvidia.

Their big problem is increasing the SPs, ROPs, etc., and adding a tessellator. With all the new things on the chip, it's going to be huge, even at 40nm. It will be fast, yes, but it will probably also be hot and expensive. AMD's 5870 will perform slower, but will be the better buy. And as a gamer, I'd much rather have Eyefinity than GPGPU.
 


That's a silly statement to make about the RojakPot speculation.

'Everything else being "mostly" equal' ?

WTF are you talking about? Very little is going to be 'equal', and how do you think 320 shaders = 1600 shaders?

There is no way to directly relate the two, especially since the HD5870 changes their RBE structure and their cache and buffer arrangement, so whether or not they need or can use more bandwidth is another question; but as the HD2900 showed, raw bandwidth alone means very little.

And until we know how the shader, texture units and especially the arbiter/compiler/scheduler work, there's no way to even start relating the two. Especially since, as so many people like to keep repeating, this is not your father's G80; it should involve a new enough architecture to require actual testing before making any kind of educated guesses.

As for the data, it's not fact, it's Adrian's compilation of the rumoured specs, which could be accurate or way off, as we have seen in the past; he's just trying to put a face to the rumours, not putting down a definitive spec.
 
Solution
Here's what I see.
nVidia's transistor density may kill them unless they've changed it dramatically.
What was once a problem for ATI with their high-density transistors is now no longer a problem, as the temps and power on the 5870 show.
If nVidia changes this density, it could cause heat and possibly power problems.
Konkort, the author of the link provided earlier showing the specs, also said the G300 taped out in Feb, so take all this with a grain of salt.
Now, if they don't change their density, having 2 huge dies on 1 PCB will be very difficult just size-wise, so an X2 solution is iffy.
Having the single-core chip beat the 5870 is a possibility, if there's any close approximation to these numbers; but taking everything into consideration (tessellator, DX11 changes, going from SIMD to MIMD, greater GPGPU abilities, new process, huge die) I would question high clocks as well as costs.
Say they pull it off: the likelihood of an X2 is slim on this process, the power may be a problem, and the clocks will be lower.
I mention all this simply because of the 280. Remember, it had major problems; it was the highest-returned card of last gen on both sides, whereas we saw the 260 beating out the 9800 X2 with fewer returns.
It seems as if nVidia plays it so close to the max that it takes a while for them to get their top cards perfected. Not saying they won't this time, but we will see, as this card is an even bigger jump than the 280 was.
 


Once again, you don't get it. AMD isn't trying to continue selling its old cards under a re-branded name; they are replacing the old line with the new line. Prices have a lot of flexibility, in the same way that the HD2600 launched right next to the HD2900, the HD3600 did for the HD3800, and the 4600 for the 4800; now the HD5600 and 5700 series launch shortly afterward and replace all the cards you mention, especially the HD4870X2 and its ilk, which cost more to produce than they are worth.

They're just looking to clear the channel prior to the launch of the new parts. nV doesn't have that option; they are only now getting their 40nm G2xx series parts out, and they don't have any G3xx lower-end parts, but that still didn't keep them from launching and pricing the GTX250, GTX265, and GTX275 the way they did.
Competing with previous parts only matters if you intend on keeping those old lines in production, and that's not happening with the HD4K parts, all of which have HD5K replacements already outlined.