NVIDIA GTX 350




Aren't they always refreshes? Gets a little hazy with NV, doesn't it?
 



It won't be a 2GB card even in the best case. It'd be at most a 1GB-per-core design, just like the 9800 GX2 was a 512MB-per-core design. Even then, it's more likely to be 1GB of GDDR3 on a 512-bit bus, not 2GB of GDDR5 - I thought jaydeejohn did some digging on this and concluded that the GT200 core doesn't support GDDR5 in its architecture, or something like that.



realistically you could maybe expect...

GT200 architecture, 55nm, 576mm²

512-bit bus, 1GB GDDR3 per core (1GB effective framebuffer), 240x2 SPs

And it'd run so hot you could cook eggs on it like this: http://www.youtube.com/watch?v=IDoOV0FFPvA

Clocks might end up being 600/1300/1000 - but I really think it couldn't get much faster than stock GTX 280 clocks.
 
The FUD got out of its containment cell, everybody to the bunker, quick!!!

Seriously, I don't buy this; it would require a direct connection to the power plant just to boot, let alone OC it.
 


Well, I think if they did make another dual-GPU card, it'd likely have the same problems the 9800 GX2 had, namely: lack of scaling in SLI, huge power draw, high temperatures, and "can be beaten in fps by a combination of two lesser cards for a lower cost".

A GTX 280 would be comparable to the 9800 GTX of the 9800 GX2 era.

Honestly, I'm more than a little surprised that Nvidia could even be thinking about launching a GTX 280 X2 - mainly because they already know they're going to get beaten by the AMD 4870x2 - so they should really be concentrating on a new architecture instead, well, in my opinion anyway.
 
At 55nm, at the same clocks, the power may be doable. But at higher clocks, no way, especially what's listed here. Also, it'd have to be a sandwich style like the 9800 GX2, so again the power goes up. Not so sure this is doable at 55nm; plus it needs better cooling, thus more power. It just doesn't look theoretically possible.
 
Possible or not, I just don't see the point. The 4870x2 is already competing with the GTX 280 - and 4870x2 quadfire is beating GTX 280 SLI solidly.

Even if the GTX 280 X2 or whatever it's called could be manufactured and "beat" a 4870x2, it would be a microstutter fest in quad SLI... meaning 4870x2 quadfire would run over GTX-X2 quad SLI.

I don't know, I just... well, maybe they could do it, and maybe they planned to do it anyway - but I think it'd be a waste of money and resources, especially if they want to have DX10.1 cards ready for next spring...
 
Don't forget it would cost $1000+.

Why buy an entire PC with reasonable graphics power when you can pay the same amount for JUST a video card, eh? Sounds like one helluva deal.....




...if you're a dolt.
 


I think he's just exaggerating a little. I would say at least $700 or $800, considering the GTX 280 is finally around $450. Still way too much to even consider unless you like throwing money to the wind. I don't really see Nvidia pulling out a brand new card two months after they released their SUPER GTX series. It doesn't make good business sense to make a new architecture, throw it away two months later, and replace it with something possibly a little better. Who knows, though; we'll just speculate a little more and wait and see.
 


Good call. TSMC is delaying the start of its 45nm operations, so it's definitely not going to be that if it comes out this year, whereas 55nm, hmm, maybe a refresh... Still, though, 480 Nvidian SPs, that's just insanity...
 


lol ovaltine, I expect you to understand that effectively doubling the SPs would almost double the price. Not exactly, but man, that would be expensive: $800-ish at least if it comes out any time soon. Which, as jaydeejohn seems to say and I also think, is bogus... at least the GDDR5 part. Something is off here, and the hardware is moving too fast; software simply cannot keep up.
 


I can see it happening. Typical Nvidia workaround, and the fans be darned as far as PSUs go. They'll probably charge $650 for it, take a bit of a loss, and say you need two PSUs.

All they need is the high end, regardless of how many are manufactured or how much power it draws. They've never been concerned with thermals, price or power supply requirements before.

All they want is sheer fps in Crysis, or whatever current generation game is top for reviews. It's all perception: 'Nvidia has the high end, their GTX350 beats the 4870x2 by 6 fps in Crysis on a 30" LCD, so that means we must all buy GTX280's for Christmas, or GTX260's if we can't afford that, or 9800gtx+'s if we're really broke'.

Marketing trumps engineering every time. At least with Nvidia.
 
Color me skeptical; G92 had a 2.5:1 shader:core clock ratio, but GT200 went with 2.16:1, quite likely because they couldn't get the shaders to run stably at the clock speed the old ratio would imply.

Likewise, at 3360MHz memory, it'd be 315.04 GB/sec, not the 316 GB/sec stated... A little nitpicky, I know.

I'm also HIGHLY doubtful that they'd be able to squeeze that sort of clock rate from a 576mm² chip, even after a revision to 55nm. And even then, I look at the fill-rates, and none of it even lines up; it's like the numbers were produced without any math involved at all.

For now, I'll maintain that at least part of those "specs" are fabricated, likely by someone over-eager to fill in the gaps because nVidia won't tell anyone what actually goes in them.
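
For anyone who wants to run the same kind of sanity check, the math is just data rate times bus width. Here's a minimal Python sketch; the GTX 280's published figures (2214 MT/s effective GDDR3 on a 512-bit bus) are used purely as a known reference point, not the rumored specs:

```python
# Minimal sketch of the bandwidth sanity check described above. The GTX 280's
# published figures (2214 MT/s effective GDDR3, 512-bit bus) are used purely
# as a known reference point, not the rumored specs.

def bandwidth_gbs(effective_mts: float, bus_width_bits: int) -> float:
    """Theoretical memory bandwidth in GB/s: data rate times bus width in bytes."""
    return effective_mts * (bus_width_bits / 8) / 1000

# Should print ~141.7 GB/s, matching the GTX 280's official spec sheet.
print(f"GTX 280: {bandwidth_gbs(2214, 512):.1f} GB/s")
```

Plugging the rumored clocks into the same formula is exactly how the mismatch above shows up.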
 


Judging by that, let's look at the 3850's price at launch. That was $179 with 320 shaders.
The 4850 came out with 800 shaders, 2.5 times as many, so it should cost ~$450.
That would make the 4870x2 quite the expensive card, almost cracking the magic $1000 barrier.
To make it worse, I could start comparing the 8600 to the 8800 series based on their shader counts and prices, but that would be nuts, wouldn't it?

You don't happen to work in Nvidia's finance division calculating the launch prices, do you?

The shader count may be a factor in the cost, but it is clearly not the dominant one.
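
Just to put numbers on that, here's a rough sketch of the naive shader-scaled price versus what AMD actually charged; the launch MSRPs assumed here are $179 for the HD 3850 (as above) and $199 for the HD 4850:

```python
# Rough sketch of the point above: if price scaled with shader count, the
# HD 4850 "should" have launched around $450, when it actually launched at $199.

HD3850_PRICE_USD, HD3850_SHADERS = 179, 320
HD4850_SHADERS = 800
HD4850_ACTUAL_USD = 199

naive_price = HD3850_PRICE_USD * HD4850_SHADERS / HD3850_SHADERS  # ~$448
print(f"shader-scaled prediction: ${naive_price:.0f} vs. actual launch price: ${HD4850_ACTUAL_USD}")
```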
 
Those specs don't look legit to me. Even if they were true, then wow - this card would be a killer, not to mention twice as expensive. I would love a 512-bit bus and GDDR5, but I don't see that happening anytime soon.
 
Let's play the pricing game. The cooling solution would have to be more expensive: we're talking about roughly a 33% shrink but a doubling of all the physical numbers here, so you're still cooling around 40% more, and I'm rounding and being conservative here. It's still a 512-bit bus. The PCBs will be expensive, even more so than the 65nm part, because, one, it's a smaller process/startup, and two, it's GDDR5, which, even if easier to design for, still needs a completely new wire layout. Not really any savings there except maybe a little power, so the PCB requirements don't quite double either.

The chips would be approximately 35% larger. I know it says 576mm², but unless it's a completely new arch, it's two G200s, which after the die shrink and doubling is more like 760mm², or around 380 apiece. No power savings, and at 760 it's currently 35% or so larger, so no money savings, and it will be harder, or take more, to cool.

Now the interesting part. If all things come out the same, meaning leakage/power usage per transistor stays proportional to die size, you could say that per transistor you'd be saving 35%, but you'd still be using more power overall - not a lot, but more - and even that never happens; it just doesn't. It's why, when we see, say, a 30% shrink with no other changes, we don't see a 30% clock increase: thermals don't scale linearly with transistor size. And that brings us to the clock speeds. There you have a 33% increase, which, as I said, you just can't do and stay within thermals.

So even if it were kept within thermals, which isn't possible, the gains from the shrink, from the memory, from wiring placement, all these things, pretending it could all be done within thermals, would still leave a G200x2 drawing roughly double a GTX 280's power, which is impossible because of PCI-e 2.0 compliance. To give you an example: the 4870x2 requires an 8-pin and a 6-pin, though you can run it 6 and 6, as seen in some previews. 8 and 6 is currently as high as we can go, period. Doubling a GTX 280's power draw means adding another 90 watts, or another 8-pin, and it just can't be done.
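
For what it's worth, here's a back-of-the-envelope sketch of the die-size and power side of that argument. The die area, TDP, and connector ratings are rough public figures; the scaling is ideal (which real processes never quite hit), and it lands in the same ballpark as the more conservative estimates above:

```python
# Back-of-the-envelope numbers for the argument above. Die area, TDP, and
# connector ratings are rough public figures; the area scaling is ideal,
# while the post above rounds down to ~380 mm^2 per die to stay conservative.

GT200_AREA_65NM_MM2 = 576.0
AREA_FACTOR_55NM = (55.0 / 65.0) ** 2          # ideal 65nm -> 55nm area scaling

per_die_55nm = GT200_AREA_65NM_MM2 * AREA_FACTOR_55NM   # ~412 mm^2
total_silicon = 2 * per_die_55nm                        # ~825 mm^2 on one board

PCIE_CEILING_W = 75 + 75 + 150     # slot + 6-pin + 8-pin = 300 W board limit
GTX280_TDP_W = 236                 # single GTX 280

print(f"one 55nm GT200:  ~{per_die_55nm:.0f} mm^2")
print(f"two on one card: ~{total_silicon:.0f} mm^2 of silicon")
print(f"power ceiling {PCIE_CEILING_W} W vs. ~{2 * GTX280_TDP_W} W for two full-speed GT200s")
```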
 

To be honest, I think that right now, nVidia's people know less about what they're doing than the average enthusiast does. So I think that appeals to nVidia's authority at this point are pretty much moot.


However, RAM prices affect things QUITE a bit. I'd note that both the 3850 and 4850 came with 512MB of GDDR3, a middling amount of what is now a commonplace and cheap kind of memory. Meanwhile, the supposed GTX 350 outright DOUBLES the amount of memory, as well as switching from cheap GDDR3 to expensive GDDR5. GDDR5 is a memory technology in its infancy, meaning that currently the 512Mbit (64MB) chips, the smallest ones, are the only kind that are plentiful, so they're considerably cheaper than 1024Mbit (128MB) chips. A 512-bit memory interface means you're going to have a whole 16 chips of RAM, which is fine for 1024MB; that just uses 512Mbit chips, which are even cheaper in GDDR3 form. However, the 1024Mbit chips that would be required to get a whole 2048MB on a 512-bit interface would be over twice as expensive... on top of the cost increase of going from GDDR3 to GDDR5. You're probably talking about winding up paying 3-5 times as much for the VRAM, which, on a board with that much of it, is going to be a significant portion of the price.
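
To make the chip-count arithmetic explicit, here's a small sketch; the only assumption is the standard 32-bit-wide GDDR device, which is what fixes the count at 16 chips on a 512-bit bus:

```python
# The chip-count arithmetic above, spelled out. GDDR3/GDDR5 devices are 32 bits
# wide, so a 512-bit bus fixes the count at 16 chips; total capacity then
# depends entirely on the per-chip density.

BUS_WIDTH_BITS = 512
CHIP_WIDTH_BITS = 32

chips = BUS_WIDTH_BITS // CHIP_WIDTH_BITS          # 16 chips either way
for density_mbit in (512, 1024):
    total_mb = chips * density_mbit // 8           # Mbit -> MB
    print(f"{chips} x {density_mbit} Mbit chips = {total_mb} MB of VRAM")
```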

So yeah, when it comes to price, comparing that supposed GTX 350 to the GTX 280 by way of the HD 4850 versus the HD 3850 is a very flawed analogy that leaves a lot out of the picture.


Plus, as I noted, there was the part where none of the throughputs were divisible by the clock speeds, or even by any logical number of processing units the chip might have. (Gimme a break; I'm assuming it'll be a whole number, and even a multiple of 8, as nVidia's parts have been since the G70.)


Actually, the way I calculated it, a shift from a 65nm process to a 55nm process would take the chip from 24x24mm (576mm²) to 20.3x20.3mm (412mm²), only a 28.5% saving in die area. That saving could be traded for roughly 39.8% more transistors, which would bring the total up from 1400 million to about 1957 million... To put that into perspective, the extra ~557 million transistors alone are about 81.8% of a G80's transistor count.
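
Here's the same calculation spelled out, assuming perfectly ideal area scaling and the commonly cited counts of 1400 million transistors for GT200 and 681 million for G80; the results land within rounding of the figures quoted above:

```python
# The scaling math from the post above, assuming perfectly ideal area scaling
# (real processes never quite achieve it) and the commonly cited counts of
# 1400 million transistors for GT200 and 681 million for G80.

GT200_AREA_MM2 = 576.0
GT200_TRANSISTORS_M = 1400
G80_TRANSISTORS_M = 681

area_factor = (55.0 / 65.0) ** 2                    # ~0.716
new_area = GT200_AREA_MM2 * area_factor             # ~412 mm^2
headroom = 1 / area_factor - 1                      # ~39.7% more transistors fit
new_count = GT200_TRANSISTORS_M * (1 + headroom)    # ~1955 million
added = new_count - GT200_TRANSISTORS_M             # ~555 million

print(f"die area: {GT200_AREA_MM2:.0f} -> {new_area:.0f} mm^2 "
      f"({(1 - area_factor) * 100:.1f}% saved)")
print(f"transistor budget at the old die size: +{headroom * 100:.1f}% "
      f"({GT200_TRANSISTORS_M} -> {new_count:.0f} million)")
print(f"those extra {added:.0f} million = {added / G80_TRANSISTORS_M * 100:.0f}% of a G80")
```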


Plus, each process shrink does not grant an increase in maximum stable clock speed that matches the decrease in size; otherwise we'd have seen a doubling in clock speed every 18 months outside of the NetBust era. Rather, if memory serves, the typical full-node step only allows transistors to switch around 20-30% faster than before on average while retaining the same level of reliability. As we saw on top-end Radeon cards, we went from a maximum of:
■540 MHz on 130nm (Radeon X850 XT PE), to
■650 MHz on 90nm (Radeon X1950 XTX), to
■743 MHz on 80nm (Radeon HD 2900 XT), to
■775 MHz on 55nm (Radeon HD 3870).
Similarly, on top-of-the-line GeForce cards:
■425 MHz on 130nm (GeForce 6800 Ultra), to
■550 MHz on 110nm (GeForce 7800 GTX 512), to
■612 MHz on 90nm (GeForce 8800 Ultra), to
■675 MHz on 65nm (GeForce 9800 GTX).
Obviously, I left out some processes because they weren't used for high-end GPUs, and I'm not counting the (almost always higher) core speeds found in mid-range GPUs, since the supposed GTX 350 is not a mid-range GPU by any stretch of the imagination.
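
Putting percentages on those steps (using the clocks as listed, with the 8800 Ultra at its stock 612 MHz) shows most of them landing at or below that 20-30% band; a quick sketch:

```python
# Per-step clock gains computed from the clocks listed above (8800 Ultra taken
# at its stock 612 MHz). Most steps land at or below the 20-30% figure.

radeon = [("130nm X850 XT PE", 540), ("90nm X1950 XTX", 650),
          ("80nm HD 2900 XT", 743), ("55nm HD 3870", 775)]
geforce = [("130nm 6800 Ultra", 425), ("110nm 7800 GTX 512", 550),
           ("90nm 8800 Ultra", 612), ("65nm 9800 GTX", 675)]

for family in (radeon, geforce):
    for (prev_name, prev_mhz), (name, mhz) in zip(family, family[1:]):
        print(f"{prev_name} -> {name}: +{(mhz / prev_mhz - 1) * 100:.0f}%")
```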


Yeah, the power ceiling is the other wall: 75 watts from the slot, 75 from a 6-pin connector, and 150 from an 8-pin connector puts the PCI-e board limit at 300 watts, and the GTX 280, with its 236-watt TDP and 6-pin plus 8-pin setup, already comes uncomfortably close to what that arrangement allows. Of course, if you make a card that needs even more than that, then you screw over the 90% of people who have a perfectly good motherboard, CPU (including Core 2 Extremes), and power supply that simply couldn't feed the card.
 
In a way, unless nVidia changes the ratio again, which would mean going back to the old 2.5:1, the SPs do have a direct relation to die size, given the ratios stay constant like you were saying. They changed their ratio with the 200 series, like Marvelous pointed out, and that's where he thinks it's hurt them the most, and he could well be right. Even going back to the old ratio, you're still talking about a number of things: changes all over the arch, and a new shrink, which nVidia doesn't usually do on their high-end/new arch, if ever.
 


lol, that is clearly not the point, son: ATI SPs do not equal Nvidian SPs, period!

You have to take ATI SPs and divide them by 5; then you get the equivalent number of Nvidian SPs...

But with clearly more transistors, the PCB has to be packed with more resistors, capacitors, etc., etc., just to support the processing overhead, so yeah, it's going to cost more. Thanks for humoring me, but clearly SPs are not the only determining spec to look at...
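
A tiny sketch of that divide-by-5 rule of thumb, purely as the loose heuristic it is (it comes from ATI grouping five ALUs per shader unit and counting every ALU as a "stream processor"; it says nothing about real-world performance):

```python
# The divide-by-5 rule of thumb above, as a loose heuristic only: ATI groups
# five ALUs per shader unit and counts every ALU as a "stream processor", so
# dividing by 5 gives a very rough "NVIDIA-style" unit count.

def ati_to_nv_equivalent(ati_sps: int) -> float:
    return ati_sps / 5

for card, sps in (("HD 3850", 320), ("HD 4850", 800)):
    print(f"{card}: {sps} ATI SPs ~= {ati_to_nv_equivalent(sps):.0f} NVIDIA-style SPs")
```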
 



While it would have been entertaining to read your essay (I hate to say it, but I couldn't be bothered, not trying to be rude), it is unlikely that the MSRP would be any higher than $649.99 - otherwise they wouldn't have a hope of keeping up with the 4870x2, and they obviously know this; otherwise the GTX 260/280 wouldn't have dropped in price like they did. Doubling the card's physical statistics certainly makes it more expensive, but it's not going to throw it into the $1000 range anyway. If Nvidia made a GTX 280 X2, it wouldn't use GDDR5; it'd use GDDR3, as there is no rhyme or reason for them to move to GDDR5 when it's questionable whether the architecture even supports it.

Building graphics cards might seem as simple as "throw a bunch of crap on a PCB" to some people, but it's not - it's obviously much more complex than that, and making large, sweeping architectural changes like switching to GDDR5 requires a different type of construction and design.
 
Exactly. I said I was rounding, and being conservative on top of that, all in nVidia's favor. Your numbers look more precise, and I didn't want to research power draw/shrink ratios; good to know, thanks. Yeah, the PCB would have to be reworked for GDDR5, and it'd be redundant as you've already got a 512-bit bus. As OTP says, they're hard enough to balance, and adding all this overkill would not only drive the costs out of control for the market, but most likely bottleneck the card and throw it so far out of balance that we'd all be buying R600s and praising them, heheh.
 
Nvidia doesn't really care about production costs for the top-end unit; they just want the performance crown no matter what. They make most of their money at the sub-$150 level. Losing money and making money both happen at various times; there's no company in history that hasn't taken a loss.

There's too much speculation in this thread, sticking to the past or basing everything on what has happened before. Forget about codenames like GT200b; they're only there to trick people.
 