NVIDIA GTX 350

Well, using the G92 arch won't save them either, because even a G92 at 800+ clocks won't catch the X2. And, like we've said, unless it's a totally new arch, even then, with the numbers posted, and even going totally in favor of nVidia to the point of miracles, the power draw won't be acceptable. Now at 40-45nm....
 
The ONLY alternative nVidia has at this point is to take the 55nm G260, make it a sandwich, ramp the clocks, and hope it competes and stays within the power envelope. Not sure it's possible, but they just may be desperate enough.
 

dos1986



No offence John, but we don't know what nVidia or ATI can and can't do. We are just enthusiasts who sometimes hear from a source; we are not engineers.

Are you implying that nVidia cannot come back at this time and take the performance crown from the HD 4870X2 without sandwiching together 260s?

And what do you mean by power envelope?

There is no envelope for enthusiasts with pockets full of cash. These babies may need fuel, and they will fund it/find a way.



 

nottheking


I'm not quite so bothered by your impatience at what I wrote, but rather that you didn't seem to realize that I'm not FrozenGpu. While I argued against points made against them, I wasn't necessarily in agreement with them. :sweat:

I do agree that their follow-up card couldn't possibly cost $1,000US. nVidia knows that they probably wouldn't be able to get even $600US for whatever it is, no matter how powerful it is, even if it costs them more than that to produce.

Mainly, what I was getting at is how preposterous the numbers being presented are; nVidia's hurting right now, and such a GPU would only hurt them worse.


Exactly the point I was alluding to; I wasn't saying that the next card would use GDDR5 and cost more, but rather that the cost was a reason it WOULDN'T use GDDR5. While I have my doubts that much redesign of the PCB would be necessary to accommodate GDDR5 (as far as I know, it has the same pin count; the PCB would just need to take EMI/crosstalk considerations a tad more seriously), it would necessitate a pretty serious redesign of the GPU's memory controller array, which would be nothing to sneeze at.


At the rate nVidia's going, I'd wager that a lot of us enthusiasts might happen to know nVidia better than nVidia's own people. :pt1cable: (I'd note that a number of engineers are among the enthusiasts; while not working as an engineer, I have a bit of education as one.)


No, no amount of cash thrown at buying a PC can allow it to exceed the laws of physics; I'm sorry, but you just overran your own knowledge there.

It remains a fact that you can't break the limits of how much electricity you can feed the parts. One major way nVidia dug a hole for themselves, as I mentioned before (if you'd bothered to read my post; long as it may be, it's still part of the topic), is that the GTX 280 comes dangerously close to the absolute maximum power that can be provided to a video card: it's at some 234 watts, out of a practical maximum of 250 watts. If you crank the speeds higher, it's going to require much more power, close to proportional to the increase in clock speed. Likewise, GDDR5 consumes a lot more power than GDDR3; that's a major factor in why the 4870 consumes some 37% more power than the 4850.

THAT is what is meant by "power envelope." It's simply how much electrical power can be provided.
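
A rough back-of-the-envelope sketch of that headroom in Python, using the figures quoted above; the 602MHz stock GTX 280 core clock and the roughly-proportional clock-to-power scaling are the assumptions, not anything stated here:

# Rough headroom estimate, assuming power scales ~proportionally with core clock.
board_power_w = 234.0    # GTX 280 draw as quoted in this thread
pcie_limit_w = 250.0     # practical cap as quoted in this thread
stock_clock_mhz = 602.0  # GTX 280 stock core clock (assumed baseline for the sketch)

scale = pcie_limit_w / board_power_w   # ~1.07
print(f"Max core clock before hitting the cap: ~{stock_clock_mhz * scale:.0f} MHz "
      f"(+{(scale - 1) * 100:.0f}%)")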
 
Look at what's been written. As I put it, it applies correctly; what don't you get? And yes, nVidia can pull a rabbit out of their butts and come in with a whole new arch on a new process, which they've never done before, which has also been written. But, as I said, using a G260 X2, they'd have a tough time keeping within the power envelope of PCI compliance with the clocks ramped up.
 

dos1986



Of course I don't know much about this, King; I'm not an electrician/engineer, but I fail to see how 250+ watts defies the laws of physics?


 
Guest
It would be cool if true... but this refresh seems to do a little too much... I thought they rushed the GT200 series just to beat the 4870s (which didn't work) and saved the real GT200 (what it was supposed to be) for the refresh... but that seems like a little too much for a refresh... so I call FUD.

EDIT: Actually, I just looked up benchmarks for the GTX 280, and if these specs are true... you might get, say, a 50-60% increase in performance... which (going by AnandTech's GTX 280 numbers) would put this card around 50 fps at 19x12 on high... the 4870 X2 (preview) gets 40... scaling will probably improve to the point where it gets up to 45-50 fps... this might actually be nVidia's answer to the 4870 X2...

I want to see an actual next-gen chip, though... as it says in Tom's article:
http://www.tomshardware.com/news/nvidia-directx-drivers,5821.html
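
A quick sanity check of that estimate in Python; the ~32 fps GTX 280 baseline isn't stated in the post, it's just the figure implied by a 50-60% uplift landing near 50 fps:

# Back-of-the-envelope check of the fps estimate above.
gtx280_fps = 32.0  # assumed baseline implied by the post's own numbers
for uplift in (0.50, 0.60):
    print(f"+{uplift:.0%} uplift: ~{gtx280_fps * (1 + uplift):.0f} fps")
# The 4870 X2 preview shows 40 fps; with better CrossFire scaling, maybe 45-50 fps.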
 

Currently, PCI compliance allows for one 6-pin connector at 75 watts, one 8-pin connector at 100 watts, and the PCIe slot/mobo connection at 75 watts. Add it up and it comes to 250 watts. That's what we're talking about. No engineering miracle needed, other than to come in under those power requirements. I'm doubtful is all, as the G280 already comes within 16 watts of this cap. Even with a die shrink etc., as we've said, it'll be damn near impossible for them to do it. Like I suggested, a G260 X2 at 55nm with ramped clocks may do it, but it'll be so close that it'll be tough. And even so, we don't know the final bench numbers of the 4870 X2, nor its OC abilities, which would allow more headroom, whereas the G260 X2 at 55nm may hit the ceiling on power requirements while OCing.
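
A minimal sketch of that budget math, using the connector figures as quoted in the post:

# Power budget as quoted above: slot + 6-pin + 8-pin.
# (For reference, the PCIe spec actually rates an 8-pin connector at 150 W, which
#  would put the ceiling at 300 W; the thread works with the 250 W figure below.)
slot_w, six_pin_w, eight_pin_w = 75, 75, 100
budget_w = slot_w + six_pin_w + eight_pin_w  # 250 W
gtx280_w = 234                               # quoted GTX 280 draw
print(f"Budget: {budget_w} W, GTX 280 headroom: {budget_w - gtx280_w} W")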
 
Guest
^ I want them to put 4870 Diamond Black Edition chips on the 4870 X2... 900+ core clocks on a 4870 X2 would be ridiculous.
 

nottheking


You don't need to be an engineer or electrician to know that you can overload circuits. If you try to pull more power than a cable is designed for, it won't be too reliable, and it can possibly catch fire, because the stuff will RAPIDLY heat up once the limit is reached. At best, you'd wind up with a card that is CONSTANTLY crashing hard, often forcing the entire computer to reboot without warning. And right now, the most you can safely give to a PCI-express card that has all the cables plugged in is 250 watts.

I don't see how hard this is to understand. I may have used the term "physics," but that doesn't stop it from being simply grade-school stuff, much less what'd be expected for any enthusiast to know without trying.
 

Slobogob


I think nVidia simply overestimated AMD's greed. AMD could've charged between $50 and $100 more on the 4850/4870 and still sold considerable numbers. That still wouldn't have made the GTX series competitive considering their insane launch price, but the price cuts wouldn't have had to be as drastic.
To be honest, it wasn't that difficult to see AMD coming around this way. Since the 2xxx series they stopped targeting the high end and instead went for the mid-range. The 2xxx series failed thanks to a messed-up launch and lousy drivers at the beginning. The 3xxx series did considerably better, again aimed at the mid-range market (against almost the same nVidia line-up), but was drastically cheaper to produce. Following that direction, AMD made a card that is still cheap to produce in great numbers but improved performance again, this time even more significantly. And basically the cards are up against the same old nVidia line-up again. The 9800GTX is nothing more than an 8800GTS, which is nothing more than an 8800GTX with a smaller bus and different clocks. And the GTX series is not much more than a doubled G92 (and not even as good as that!), but sadly nVidia's architecture doesn't scale as well as AMD's.
I'm not really sure the GTX series was aimed at the gaming market at all. These cards would work nicely with CUDA, and the prices nVidia can ask in the server segment are a whole lot higher.


The memory is actually what makes me doubt that nVidia will come out with something like that. As you summed up, the memory is already expensive. What I find even worse is that if they intend to do all that on a single PCB, the layout would be very complex. Fitting all those chips (and cooling them!) on a single card and routing the 512-bit bus is a nightmare. I'm not even sure it makes sense to have so much bandwidth.
On the other hand, they could possibly sell such a card for a few thousand dollars in the server segment. Doing physics calculations or similar math is one of the strong points of these cards. They would compete with server clusters that are a lot more expensive to maintain AND acquire. The question is, is nVidia already at a point where they can afford to build a professional lineup for the server market? And is that market big enough?


I agree. The OP's numbers would only make sense if nVidia totally rearranged their architecture. Doing that in under a few months sounds pretty flaky.




Ouch.

Guess what happens to the quotient if you divide divisor and dividend by 5?
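
(Presumably the point, shown here as a minimal illustration with hypothetical numbers: scaling both terms by the same factor leaves the ratio unchanged.)

# Dividing dividend and divisor by the same factor leaves the quotient unchanged.
dividend, divisor = 1000.0, 250.0  # hypothetical values
assert dividend / divisor == (dividend / 5) / (divisor / 5)  # both are 4.0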

 

dos1986



More complex interface?


 

FrozenGpu



wth?
 

nottheking


That's true... I recall the original MSRP calls were to place the 4850 at $179US and the 4870 at $249US, though they bumped those up a bit in time for release when they realized that they were still going to be butchery even at the raised level. They could've gotten away, probably, with $250US and $400US easy.


That's something I have a bit of a bone to pick with: all the people saying that AMD has gone and focused on the "mid-range" with RV670 and especially RV770. When your single GPU performs within a handful of percentage points of your competitor's flagship, I can't see that as "mid-range." Even with a wider gap, as between the 3870 and 8800 Ultra, it still falls within the same category. Just because a battleship isn't the biggest doesn't mean it's not a battleship... And sometimes the smaller size means that they're faster and deadlier anyway... ;)

But seriously, analogies aside, we didn't see this "mid-range" stuff bandied about when it was nVidia's hardware that was sucking ATi's exhaust fumes... The X800 XT passed the GeForce 6800 Ultra by, tended to retail for less, and was later followed by the X850 XT PE. And then there was nVidia's continued reliance on G71 as R580 crushed it, and again as R580+ dealt another hammering blow. And yes, at 196mm², G71 was considerably smaller, 55.7% of R580's 352mm², though not quite as lopsided as RV770 being but 44.4% the size of GT200. Yet no one ever looked at how nVidia launched a swarm of 7900 and 7950 cards and called them "mid-range."
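
A quick check of those die-size ratios; the ~256mm² RV770 and ~576mm² GT200 figures aren't in the post, but they're the commonly cited sizes that yield the quoted 44.4%:

# Die-size ratios referenced above.
ratios = {
    "G71 / R580":    196.0 / 352.0,  # sizes quoted in the post
    "RV770 / GT200": 256.0 / 576.0,  # commonly cited die sizes (assumption)
}
for name, r in ratios.items():
    print(f"{name}: {r * 100:.1f}%")  # ~55.7% and ~44.4%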


My answer is that I doubt it; right now PCI Express isn't quite the ideal interface for using these cards at mass scale. If they develop, perhaps, a cheap HyperTransport solution, they could make a specialized sort of board that could be massively grouped together to form a GPU-based supercomputer. I'm wondering when they'll actually get around to that sort of thing; at 1.026 petaFLOPS, it would only take 1,710 RV770 GPUs running at the stock 750MHz to equal such a computer, compared to the 12,240 Cell Broadband Engine CPUs combined with 6,562 dual-core Opterons that make up the current world-record supercomputer... And that's while producing a whopping 4 gigaFLOPS per watt, compared to 488 megaFLOPS per watt for the CBEs, the current record-holder for efficiency on the TOP500 chart.
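
The arithmetic behind those figures, as a rough sketch; the ~0.6 TFLOPS sustained per RV770 and ~150W per board aren't stated in the post, they're back-derived from its 1,710-GPU and 4 GFLOPS/W numbers:

# Rough check of the supercomputer comparison above.
target_pflops = 1.026
per_gpu_tflops = 0.6   # assumed sustained throughput per RV770 @ 750 MHz
per_gpu_watts = 150.0  # assumed board power

gpus_needed = target_pflops * 1000 / per_gpu_tflops
print(f"GPUs needed: ~{gpus_needed:.0f}")                                     # ~1710
print(f"Efficiency: ~{per_gpu_tflops * 1000 / per_gpu_watts:.1f} GFLOPS/W")   # ~4.0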


Doing so would require people who want to use the card to have all-new hardware to support it. As it stands, at least you can still plug your GTX 280 into a normal PCI Express slot and hook it up to a normal PCIe-ready ATX PSU. If your card requires proprietary hardware (motherboard interface, etc.) to run, then it really can't be competing... Just like you never hear about PCI-X devices in the desktop world, because standard desktop PCs never came with PCI-X slots.
 

FrozenGpu



That's what I said earlier; did you just ignore my post completely?

Dude, what is your point in even repeating everything that I am saying?
 
I usually don't put a whole lot of weight on him, but he seems to have nailed it: http://www.theinquirer.net/gb/inquirer/news/2008/05/29/nvidia-gt200-sucessor-tapes Quote: "The answer to that is to tape out the GT200b yesterday. It has taped out, and it is a little more than 400mm^2 on a TSMC 55nm process. Given that TSMC tends to price things so that on an equivalent area basis, the new process is marginally cheaper than the old, don't look for much cost saving there. Any decrease in defectivity due to smaller area is almost assuredly going to be balanced out by the learning curve on the new process. Being overly generous, it is still hard to see how the GT200b will cost less than $100 per chip. Don't look for much cost savings there.

The new shrink will be a much better chip though, mainly because they might fix the crippling clock rate problems of the older part. This is most likely not a speed path problem but a heat/power issue. If you get a better perf/watt number through better process tech, you can either keep performance the same and lower net power use, or keep power use the same and raise performance.

Given NV's woeful 933GFLOPS number, you can guess which way they are going to go. This means no saving on heatsinks, no savings on components, and a slightly cheaper die. For consumers, it will likely mean a $50 cheaper board, but no final prices have come my way yet. It will also mean a cheaper and faster board in a few months.

The GT200b will be out in late summer or early fall, instantly obsoleting the GT200. Anyone buying the 65nm version will end up with a lemon, a slow, hot and expensive lemon. Kind of like the 5800. It would suck for NV if word of this got out. Ooops, sorry."

It sure seems like that's what's gone on so far, and maybe it explains this "G350".
 
Guest
^ lol, that was what I was thinking when the GT200 first launched
 

nottheking


It's especially amusing to read over all those old comments from those who dismissed the RV770 out of hand, arguing that the 4870X2 might at best only be competitive on outright performance with the GTX 280... when we've seen that such is clearly not the case. So yeah, I think that this time around, the INQ got something right.


That's what I'd personally been thinking. nVidia could ill afford to actually cram anything more onto the GPU, though they may try to push clock speeds a bit higher. Though looking at things that way, they may be able to actually keep the whole 512-bit memory interface.
 
I'm just not sure how they're going to do it. I'm almost certain that, due to wiring that 512-bit bus, they'll have to make a sandwich. And it becomes too power hungry going X2. I just don't see a way out for them. What am I missing? I'm sure they're going to go for it, but with what?
 

nottheking

I still hold some doubts here. They may WANT to take the X2 route, but I can't imagine them reaching it unless they slash off half of the memory interface, perhaps even IF they go the sandwich-board route. They may actually go ahead and do that. Given how the GTX 260 had considerably less power draw than the GTX 280, they could possibly get it to work with a combination of the die shrink and lowered clock speeds. So they could do it, but I can't really conceive of a way that they could concoct an actual "4870X2 killer," at least this time around.

Or they may give up on the X2 route entirely; this happened with G80, as they pushed it off to G92. Likewise, ATi had gone as far as proclaiming a Radeon HD 2900XT2, only to quietly scrap it, along with the GDDR4-based 2900XTX.
 