Nvidia GeForce GTS 450: Hello GF106, Farewell G92


hannibal

Distinguished
Well, every new card seems to be overpriced at launch. Even the GTS 250 was too expensive at first. These will come down once stocks of the older cards dry up.
But yeah, around $100-110 will be the sweet spot for these by next spring... well, if the ATI 6000 series is not too fast... then the prices will come down faster, or maybe the 6000 series will be more expensive than the 5000 series... most probable in the beginning. But I think stocks of the 5000 series aren't too high, so the price drop could come quicker.

All in all, it's not actually as bad as some say. Many other "new DX" generation cards have been worse compared to the older ones. This one is almost as fast in today's games. Normally the second generation of a new DX level is a better choice than the first generation: more supported games, and better balanced graphics power!
 
[citation][nom]rohitbaran[/nom]Lol! nVidia's current gen mainstream is worse than previous gen mainstream![/citation]

Hugs his two 9800 GT 1GBs and his GTX 460. :) The G92, in my book, is and will remain for a very long time the best bang-for-the-buck GPU that Nvidia ever made. Remember that it took until the GTX 200 series to fully beat the G80, and nothing since has matched the value of the G92. Two and a half years of dominance for the G80, and three years and counting of value from the G92.
 

rohitbaran

Distinguished
But it’s a lot like buying the 2011 Lexus IS when you already own the 2010 model. Are you really going to get that tweaked over new wheels and LED daytime running lights?
Excuse me, but cars last much longer than electronics and there aren't many changes every year.
 
Guest
I've read the review on hardware.fr, where they write their own synthetic tests to assess each aspect of each card (triangle throughput, fill rate, etc.), and what they've shown, once again, is that the Fermi architecture is held back by its fill rate. A wider bus would not help the GTS 450. That's probably why Nvidia chose to ship it with a 128-bit bus instead of a 192-bit bus.

What Nvidia needs is yet another revision of the design, one that fixes the balance between shading power, the number of cores, and the cards' ability to fill triangles with pixels.
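
If it helps to see that argument in numbers, here is a minimal back-of-the-envelope sketch of theoretical fill rate versus memory bandwidth. The ROP count, core clock, and memory speed are approximate reference figures for the GTS 450, and the 192-bit case assumes, purely for illustration, that nothing else about the card changes.

[code]
# Back-of-the-envelope fill rate vs. bandwidth comparison (approximate GTS 450
# reference figures; the 192-bit case is hypothetical and keeps everything else
# constant purely for illustration).

def pixel_fill_rate(rops, core_clock_mhz):
    """Theoretical pixel fill rate in Gpixels/s: ROPs x core clock."""
    return rops * core_clock_mhz / 1000.0

def memory_bandwidth(bus_width_bits, effective_mem_mhz):
    """Theoretical bandwidth in GB/s: bus width in bytes x effective data rate."""
    return (bus_width_bits / 8) * effective_mem_mhz / 1000.0

print("128-bit GTS 450 fill rate :", pixel_fill_rate(16, 783), "Gpixel/s")
print("128-bit GTS 450 bandwidth :", memory_bandwidth(128, 3608), "GB/s")

# A 192-bit bus would raise bandwidth by 50%, but with the same ROPs and clock
# the pixel output stays capped, which is the point the post is making.
print("192-bit variant bandwidth :", memory_bandwidth(192, 3608), "GB/s")
print("192-bit variant fill rate :", pixel_fill_rate(16, 783), "Gpixel/s (unchanged)")
[/code]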
 

porschedream

Distinguished
I'm still running my year-old 4850 CF setup, and honestly I haven't found a single game it can't handle at 1680x1050 and max settings (usually with AA; Crysis is an exception). I guess there's a year or two left in them.
 

tstng

Distinguished
Completely underwhelmed. It's too weak to catch my interest, and the 460 is too expensive to catch my interest. The 5770 still rules.
 

nativeson8803

Distinguished
For those in the market for a slightly lower-end card, this is OK.

I just took my 9800 GTX apart and cleaned it up; now it sits there as a backup since I can't sell it for much. I had it in a PhysX configuration, but that seems to be the biggest gimmick of all. I guess it's retirement for the old guy.

Great card, good memories!
 

flyinfinni

Distinguished
You guys really ought to do a CrossFire/SLI comparison between the GTS 450, the 5750, and the 5770 (maybe throw in the 4850 and GTS 250 for a few charts too). I think it might end up closer than most people expect. I know the 58xx series does not scale particularly well, but I'm currently running 5750 CrossFire, and when I last benchmarked it, it scaled at ~180%+. I would love to see the scaling compared on the same system, since mine won't really be comparable to the one here.
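
For anyone wondering how a figure like that ~180% is usually worked out, here is a quick sketch that computes scaling from single-card and dual-card runs. The frame rates below are made-up placeholders, not benchmark numbers.

[code]
# Compute CrossFire/SLI scaling from single- and dual-card frame rates.
# The FPS values here are placeholders, not measured results.
single_gpu_fps = {"Game A": 42.0, "Game B": 61.0, "Game C": 35.0}
dual_gpu_fps   = {"Game A": 78.0, "Game B": 112.0, "Game C": 60.0}

for game, fps_one in single_gpu_fps.items():
    fps_two = dual_gpu_fps[game]
    scaling = fps_two / fps_one * 100     # dual-card FPS as a % of single-card FPS
    print(f"{game}: {scaling:.0f}% scaling (+{scaling - 100:.0f}% from the second card)")
[/code]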
 

hixbot

Distinguished
Man, the GPU market is so weak these days. I'm not sure why people are excited that the GeForce 8 series is still relevant. When the GPU market was strong, a GPU would have been thoroughly obsolete after this much time. Now advancement is stale because the market and competition are stale. Performance at any given price point is no longer increasing rapidly every year.

We should be seeing GTX 480 performance at the $100 price point by now.

The dominance of consoles and the decline of the PC gaming market have had their effect, as I'm sure has uncompetitive collaboration between AMD and Nvidia.
 
WOW!
Two GTS 450s in SLI almost double the frame rates; that's great scaling.
But...
I paid $85 for a 4870, which, if you check the benchmarks, gives me performance close to a GTX 460 for $175 less! All because I don't demand DirectX 11 excessively early. (Seriously, there are only a few games out with it, AND running games in DX11 causes MASSIVE frame rate drops.) Sorry, I love money and high frame rates too much to sacrifice them to DX11.
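
In the same spirit, here is a rough dollars-per-frame sketch. The prices are the ones quoted in the post ($85 for the 4870, roughly $260 for a GTX 460 by the post's own math), and the average frame rates are placeholders, not benchmark data.

[code]
# Rough price-per-frame comparison using the post's prices; the average FPS
# figures are placeholders rather than benchmark results.
cards = {
    "HD 4870 ($85 used)": {"price": 85,  "avg_fps": 48.0},
    "GTX 460 (~$260)":    {"price": 260, "avg_fps": 62.0},
}

for name, card in cards.items():
    dollars_per_frame = card["price"] / card["avg_fps"]
    print(f"{name}: ${dollars_per_frame:.2f} per average frame per second")
[/code]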
 

porksmuggler

Distinguished
"But remember that the GeForce GTX 460 is really seven-eighths of a GF104. So, the GeForce GTS 450 is in all actuality a bit more than one-half of a GTX 460."

I was told there would be no math on this exam.
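
For anyone who does want the math on this exam: a full GF104 carries 384 CUDA cores, the GTX 460 ships with 336 of them enabled (seven-eighths), and the GTS 450's GF106 has 192, which works out to a bit more than half of a GTX 460. A tiny sketch using those published core counts:

[code]
from fractions import Fraction

# Published CUDA core counts behind the article's "seven-eighths" line.
gf104_full = 384   # fully enabled GF104
gtx_460    = 336   # GF104 with one SM disabled
gts_450    = 192   # GF106 as shipped in the GTS 450

print("GTX 460 / full GF104:", Fraction(gtx_460, gf104_full))           # 7/8
print("GTS 450 / GTX 460   :", Fraction(gts_450, gtx_460),              # 4/7
      "=", round(gts_450 / gtx_460, 3), "-> a bit more than one-half")
[/code]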
 

spirit123

Distinguished
Newegg is selling the HD 4870 for $104.99 after MIR with free shipping. This 450 is just priced way too high. Sell it for $80 and it's a good deal.
 
Guest
It is ironic that this review would point out how well previous-generation GPUs do versus modern GPUs. This is something I have been dwelling on with regard to the Radeon series, in particular the upcoming Southern Islands GPUs. The rumor is that Cayman will deliver performance in excess of the GTX 480 using just 1600 shaders, and Barts will achieve Radeon 5830-level performance with just 800 shaders. The latter is definitely realistic, considering that in DX9/10 games the Radeon HD 4890 is clock-for-clock equivalent to a Radeon HD 5830. I picture the following sequence of events:

Just as ATI designed the cut-down 640-shader 4770 to replace the 4830 while testing the 40nm node, they started a cut-down 1280-shader part, which would ship with one execution pipeline disabled, to replace the 5830 while testing the 32nm node. At the same time they started tweaking the design of Cypress to produce an RV790-style speed bump for the 40nm Cypress design, which would have been the Radeon HD 5890. With the introduction of the Radeon HD 5830 came the realization that the 800-shader 4890 was clock for clock as fast as the 5830 in DX9/10, and a change of plans. Instead of the 32nm replacement for the 5830 being a derivative of Cypress, it would be a 32nm shrink of RV790 with DX11 and double-precision FP bolted on, in many cases through microcode. The resulting part would be faster clock for clock than the 4890 in DX9/10 because of the doubling of ROPs that comes from using the Evergreen memory controllers, enough so that the slight shortfall in DX11 performance relative to the Radeon 5830 would be compensated for.

Then disaster struck. Both TSMC and GlobalFoundries cancelled their 32nm nodes. ATI was left with a 50%+ completed high-end Northern Islands design that would have been about 400mm2 at 32nm and north of 600mm2 if implemented at 40nm. The entire 32nm team got together and realized that implementing some of the NI DX11 enhancements in the 4890-derived core would get it up to Cypress-level DX11 performance, and that doubling this core would create a GTX 480-level replacement for Cypress. Thus Southern Islands was born. With the team working on the RV790 analog of Cypress joining the Southern Islands project to get SI out the door in the same timeframe NI was supposed to launch, Southern Islands benefited from the planned clock rate enhancements as well. The RV790-analog project could be safely cancelled because the anticipated price drops upon the entry of nVidia's GTX 480/470 did not materialize.

With these assumptions we can predict the performance of Barts. At 850 MHz with half its ROPs disabled, Barts would match the DX9/10 performance of the 4890 and the average DX11 performance of the 5830, with a huge increase in tessellation performance. And without compensating for the increased performance from the ROPs operating at a higher clock, Barts would match the performance of the 5850 if it were clocked at 930 MHz with all 32 ROPs active. Adjusting for the faster clock of the ROPs, it would probably be able to achieve 5850-equivalent performance at 900 MHz. The question is, how well would it perform relative to Juniper? The answer depends on how high the clock-for-clock performance of Juniper is relative to RV790. This is hard to judge since Juniper has significantly lower bandwidth than RV790, so it is hard to tell how much of the performance difference is due to lower bandwidth and how much to lower-performance execution units. I went through a convoluted, several-step process to estimate this differential and came up with the RV790 shaders being clock for clock 8% faster than the Juniper shaders, plus or minus 8%. If only there were a more direct way to estimate this ratio.

The Radeon HD 5850 has 1.8 times the number of shaders of the Radeon HD 5770 and operates at 85.3% of the frequency. The GeForce GTX 460 has 1.75 times the number of shaders of the GeForce GTS 450 and operates at 86.2% of the frequency. With such a small difference in ratios, and with the shader ratio being higher for the Radeon while the frequency ratio is lower, we would expect the performance of the GTS 450 relative to the HD 5770 to be almost identical to that of the GTX 460 relative to the HD 5850, if the performance of the 5770's shaders relative to the 4890 were the same as that of the 5830's shaders. The performance of the GTS 450 seems much closer to that of the HD 5750, however, so it appears that, while the shaders of Cypress are clock for clock about 70% of the performance of RV790, the shaders of Juniper are clock for clock 80% of the performance of RV790. I find it questionable that RV790 would be clock for clock 25% faster than Juniper when, even factoring in my margin of error, my estimate was at most 16%. But I am now confident that RV790 is shader for shader at least 16% faster than Juniper, and even with half the memory controllers disabled, Barts could have just 720 shaders active and clocked at 800 MHz while still matching the performance of the Radeon HD 5770. At 700 MHz with 720 active shaders and all memory controllers and ROPs operational, Barts can probably still match the performance of the Radeon HD 5770. At 700 MHz with 640 active shaders and half the memory controllers disabled, Barts can probably match the performance of the Radeon HD 5750.

If the 6770 is the 5850 alternative at $239, and the 6650 the 5830 replacement at $169, ATI could offer a 6630 as the 5770 alternative at $129 and possibly a 6610 as a 5750 alternative at $99.
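
To put numbers on the shader-times-clock comparison two paragraphs up (the 1.8x / 85.3% and 1.75x / 86.2% ratios), here is a small sketch using the standard reference shader counts and core clocks for those four cards; treat it as arithmetic on the post's own reasoning rather than a performance claim.

[code]
# Shader count x core clock as a naive throughput proxy, reproducing the
# ratio comparison in the post (reference specs; not measured performance).
cards = {
    "HD 5770": (800,  850),   # (shaders, core clock in MHz)
    "HD 5850": (1440, 725),
    "GTS 450": (192,  783),
    "GTX 460": (336,  675),
}

def raw_throughput(name):
    shaders, clock_mhz = cards[name]
    return shaders * clock_mhz

radeon_ratio  = raw_throughput("HD 5850") / raw_throughput("HD 5770")   # 1.8  * 0.853 ~ 1.53
geforce_ratio = raw_throughput("GTX 460") / raw_throughput("GTS 450")   # 1.75 * 0.862 ~ 1.51

print(f"HD 5850 vs HD 5770: {radeon_ratio:.2f}x raw shader throughput")
print(f"GTX 460 vs GTS 450: {geforce_ratio:.2f}x raw shader throughput")
# The near-identical ratios are why the post expects the 450-vs-5770 gap to
# mirror the 460-vs-5850 gap if per-shader efficiency were equal.
[/code]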
 

shin0bi272

Distinguished
[citation][nom]TheCapulet[/nom]The only use I see this card holding is PhysX for the ATI guys.[/citation]

Except Nvidia disabled that ability in their drivers. If the driver detects an ATI card in the system, it disables GPU PhysX.
 

shin0bi272

Distinguished
[citation][nom]tstng[/nom]Completely underwhelmed. It's too weak to catch my interest, and the 460 is too expensive to catch my interest. The 5770 still rules.[/citation]

You should do what I do: when a new card comes out and it's time to upgrade, wait six months, then buy the second card from the top. I've spent $350 on my graphics cards every time doing it that way, and I don't have to upgrade for three or four years between cards. See, if you go top-shelf, you get a longer lifespan than if you spend $150 every year.
 

OdeonDeathstalker

Distinguished
I just recently picked up a GTS 250 with plans to go to SLI and maybe 3-way SLI as soon as I can (money is crazy-tight), and I'd REALLY like to see if the generally believed benefits are truly a 2x and 3.5x performance increase with SLI and 3-way SLI.

An article pitting the same cards shown here against each other in dual, triple, and quad CF/SLI (where possible) would be of great benefit to many people and would show where each card's multi-GPU value really lies.
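
Until such an article exists, here is a rough way to frame what it would measure: project multi-card frame rates from a single-card baseline under assumed scaling efficiencies. The baseline FPS and efficiency figures below are placeholders, not measurements.

[code]
# Project 2-way and 3-way SLI frame rates from a single-card baseline under
# assumed per-card scaling efficiencies. All numbers are placeholders.
baseline_fps = 40.0                        # hypothetical single GTS 250 average
scaling_efficiency = {2: 0.85, 3: 0.65}    # assumed fraction of ideal gain per extra card

for num_cards, eff in scaling_efficiency.items():
    ideal = num_cards * baseline_fps                          # perfect N-times scaling
    projected = baseline_fps * (1 + (num_cards - 1) * eff)    # each extra card adds eff x baseline
    print(f"{num_cards}-way SLI: ideal {ideal:.0f} fps, "
          f"projected {projected:.0f} fps ({projected / baseline_fps:.2f}x)")
[/code]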
 