New GeForce GTX 260 to Feature 216 Shaders - Beats Radeon 4870

AuDioFreaK39

Tuesday, 02 September 2008

The current GeForce GTX 260 has 192 shaders, and it looks like the new one will have 216. The story is quite simple: the GT200 core has ten clusters of 24 shaders each, and instead of the eight clusters enabled on the old GTX 260, Nvidia will enable nine.

The new card should be available at some point in September and it definitely wins over the Radeon 4870. That is the whole point and it looks like the new GTX 260 will end up around $50 more expensive than the current one.

It looks like this is the first "new" product that will try to consolidate Nvidia's roadmap in the war against ATI. ATI came out strong, and this time Nvidia has a better answer than simply renaming its current products: it will slightly alter them and market them as new.

http://www.fudzilla.com/index.php?option=com_content&task=view&id=9209&Itemid=1
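For what it's worth, the cluster arithmetic in the article checks out. A trivial sketch (the 240-shader figure for the full die matches the GTX 280 as commonly reported; the percentage is shader count on paper, not measured performance):

import math  # not strictly needed, just plain arithmetic below

shaders_per_cluster = 24
total_clusters = 10                                   # full GT200 die

old_gtx260 = 8 * shaders_per_cluster                  # 192 shaders
new_gtx260 = 9 * shaders_per_cluster                  # 216 shaders
gtx280     = total_clusters * shaders_per_cluster     # 240 shaders

print(old_gtx260, new_gtx260, gtx280)                 # 192 216 240
print(f"{new_gtx260 / old_gtx260 - 1:.1%}")           # 12.5% more shaders on paper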
 
So, is this run of the GTX 260 on the newer 55nm process, or is it just more of the same with some new features enabled? Why didn't nVidia enable nine clusters with the initial release? At least they could keep the MSRP the same as existing 260s; how much performance will an additional 24 shaders really buy for +/-$50? Whether it wins over the 4870 has yet to be seen. GIVE US THE REVIEWS AND BENCHES!
 
No die shrink, no DX10.1 support, no faster RAM, slightly more shaders, and $50 more.

Doesn't seem worth it at all. I think Nvidia is just wasting time; they should get their team working hard on the GTX 300 series instead: 40nm die shrink, DX10.1 support, etc.
 
I agree with Blackwidow _rsa. Please, for the love of all that is good, make it the GTX 270. The whole point of going over to this new naming scheme was to make things simpler. Simpler my ass! This will be like the 8800GTS all over again. Let's count the versions of the 8800GTS:

8800GTS 640MB G80
8800GTS 320MB G80
8800GTS 640 SSC G80
8800GTS 512 G92

And don't forget that the 9800GTX is an 8800GTS 512 G92 with higher clocks, so then you could add:

9800GTX
9800GX2
9800GTX+

Nvidia, stop this BS! It's annoying, and it misleads customers who aren't extremely computer savvy.
 
DX10.1 is nothing; there's no real difference between DX10 and 10.1. We already know that in a lot of games DX10 doesn't show improved graphics; only the support for SM4.0 increases performance.
 

:sarcastic: Whatever.
 


In this you are mistaken. DX10.1 is what DX10 was supposed to be, except M$ catered to nVidia, who won't/can't meet the spec. DX9 to 10 looks so unimpressive because of this.
 
True.

And another thing: the 4870 is faster than the GTX 280 with 8x AA in most games. I don't know about you, but I start with 8x AA and then go up; I don't even bother with 4x AA or lower. If you like your games looking jaggedy, then yes, the GTX 280 is faster. The GTX 270, if that's what it ends up being called, would still be slower than the GTX 280, so why would I want to pay more to get less? So I can say I have an Nvidia card? I don't care about the name, only about the performance.

Nvidia lost this round as much as it won the last one with the G80 chip.
 
I think they should call it... the GTX 280-minus! That'd at least be different. Then people would know it's not a GTX 260, sorta like a GTX 280, but isn't; sorta like the saying "same thing but different". And then, because they haven't sold anywhere near as many GTX 280s as they thought they would, nor anywhere near the pricing they hoped for, all they'd have to do is add a minus. Easily done. Maybe then they could recoup some of their projected money from your pocket on the cheap. The 8800GS. The 8800GTS 640 with more shaders. I'm just glad for nVidia that most of their numbers are even, otherwise they'd really be in a pickle over how to name this thing.
 

No 🙁

I agree with everyone here: this card should be called the GTX 270.

To the DX10.1 naysayers: better MSAA support with deferred rendering alone is worth the price of admission IMO. That is, if developers would get on board... and stay (Ubisoft :fou: ).
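To illustrate that MSAA point with a toy example (a conceptual Python sketch, not actual graphics API code; the normals and light direction are invented numbers): in a deferred renderer you want to shade each sub-sample and then resolve, which needs per-sample access to the multisampled buffers of the kind people credit DX10.1 for; resolving the G-buffer first and shading the average is what you fall back to otherwise, and it gets silhouette edges wrong.

import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def lambert(normal, light_dir):
    # Simple diffuse term; note lighting is non-linear in the normal.
    return max(0.0, sum(a * b for a, b in zip(normalize(normal), normalize(light_dir))))

light = (0.0, 0.0, 1.0)

# Two MSAA samples inside one edge pixel: a lit, front-facing surface and a
# surface turned away from the light (a silhouette edge).
sample_normals = [(0.0, 0.0, 1.0), (1.0, 0.0, -0.2)]

# Shade each sample, then resolve (average) -- the "correct" order.
shade_then_resolve = sum(lambert(n, light) for n in sample_normals) / len(sample_normals)

# Resolve the G-buffer first, then shade the averaged normal.
avg_normal = tuple(sum(c) / len(sample_normals) for c in zip(*sample_normals))
resolve_then_shade = lambert(avg_normal, light)

print(round(shade_then_resolve, 2))  # 0.5  -- half lit, as the edge should be
print(round(resolve_then_shade, 2))  # 0.62 -- too bright, and the edge still aliases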
 


Have you tried the driver mod for the fans? I have two 4870s in Crossfire; they idle at 50 and 45 degrees and are still virtually inaudible.
 


AA is not that important once you pass a certain resolution threshold. At around 1600x1200 (or 1680x1050 widescreen), more than 2x is really pointless; you are better off spending the processing power on things like HDR.
8x is absolutely useless unless you are playing at 800x600.
 


Not true, it depends on your display too. Depending on your pixel pitch, you may notice a huge difference between 2x AA and 8x AA. This is especially true when using HDTVs. I'm currently using an HDTV, and the more AA I can push the better. Every game I play must have at least 2x AA for it to look halfway decent.
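To put rough numbers on the pixel-pitch point (a quick back-of-the-envelope sketch; the 22" monitor and 40" HDTV figures are just illustrative, not anyone's actual setup):

import math

def ppi(width_px, height_px, diagonal_in):
    # Pixels per inch along the diagonal.
    return math.hypot(width_px, height_px) / diagonal_in

def pixel_pitch_mm(width_px, height_px, diagonal_in):
    # Approximate distance between pixel centers, in millimeters.
    return 25.4 / ppi(width_px, height_px, diagonal_in)

# 22" 1680x1050 desktop monitor vs. a 40" 1080p HDTV.
print(round(ppi(1680, 1050, 22), 1), round(pixel_pitch_mm(1680, 1050, 22), 2))  # ~90.1 PPI, ~0.28 mm
print(round(ppi(1920, 1080, 40), 1), round(pixel_pitch_mm(1920, 1080, 40), 2))  # ~55.1 PPI, ~0.46 mm

The HDTV's pixels are roughly 60% larger, so jaggies stay visible at the same resolution and extra AA keeps paying off.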

Also, as above posters have mentioned, there's a pretty big difference between DX10 and DX10.1. DX10.1 is what was promised to us by Microsoft: better performance plus better graphical features and quality. What we got was unified shaders with some extra lighting features, minus a ton of other things. In Assassin's Creed, we saw a huge performance increase for the HD 3870/3850 cards when using DX10.1. So far that's the only game to support it, because Nvidia was the first to release a DX10 GPU, so Microsoft went along with the way Nvidia was doing it, which isn't true to the original spec.
 


Then that is a monitor issue. Your typical computer LCD does not have this problem; low-cost HDTVs do. A large pixel pitch reduces perceived resolution, so my statement is still correct: past a certain resolution AA is inefficient, and if your monitor can't effectively reach that resolution...



No, there is not a big difference. It is hype, plus a little fact that many ATi supporters keep pressing; it is making mountains out of molehills. The real issue is proper implementation of DX10: as programmers and devs write more efficient code, DX10 results will get better.
 



fud... FUUUUD!!!!

Let's see some ACTUAL results, THEN I'll be impressed. 'Til then, screw you, Nvidia. (Give me good performance at a reasonable price and I'm all over that ****...)
 



The issue with DX10.1, as I see it, is that it has not gained wide support from the game makers yet. And I say "yet" for a reason: given that DX11 is realistically at least two years away from consumers, DX10.1 is/will be the de facto API, so bet on seeing more games supporting DX10.1 within the next year.

The real shame about DX10 & DX10.1 is the poor perception and slowed adoption of Vista.

I haven't seen any real articles effectively comparing DX10 to DX10.1 to say whether one is better than the other. But I do know that Crysis running in Vista using DX10 is far more amazing to look at than Crysis in XP using DX9.0c. I'm actually considering another boot partition and a GPU upgrade just so I can replay Crysis with the eye candy.
 


Exactly. It is not a DX10 vs. DX10.1 issue.
The real issue is DX9 vs. DX10. It is the code. If you understand the difference between DX10 and DX10.1, then you KNOW the difference is not huge, at least not compared to the jump from DX9 to DX10.

Until devs get proficient with DX10, some people will have the wrong perception that DX10.1 is somehow a huge factor. It is not.
 
I bet by the time this is out, ATI will have most or all of its lineup already released... the 4850 X2 and the 4870 XOC edition... the one with water cooling... I'M WAITING FOR THAT

Who knows, they probably even have the 4870 X2 1GB edition ready. I hope I can afford this junk lol
 


After reading people saying things like that, I think Microsoft owes the makers of Crysis some major $$$. What other compelling (gaming-related) reason is there to upgrade? I've already got Vista Home Premium so don't anyone start up with me on that... I'm just asking... what was the killer, must-have game that really required Vista to shine? Crysis is really all I'm hearing.
 
If developers take advantage of tessellation, we could see HUGE performance and quality increases.