Nvidia vs ATI 2010

The longer Nvidia waits, the worse its position will be. Sure, I expect the GT300 cards to be faster in DX10 titles, but what about DX11? Right now it doesn't matter much, but developers are currently working with ATI's DX11 cards since they're the only game in town. As developers optimize games to run faster on ATI's cards, it will put Nvidia's offerings at a disadvantage.

If you recall the 2900 launch (I know, think back), it often sat between the 8800 GTX and the Ultra in DX9 benchmarks, but was all but humiliated in DX10 benchmarks. Early reviews often sided with the 2900, but in newer DX10 games, which were often developed on Nvidia hardware, the 2900 XT did not, and does not, fare as well. This effect lasted through the 3800 series, and only with the 4800 series did ATI fix enough issues and gain enough ground to be truly competitive. Of course, the 4800 series cards are arguably less efficient than their Nvidia counterparts, simply because their arrangement of shaders requires more care from developers to be properly optimized.

So now here is the problem for Nvidia. The longer they delay, the more time developers have to optimize games for ATI's DX11 offering. The more they optimize for ATI's grouping of five shaders, the more it erodes the advantage of Nvidia's independent shader arrangement. If Nvidia's more expensive (in transistors, complexity, and of course dollars) GPUs fail to show a significant advantage in DX11 games, their margins will be smaller not only for this generation but for the next, since it will probably have to be based on the G300. Admitting your competitor's approach was right and completely redesigning a chip is expensive, after all.
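
To make the shader-grouping point concrete, here's a toy C++ sketch of my own (not real shader code, and the one-bundle-per-cycle model is a deliberate simplification). ATI's VLIW5 unit can issue up to five independent operations per cycle from one thread, so dependent chains leave four lanes idle, while a scalar unit doesn't care how the code is arranged:

#include <cstdio>

// Toy model: one ATI-style VLIW5 unit vs one Nvidia-style scalar unit.
// Illustrative assumption: the VLIW5 unit issues up to 5 INDEPENDENT ops
// per cycle; the scalar unit issues exactly 1 op per cycle.
int main() {
    const int ops = 10;

    // Scalar unit: always one op per cycle; dependencies don't matter.
    int scalar_cycles = ops;

    // VLIW5, dependent chain: each op needs the previous result, so the
    // compiler can't pack bundles and only 1 of 5 lanes does work.
    int vliw_dependent_cycles = ops;

    // VLIW5, independent ops (e.g. per-component vector math): the
    // compiler packs 5 ops per bundle and all lanes stay busy.
    int vliw_independent_cycles = (ops + 4) / 5;

    std::printf("scalar unit:             %d cycles\n", scalar_cycles);
    std::printf("VLIW5, dependent chain:  %d cycles (4 lanes idle)\n", vliw_dependent_cycles);
    std::printf("VLIW5, independent ops:  %d cycles (lanes full)\n", vliw_independent_cycles);
    return 0;
}

Same hardware, a 5x swing in throughput, purely from how the code is arranged. That's the optimization burden I mean.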
 
@ sabot, yes, you're probably right. The 8800 GT was one of the most impressive GPUs ever released, but the 58xx series isn't far off in terms of the performance gap.

My point is that it's not typical, and if you remember the launch of Nvidia's 8 series, ATI didn't do a great job of keeping up at the time. That's why I think Nvidia will be in the same boat now.

Even if they do somehow release a card that outperforms it (which seems unlikely to me), they have no chance of offering it at a competitive price.

@ the guy talking about physics: yes, physics is very important. PhysX, however, is not. It's a gimmick that does nothing but take pointless processing tasks that could easily be done on a CPU and lock them to an Nvidia GPU. IMO it's insulting that Nvidia would do something like that. They aren't doing anything revolutionary, just encouraging developers to add features for their platform only. Luckily, we haven't yet missed anything worth experiencing under PhysX.

But back on topic: if Nvidia can release a more powerful card at a competitive price, I'll buy it. But to me that seems impossible, because by the time Fermi is released, a 5870 will likely cost 300 dollars, and there is no way they could start their high-end cards at anywhere near that price.
 
This will most definitely be a peculiar competition.

The HD 5800s have seriously shoved a foot up Nvidia's money hole in terms of performance, not to mention price. So far, Fermi's fake benchmarks and fake pictures are definitely not adding to the appeal of the new generation of Nvidia cards.

It is highly unlikely that ATI will lower prices because of the GF 300 series, as those cards will most likely be priced exorbitantly high, meaning they won't compete with ATI's 5800 series, which in turn means ATI will have no incentive to lower prices.
The only cards that may see lower prices, not because of competition but because of marketability, are the 5700 series, for three reasons:
1. Expansion of market share.
2. Making Nvidia's older models, the ones comparable to the 4800 series, completely irrelevant.
3. Pushing out a wave of low-cost DX11 cards, which will push DX11 game developers to optimize for the 5700/5800 series cards (as stated earlier in this thread).

In all honesty, I don't know how Nvidia is going to catch up.
@ Intel joining the GPU industry: it's highly unlikely they will be able to compete with Nvidia or ATI.
 
The low end and mid-range are a problem.
[chart: GPU die size comparison (diesizeah1.png)]


As seen in the chart, the G92's density still isn't on par with the 3xxx series on up, or the 5xxx series.
So margins favor ATI here, where scrimping is most important.

As for the high end, it's all pretty much been said: Fermi is hot, and an X2 solution won't be doable until a newer node comes.
As for Fermi's high-end performance, it'll land around where we saw the GT200b versus the 4870 and the X2: in between. But this time it'll need more power, and overclocking may be stunted, so expect a less diverse product selection, and varied SKUs are something Nvidia's partners love to offer.

As for DX11, it'll take another hit if Nvidia has no mid and low-end parts, much like the DX10.1 situation.

Performance-wise, for the low and mid-range solutions, if Nvidia is stuck with the G92 yet again, it'll lose against the real 800-SP-and-below versions of ATI's fully DX11-compliant 5xxx cards. It's important that Nvidia get its scaling down so that 3xx-series derivatives cover the mid and low end, both for performance and for DX11. And thus far, there's been zero word on any tape-outs or anything of that nature, while new info on ATI's solutions is leaking as we speak.
 
The gap will widen. Everybody expects Fermi to beat the 5870, but it will be close, with some titles going either way. ATI will be ready to release a 5990 dual-GPU card with top-binned parts by then.
 
that does nothing but take pointless processing tasks that could easily be done on a CPU and lock them to an Nvidia GPU

Not from what I've seen.

http://www.hardocp.com/article/2009/10/19/batman_arkham_asylum_physx_gameplay_review/9

Notice that once you go outside, performance drops horribly. There is no way that's playable. 0-6 FPS? Not a chance. I'll grant you they've done their best to lock it to their cards only, but it can't really run on a CPU.

I'd also grant you that at present it doesn't add much. It's not like you can punch down walls now, or do X in game like you should be able to in real life and have Y happen. Reading that review of Batman: AA, it adds more paper swirling in the wind! And more bricks! Wow! This is hardly game-changing. I would think Eyefinity is a better gimmick; too bad it costs so much. Newegg currently has one HDMI monitor in stock for $229 counting shipping. Add in another $150ish for a DVI monitor, and assuming you already have one monitor, you'd need $380 and up to use it.

JDJ, has that chart been updated? I see the 4770 up there, but not the new 5xxx cards. The number of shaders has been updated, so I would think they would be on the other side of the G92 now. If Nvidia fails to scale the G300 down like they failed with the G200, then I don't see any reason to buy an Nvidia mid/low-end card. What's worse is what will happen to the industry: if Nvidia doesn't release lower-end DX11 parts, I think it will hold back the release of DX11 games.
 
It can't run on a CPU because it was programmed for a GPU. The point is that much better physics processing can be, and HAS been, done on CPUs (Red Faction, for instance).

If the PhysX features had been programmed into a game as a primary intention, and not as an afterthought, all of it could easily be handled by any CPU out there. Havok does much more than PhysX ever has, and that's CPU-only.
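
And just to put a rough number on "easily handled by any CPU": the effects PhysX typically adds (debris, swirling paper) are non-interacting particles, which cost O(N) per frame. A toy C++ sketch of my own, with made-up constants:

#include <cstdio>
#include <vector>

// Toy effects physics: semi-implicit Euler for non-interacting particles
// (debris / paper-in-the-wind style). No collisions between particles,
// so the per-frame cost is a single O(N) pass.
struct Particle { float px, py, pz, vx, vy, vz; };

void step(std::vector<Particle>& ps, float dt) {
    const float gravity = -9.81f;
    const float drag    = 0.98f;  // made-up air-drag factor
    for (Particle& p : ps) {
        p.vy += gravity * dt;
        p.vx *= drag; p.vy *= drag; p.vz *= drag;
        p.px += p.vx * dt; p.py += p.vy * dt; p.pz += p.vz * dt;
        if (p.py < 0.0f) { p.py = 0.0f; p.vy *= -0.4f; }  // floor bounce
    }
}

int main() {
    std::vector<Particle> ps(10000, {0.0f, 5.0f, 0.0f, 1.0f, 0.0f, 0.0f});
    for (int frame = 0; frame < 600; ++frame)  // ten seconds at 60 fps
        step(ps, 1.0f / 60.0f);
    std::printf("particle 0 ended at y = %f\n", ps[0].py);
    return 0;
}

That's 10,000 debris particles at 60 fps, trivial work for one modern core. Anything fancier (cloth, fluids) costs more, but the stuff PhysX actually ships is mostly in this class.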

To me, it seems Nvidia simply encourages devs to add some useless features that can only be used on an Nvidia card. I don't know what incentive they offer, but the devs could easily implement any of it within the standard game code.

And PhysX will never take off because of that. No dev wants to make a game that loses its appeal when played on an ATI card, and vice versa. Nvidia offers them the chance to try it and they do, but Havok can do it all better, faster, cheaper, and without alienating an audience.
 


Red Faction: Guerrilla & Crysis/Warhead/WARS really spit on PhysX.
 
I don't have a later chart, sorry, and though this one can only be a rough outline, it makes the point: the G92b is larger than the 5xxx parts and underperforms as well. That's a double loss, more expensive to make and less performance, which makes for a huge disparity where margins are extremely tight.
Not to mention the features.

Unfortunately I can't remember the size comparison between the 5770 and the 4770. The numbers are out there somewhere, but I was under the impression they're relatively the same, and even allowing for a bit more for the 5xxx series, it won't make up for the lack of performance seen in the G92b offerings.
 
the point is that much better physics processing can be, and HAS been, done on CPUs (Red Faction, for instance)

Ahhh, now I see your point.

Unfortunately I can't remember the size comparison between the 5770 and the 4770. The numbers are out there somewhere, but I was under the impression they're relatively the same

Unless I'm missing something again, how can that be? The only 40nm chip listed in that chart is the RV770, which is the 4770, is it not? If the 5850/70 has twice as many SPs and is also at 40nm, wouldn't it HAVE to be bigger? I would be very willing to bet that the 5850/70 is quite a bit larger than the G92b; pure guess would be just north of the G92. I agree that the performance would be better than the G92b for the most part, and WAY more feature-rich. You can't milk G80/G92 for as long as Nvidia has and not expect to get passed on features.

 


One of the neat features of DX11 is that if the card running on the DX11 system (Win 7, or Vista with the patch) is not DX11 compatible, it automatically scales back the new features without the devs having to change the code. At least, that's what I had read. This makes it easier on the devs creating DX11 games, so they may be more likely to write for it.
 
Meh, rumors and hearsay. I'm not sure I believe this. For one, DX11 adds not only the tessellator but the compute shader as well. DX10.1 at least has the tessellator, but neither 10 nor 10.1 has the compute shader. I don't see how cards that aren't built to handle DX11 code could run it. If anyone has some concrete info on this, I would love to see a link. Previous generations of DX could do this, so it's possible, but there were games that came out that REQUIRED DX9.0c, and if you didn't have it you couldn't play them.
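
The closest concrete thing I know of is D3D11 "feature levels": the DX11 runtime will create a device on a DX10-class card, but it hands back the level the hardware actually supports, and the game still has to branch on that before using tessellation or compute. Which, if anything, supports my skepticism that the code magically adapts. A minimal C++ sketch (D3D11CreateDevice is the real call; the comments are my reading of the fallback behavior):

#include <d3d11.h>

int main() {
    // Ask for the best feature level first; the runtime walks the list
    // and returns the first one the installed card can actually do.
    const D3D_FEATURE_LEVEL wanted[] = {
        D3D_FEATURE_LEVEL_11_0,  // full DX11: tessellation, compute shader 5.0
        D3D_FEATURE_LEVEL_10_1,
        D3D_FEATURE_LEVEL_10_0,  // DX10 card running under the DX11 API
    };
    ID3D11Device*        dev = nullptr;
    ID3D11DeviceContext* ctx = nullptr;
    D3D_FEATURE_LEVEL    got;

    HRESULT hr = D3D11CreateDevice(
        nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, 0,
        wanted, sizeof(wanted) / sizeof(wanted[0]),
        D3D11_SDK_VERSION, &dev, &got, &ctx);

    if (SUCCEEDED(hr) && got < D3D_FEATURE_LEVEL_11_0) {
        // The API runs, but DX11-only features simply aren't there:
        // the game itself has to skip its tessellation/compute paths here.
    }
    if (ctx) ctx->Release();
    if (dev) dev->Release();
    return 0;
}

So older cards can run under the DX11 runtime, but nobody gets tessellation for free on DX10 hardware.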
 


Not exactly Batman or action-oriented, but if a game were purposely designed to use PhysX, you'd actually get something interesting.

http://www.youtube.com/watch?v=zhZd3WU5l38

I guess PhysX needs its own genre to be effective.

As long as it still lives within the confines of a game's graphics options, don't expect it to be game-changing in any way. That's like asking 8xAA to change the way you shoot those nade launchers.
 
How many games on the PhysX game list REALLY support PhysX? Everyone agrees that Batman: AA is the best game out there that supports PhysX (OK, maybe not everyone, but most). If all PhysX does is make explosions bigger and add more things blowing in the wind, I will never be impressed. I want to be able to shoot out the roof above a hiding bad guy so that it crushes him. I want to blow a hole in the wall and get around the door I don't have the key to. If I run out of ammo, I want to be able to pick something up and throw it at someone. (Wait, isn't there a game that allows you to do that? I'm pretty sure Half-Life 2 uses Havok, however.)
 


Completely off-topic, but from a game-design point of view (at least from a very, very novice game-design point of view), if you allow players to wield such an ability, then there's no point incorporating a key and having a door locked.

1. A game designer can leave the door unlocked, or
2. use your blow-a-hole solution above, or
3. distinguish breakable doors from unbreakable doors, a la Dragon Age (obviously breakable doors won't have keys hiding somewhere, as the question answers itself).

Now, having said that, all PhysX would do in this circumstance is make the explosions bigger, so the game-changing argument is null.



Crysis/Warhead does this as well, though not with PhysX.
 


The RV770 is 55nm.


The 4770 was a development of the RV770 at 40nm... a trial of the production process so ATi could do their homework for Evergreen.


Thus, while Cypress is bigger than RV770, it is not as much bigger as you'd expect, since it is on 40nm as opposed to RV770's 55nm.
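
Rough math, assuming ideal scaling (which real processes never quite deliver): 55nm to 40nm shrinks linear dimensions by 40/55 ≈ 0.73, so area per transistor drops to about 0.73^2 ≈ 0.53x. Double the transistor count at ~0.53x the area each and you land near 1.06x the old die size. That's how Cypress can carry twice the shaders of RV770 without being anywhere near twice the die.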


Help any?
 