Practical differences Nvidia - ATI

Jimmy_Dunn

Distinguished
Apr 7, 2006
So I have half decided to go for an X850XT over a 6800GS for my AGP system, and wondered what, if any, differences there are in day-to-day life with 'the other' type of card 😀 I know Nvidia's ins and outs quite well now, but have never owned a Radeon-based card.

Thanks
 
They're both video cards, so I don't know what differences there could be. The driver controls or settings may have a different "look" to them, but that's basically it.

Cool, that's what I wanted to hear. I was wondering about the frequency of driver updates, ease of install, running temps etc., but it doesn't sound like there are many differences.
 
I noticed a large difference in image quality going from my GeForce4 Ti 4200 to a Radeon 9800 Pro. The image looked a lot sharper at the same resolutions, but that might be because it was a newer-generation card.
 
Well, the 6800 can do HDR, but being a cut-down version of the 6800 Ultra, the performance hit might make that feature redundant in the future. The X850, on the other hand, only supports HDR at a much lower level.

So the X850XT will be a much better bet in terms of raw speed, plus Radeons are renowned for running very well with AA and AF on. With the X850 this is even more apparent at higher resolutions thanks to its fast memory.

Drivers are updated monthly, so you get 12 drivers a year. Sometimes more, due to hotfixes (not actually drivers, but add-ons to the current driver) or extra-content drivers (like Catalyst 5.13 with the AVIVO features).

You'll have to install .NET for the Catalyst Control Center drivers, which have a nice look and are easy to use, but hog system resources and take a while to load. The alternative is the Omega Radeon drivers, which use the "old" Catalyst Control Panel (it was supported by ATi until recently).
 
There are no big differences to speak of. Both companies offer the same features with their cards. One company might claim to be better than the other at specific things, but it's really of little consequence to the consumer. What matters is what you want, and what you can afford. I've always ended up with ATI and I've never regretted it. I've got an AIW X800 XT and it works like a charm. True, I don't have Shader 3.0, but I really don't notice it. I got a very fast card with TV for a great price, so why complain? Some people have loyalty to ATI or Nvidia, so be wary of the answers you get. There is no real reason why you should buy either company's product. I happen to like the little guy, so I like ATI. But that's a selfish reason. So just worry about buying the best card for the price, company be damned.

As far as installation goes, it couldn't be simpler. You'll find the instructions beyond easy for either card.

Image quality will be great with either card you mentioned. The ATI X850XT is faster, so go with that.

One thing to be mindful of is your system memory. If you have too little, you won't notice a big impact on performance from a video card. This is true of a lot of newer games, because they require a lot of system memory just to function. I bumped up from 512MB to 1024MB and things got a lot faster in games. So be sure you have enough memory. It can hurt a lot more than people think. Windows might need a lot less, but games are a different story altogether.
 
Don't forget that the X800s only support SmartShader 2.0 while the 6000 series supports 3.0. Trust me, the difference is very noticeable.
 
THE ONLY visual difference between SM 2.0b and 3.0 cards that I have EVER seen is that SM 3.0 cards have the ability to use HDR in OpenEXR games.
Other than that, SM 3.0 is basically designed for higher efficiency, not visual improvements.

Even SM 2.0 cards can use HDR in games that don't use the OpenEXR method, like Half-Life 2 and Counter-Strike.

Having said that, OpenEXR HDR is very pretty. But it takes a huge performance hit, and a 6800 GS will struggle so much you'd probably have to turn OpenEXR off anyway...
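
For what it's worth, "OpenEXR HDR" here just means rendering into 16-bit floating-point (FP16) surfaces and blending into them, which the GeForce 6 series can do and the X8x0 series can't. Below is a minimal sketch of how a Direct3D 9 game can probe for that support; the adapter format D3DFMT_X8R8G8B8 is just an assumed typical desktop mode, not anything from this thread.

```cpp
// Minimal sketch: query whether the installed card can do OpenEXR-style
// (FP16) HDR under Direct3D 9.
#include <d3d9.h>
#include <cstdio>
#pragma comment(lib, "d3d9.lib")

int main()
{
    IDirect3D9* d3d = Direct3DCreate9(D3D_SDK_VERSION);
    if (!d3d) return 1;

    // Can we create an FP16 (A16B16G16R16F) render target at all?
    HRESULT rt = d3d->CheckDeviceFormat(
        D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, D3DFMT_X8R8G8B8,
        D3DUSAGE_RENDERTARGET, D3DRTYPE_TEXTURE, D3DFMT_A16B16G16R16F);

    // Can we alpha-blend into it? This is the part the SM2.0b-era Radeons
    // lack, which is why they fall back to the non-OpenEXR HDR path.
    HRESULT blend = d3d->CheckDeviceFormat(
        D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, D3DFMT_X8R8G8B8,
        D3DUSAGE_RENDERTARGET | D3DUSAGE_QUERY_POSTPIXELSHADER_BLENDING,
        D3DRTYPE_TEXTURE, D3DFMT_A16B16G16R16F);

    printf("FP16 render target: %s\n", SUCCEEDED(rt) ? "yes" : "no");
    printf("FP16 blending:      %s\n", SUCCEEDED(blend) ? "yes" : "no");

    d3d->Release();
    return 0;
}
```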
 
The ATI Control Panel is bad IMO; they did a horrible job with the panel.


But Nvidia's is cool for me

untitled6uy.jpg
 
According to this, there seems to be a noticeable difference between Pixel Shader 2 and 3.
If you listened to Cleeve, you would have understood. Some screenshots don't mean anything if you don't know how things work.

I listened, but I see it differently to you. It doesn't matter how things work, just what it looks like. The screenshots I linked to show the differences between Pixel Shader 2 and 3, nothing to do with HDR; that's the next page over.

With PixelShader 3, the waves beating against the shore foam with spray, and the sea is transparent. Black & White 2 was optimized for ATI graphics cards, so the effects are also visible with PixelShader 2.1

If the game is optimised there will be no difference; if it isn't, then you don't get the prettier graphics. My understanding (or lack of it) has no bearing on image quality. I'm basing my opinion on that one page I linked to.
 
I find that both ATI and nVidia are good. My old GeForce 2 was good at the time.

My current 9600XT had a field day with games like Half-Life 2.

Nowadays, I find value for money more important than the brand. For example, I would take a 7900GT over an X1800 any day.

But let me tell you one thing I know for sure: nVidia has rather decent Linux driver support, while ATI's Linux driver support is nearly non-existent. And even in Windows, I just like nVidia's drivers better than ATI's. For example, ATI's drivers are not very backward-compatible. I run my 9600XT with Catalyst 4.2. Old, sure, but it's the best for the card. nVidia's drivers are consistently good on almost all their hardware. Even that old GeForce 2.
 
According to this, there seems to be a noticeable difference between Pixel Shader 2 and 3.
If you listened to Cleeve, you would have understood. Some screenshots don't mean anything if you don't know how things work.



This is true. :-D

A lot depends on the implementation.

In some cases there is no perceptible difference. In other cases there are major visual differences.

You would probably want a shader 3.0 card as opposed to a 2.0 or 2.0b card but that doesn't necessarily mean a 2.0 card can't look great too.
 
THE ONLY visual difference between SM 2.0b and 3.0 cards that I have EVER seen is that SM 3.0 cards have the ability to use HDR in OpenEXR games.
Other than that, SM 3.0 is basically designed for higher efficiency, not visual improvements.

Geometry Instancing is an SM3.0 feature that improves performance, and in turn IQ. With programmable shaders we won't be seeing a lot of "feature" improvements in regard to direct output, but the trend will be toward more programmability, more precision, and more power. These in turn will impact visuals. For example, SM3.0 makes 32-bit precision the standard for the shader pipeline, which could be argued to be an improvement to visuals as well.

Also, while SM3.0 is primarily used for performance improvements right now, dynamic branching and flow control do allow for more detailed shaders (ones not practical on SM2.0 hardware) and thus can significantly impact IQ. At some point in the next 12 months we will begin to see this.
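
To make the instancing point concrete, here's a minimal sketch of how the standard DX9 instancing path is driven from the application side. None of this comes from the thread: the buffer names and strides are placeholders, and the vertex declaration and shaders are assumed to exist elsewhere.

```cpp
// Minimal sketch of the standard DX9 geometry-instancing setup (the SM3.0
// path discussed above). Vertex/index buffers, the vertex declaration and
// the shader are assumed to exist already; geomVB, instanceVB, etc. are
// just illustrative names.
#include <d3d9.h>

void DrawInstanced(IDirect3DDevice9* dev,
                   IDirect3DVertexBuffer9* geomVB,      // per-vertex mesh data
                   IDirect3DVertexBuffer9* instanceVB,  // per-instance data (e.g. world matrices)
                   IDirect3DIndexBuffer9*  ib,
                   UINT vertexCount, UINT triCount, UINT instanceCount)
{
    // Stream 0: draw the mesh once per instance.
    dev->SetStreamSourceFreq(0, D3DSTREAMSOURCE_INDEXEDDATA | instanceCount);
    dev->SetStreamSource(0, geomVB, 0, sizeof(float) * 8);      // pos + normal + uv, for example

    // Stream 1: advance the per-instance data once per instance.
    dev->SetStreamSourceFreq(1, D3DSTREAMSOURCE_INSTANCEDATA | 1u);
    dev->SetStreamSource(1, instanceVB, 0, sizeof(float) * 16); // one 4x4 matrix per instance

    dev->SetIndices(ib);

    // One draw call submits every instance; the per-object CPU overhead of
    // issuing hundreds of small draw calls is what instancing removes.
    dev->DrawIndexedPrimitive(D3DPT_TRIANGLELIST, 0, 0, vertexCount, 0, triCount);

    // Reset the stream frequencies so normal (non-instanced) draws work again.
    dev->SetStreamSourceFreq(0, 1);
    dev->SetStreamSourceFreq(1, 1);
}
```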

Of course, the problem is that the GF6 and even GF7 series are not very good at SM3.0 anyhow (I saw a benchmark a couple of weeks ago where a comparable ATI card blew past the NV card in a heavy SM3.0 test by a margin of 10-to-1).

The only place I can see SM3.0 becoming relevant on the GF6 series is in a situation like BF2 where the Radeon 8500s worked because they supported PS1.4 and the faster GeForce Ti 4200s did not work because they only supported PS1.3.


As for the broader scope of the "differences" between ATI and NV:

NV
- Spends a lot of money on ISVs with their TWIWMTBP program
- Strong OpenGL support
- Has a tendency to release new features as "sales points" that are woefully underpowered (32-bit shader precision in NV30; poor SM3.0 performance in GF6) and only gets them up to speed in the next generation, 18-24 months later

ATI
- Closely adheres to the DX spec
- Traditionally poorer OpenGL support, but this has improved
- Tendency to release new features only when they can implement them at adequate speed

Both have tried including new features at times (ATI: 3:1 shader-to-texturing, 3Dc, GI and R2VB in the 9000 series; NV: 32-bit precision in NV30, 64-bit filtering and blending in NV40), but with DX10 it looks like this will slow down somewhat. To a large degree this is a good thing, since typically these features, as great as some of them were, never got used. But then again, they may keep doing it in the hope that MS will pick a feature up in the next DX release.

Anyhow, going forward neither company will be allowed to have capability bits (cap bits) for features they don't support outright. Speed may be another issue, but if the spec calls for vertex texture fetch or whatever, they HAVE to support it.
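
As an aside, this is roughly what the cap-bit dance looks like from the application side today; a minimal D3D9 sketch, with D3DFMT_X8R8G8B8 and R32F chosen purely as illustrative formats. Under DX10 these per-vendor probes are exactly what goes away.

```cpp
// Minimal sketch of the "cap bits" situation described above: under DX9 an
// application has to probe what the driver chooses to expose. Under DX10
// the feature set is fixed by the spec, so these checks disappear.
#include <d3d9.h>
#include <cstdio>

void ReportCaps(IDirect3D9* d3d)
{
    D3DCAPS9 caps = {};
    d3d->GetDeviceCaps(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, &caps);

    printf("Vertex shader version: %u.%u\n",
           (unsigned)D3DSHADER_VERSION_MAJOR(caps.VertexShaderVersion),
           (unsigned)D3DSHADER_VERSION_MINOR(caps.VertexShaderVersion));
    printf("Pixel shader version:  %u.%u\n",
           (unsigned)D3DSHADER_VERSION_MAJOR(caps.PixelShaderVersion),
           (unsigned)D3DSHADER_VERSION_MINOR(caps.PixelShaderVersion));

    // Vertex texture fetch: even on a "3.0" part, you still have to ask
    // which texture formats the vertex shader is actually allowed to sample.
    HRESULT vtf = d3d->CheckDeviceFormat(
        D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, D3DFMT_X8R8G8B8,
        D3DUSAGE_QUERY_VERTEXTEXTURE, D3DRTYPE_TEXTURE, D3DFMT_R32F);
    printf("Vertex texture fetch of R32F: %s\n", SUCCEEDED(vtf) ? "yes" : "no");
}
```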
 
According to this, there seems to be a noticeable difference between Pixel Shader 2 and 3.

Wow, and if you'd bothered to look at the screenshot you'd know two things.

First, the author doesn't know what they're writing about: there is no PS2.1, just 2.0a and 2.0b. Obviously they were talking about 2.0b, based on: "Optimizing a game like Black & White 2 for ATI improves the representation of water with the PixelShader 2.1 as well."

Second, you'd notice that 2.1 and 3 are lumped together because the added support pretty much negates the differences. Even for the performance differences Cleeve is talking about, support for geometry instancing is the biggest factor, and the VS support of the ATi card includes that as well. The shader length difference doesn't make it much faster, because the performance hit is almost as bad as doing a second pass.
 
Thanks all, I'm another step toward ATI armchair expert 8)

Reading all the stickies the other day, I saw a mild warning re: PowerColor, the company I may well source my X850XT from, as it is only £170 (a good price for this country 8O ). Ahhh, stuff it, I'll be able to RMA it if the worst happens.

Nice, looks like a demon-fast card for the money, and it takes up two slots, I like that 😀 Also just remembered that I will be able to run my second stick of 512MB PC3200 in dual channel as well... does that give much of a performance gain?

Cheers.
 
The ATI Control Panel is bad IMO; they did a horrible job with the panel.

But Nvidia's is cool for me.

What's your point with the picture, exactly? That you don't know how to use ATi's drivers, or that it's been so long you don't know the difference anymore?

BTW, I too can access those features from my taskbar (without even adding Tray Tools or other utilities that add more options). So I still don't understand your image, other than that you might like big icons to help you navigate the tough words :roll:

whatsyourpoint7ax.jpg


PS: notice the SmartShader features you don't have access to.
 
Geometry Instancing is an SM3.0 feature that improves performance, and in turn IQ.

Actually, the R3xx and R4xx series support geometry instancing in their VS2.0b. I use(d) it in Oblivion and FartCry quite effectively on the MRX700:

geo5ai.jpg


Also, while SM3.0 is primarily used for improvement in performance now, dynamic branching

But we all know that nV is absolutely horrible at branching, so it doesn't matter much; like I mentioned, the performance gain is negated by the poor implementation.

Anyhow, going forward neither company will be allowed to have capability bits (cap bits) for features they don't support outright. Speed may be another issue, but if the spec calls for vertex texture fetch or whatever, they HAVE to support it.

That's the plan, but what about nV's 'hybrid' G80? It'll be interesting to see whether it's the first to do so or the last not to.
 
Actually, the R3xx and R4xx series support geometry instancing in their VS2.0b. I use(d) it in Oblivion and FartCry quite effectively on the MRX700.

I thought someone might mention that 😉 Since the 9700/9500 launched, ATI has supported GI; unfortunately, they don't expose it in the standard DX path. It's one of those features that has to be specifically enabled outside of DX.
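
Roughly speaking, that out-of-band enable is a FOURCC handshake rather than a normal caps check. The sketch below is from memory of ATI's developer notes of the time, so treat the exact handshake (the 'INST' FOURCC and the render state used) as an assumption rather than gospel.

```cpp
// Rough sketch of the out-of-band instancing enable on R3xx/R4xx parts.
// Details are reconstructed from memory of ATI's developer guidance and
// should be treated as an assumption, not a verified reference.
#include <d3d9.h>

const D3DFORMAT INSTANCING_FOURCC = (D3DFORMAT)MAKEFOURCC('I', 'N', 'S', 'T');

bool EnableAtiInstancing(IDirect3D9* d3d, IDirect3DDevice9* dev)
{
    // The driver advertises the hack as a bogus "surface format".
    HRESULT hr = d3d->CheckDeviceFormat(
        D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, D3DFMT_X8R8G8B8,
        0, D3DRTYPE_SURFACE, INSTANCING_FOURCC);
    if (FAILED(hr))
        return false;   // extension not exposed on this card/driver

    // Writing the FOURCC into an unrelated render state flips the driver
    // into instancing mode; after that the usual SetStreamSourceFreq()
    // calls work much like they do on an SM3.0 card.
    dev->SetRenderState(D3DRS_POINTSIZE, (DWORD)INSTANCING_FOURCC);
    return true;
}
```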

It's kind of like 3Dc. It's there... but will people support it? Some yes, some no.

Anyhow, you got me 😉

But we all know that nV is absolutely horrible at branching, so it doesn't matter much, like I mentioned the performance gain is negated by the poor implementation.

Yeah, I agree. The only "useful" part of SM3.0 may come down the line when a game requires it just to work (a la BF2 and PS1.4). A Radeon 8500 does not play BF2 well, but it does play it. It all depends on how long you have to keep your hardware.

That's the plan, but what about nV's 'Hybrid' G80, it'll be interesting to see if that's the first to do so or the last to not.

My guess is that NV's first DX10 GPU will be like all their other GPUs:

Heavy on checkboxes and light on performance.

My guess is that the "hybrid" reference is to the Geometry Shaders and Vertex Shaders being unified, but with discrete Pixel Shaders.

Personally, I see this as a stinky situation. NV's market tactic (which, btw, works: do well in current/older games, with "checkboxes" for the future) suggests that they are gonna paralyze DX10 just like they did DX9. By not supporting FP24 and having dreadful DX9 SM2.0 performance, they prevented quick uptake of DX9. There is no reason we should have had to wait until 2006(!) for a DX9-standard game.

I see the same thing happening with DX10. They are going to keep the dedicated shaders, and thus their peak Geometry Shader and Vertex Shader performance will be WAAAAAY behind ATI's (did I mention not even in the same ballpark?).

So devs have a choice: make software that runs poorly on NV hardware and alienate 50% of users (not likely), or work within the constraints of NV's architecture as usual. This would put ATI, theoretically, at a disadvantage, because the flexibility of their hardware would be ignored (along with all those really neat vertex-heavy designs and radical uses of Geometry Shaders) and they would be shoehorned into designs that are best suited to fixed-function shaders instead of universal unified shaders.

In a perfect world everyone would buy the best hardware (who knows, that could be the G80, but past NV hardware makes me cautious)... but the fact is NV has a strong dev-rel department with TWIWMTBP. Getting official patches for ATI-specific features has been hard.

Perfect example: Oblivion. ATI was able to implement HDR+MSAA. Their hardware supports it; so where did Bethesda get the idea that their design was INCOMPATIBLE with the hardware?

Easy: Bethesda is a TWIWMTBP partner, and it looks really bad for NV if their competitor gets a number of KEY IQ improvements (HQ AF, HDR+MSAA).

And I don't expect that to change much. The one "ace" ATI has now is that they have a unified shader part in the 360 and with XNA coming online they could see some advantage there. But then again devs seem pretty slow getting a handle on the 360 as well...
 
NV
- Spends a lot of money on ISVs with their TWIWMTBP program

What does this have to do with anything???....it's actually a bonus for nVidia users anyways :lol:

- Has a tendency to release new features as "sales points" that are woefully underpowered (32-bit shader precision in NV30

You're bringing up NV30? :lol: ....I'll see your NV30, and raise you a Quack :lol:
 
NV
- Spends a lot of money on ISVs with their TWIWMTBP program

What does this have to do with anything???....it's actually a bonus for nVidia users anyways :lol:

Subject title: Practical differences between NV and ATI.

So that IS a practical difference between the two companies. NV owners get a bonus knowing their company of choice has better developer relations.

Knowing that NV is gonna spend money to make sure games run great on their hardware and that their features are well supported is a practical difference. Not that ATI doesn't do anything, only that NV is much more aggressive here.

- Has a tendency to release new features as "sales points" that are woefully underpowered (32-bit shader precision in NV30

You're bringing up NV30? :lol: ....I'll see your NV30, and raise you a Quack :lol:

Unfair comparison. Everyone knows a Quack performs better than an NV30 and is quieter!