What card? ATI or Nvidia Ultra?

And will AGP be able to keep up with games for at least a few more years? (In the case that I just got an X800.)

GA-7NNXP, XP3000+ Barton
1 gig corsair pc2700, 2 Maxtor 80GB SATA 150
1 Seagate 160 gig ATA 133,
Asus GeForce4 TI4800, Samsung 172X
D. VINE 4 Chassis (modded)
 
I think AGP will be phased out in about 2 years (kind of like what happened to VESA back in the day). The X800 series won't be able to keep up with games using DX9.0c and Shader Model 3.0; from what I've heard, most developers are using Shader Model 3.0 already. It's not much of a change; it took me 2 hours to add in Shader Model 3.0 functionality.

Even OpenGL programmers will use Nvidia extensions for SM 3.0 until OpenGL adopts an SM 3.0 model. Once that happens, it's just a matter of converting from the NV extensions to ARB or GLSL, which is fairly easy too. If shaders are programmed in Cg (a lot of developers use it), it's just a matter of outputting to an ARB profile for the new OpenGL functions.
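
For anyone curious what that "outputting to ARB" step looks like in practice, here's a rough C++ sketch using the Cg runtime (this assumes the standard cg.h / cgGL.h headers and an active OpenGL context; the shader source string and engine glue are up to you):

#include <Cg/cg.h>
#include <Cg/cgGL.h>
#include <cstdio>

// Compile one Cg fragment shader down to the vendor-neutral
// ARB_fragment_program profile so it runs on non-NV cards too.
CGprogram compileToARB(CGcontext ctx, const char* cgSource)
{
    CGprogram prog = cgCreateProgram(ctx, CG_SOURCE, cgSource,
                                     CG_PROFILE_ARBFP1,   // ARB fragment program target
                                     "main", 0);
    if (!prog) {
        std::printf("Cg->ARB compile failed:\n%s\n", cgGetLastListing(ctx));
        return 0;
    }
    cgGLLoadProgram(prog);                  // hand the compiled program to the GL driver
    cgGLEnableProfile(CG_PROFILE_ARBFP1);   // enable the profile before drawing
    cgGLBindProgram(prog);
    return prog;
}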

In the next 2 years, I've already heard of 5 (which should be popular) games coming out that support 3.0 shaders.
 
I just love these threads. To the originator of the thread: as of right now, you can't even purchase a 6800. Nvidia is so hot to beat ATi to the chase that they "paper" released a card that hasn't shown up on any store shelves, demands two connectors from a 700 W power supply, takes the space of two slots, and creates more heat than necessary, but wows everyone with next-generation performance and features. So where's the card? And what about the "Extreme Edition" or whatever they promised to the public, which not even a respected hardware review site can get its hands on, nearly 2 months after the press release? And to think, they've announced a REVISION of a card that hasn't even hit the market yet! I'm insulted by Nvidia's moves lately; they seem childish and short-sighted. What kind of trust can they win with these actions?

In the video card industry, things change drastically every quarter. If you're going to buy a video card, the most important decision is WHEN. If you want one now, look at the options for NOW. If you want to wait 3 months to see what's hot and what's not, then who honestly knows what to expect other than rumors created in threads exactly like these? I honestly have no idea where some of you people get your info, and how you give it credibility when it is so far down the road. Once again, to the originator of the thread: don't let these rumors factor into your choice. Who's to say what the next few months will show, even if you're an insider in the industry?

Let's look at ATi now: two cards available, for purchase even. Single slot, single molex power connector, significantly less power consumption and stress on the PSU, and on-par performance with the 6800, even from the "Pro" version of the card, while the "XT" seems to have a slight advantage, as ATi has had for the past 18 months or so. Both companies have made announcements describing support for PCI Express, but don't put all of your eggs in one basket. Again, if you want the card now, look at your options now, but if you want the card in 3 months, look at your options in 3 months. I'm no fanboy of either company, but I do believe there are unwritten rules to this industry, and lately Nvidia seems to ignore or forget them.

One more point:

EVERY Nvidia fanboy has said this at least once in their lives: "Just wait until the next set of drivers!" Once again, depending on something that may not even happen, let alone positively affect your valuable purchase. Good luck with your decision.
 
My only thought on this is that if you're going to go out and spend $300-500 on a video card, you might as well be able to use it for at least a couple of years to come. If not, it's just a waste of money, and most of us can't afford that. So if you don't take future advancement into consideration in a technology purchase, it seems to me you're taking a big gamble.

GA-7NNXP, XP3000+ Barton
1 gig corsair pc2700, 2 Maxtor 80GB SATA 150
1 Seagate 160 gig ATA 133,
Asus GeForce4 TI4800, Samsung 172X
D. VINE 4 Chassis (modded)
 
I agree with some of your views, but I don't think it's all that bad to try and determine how much longevity you'll get out of a card.

History shows that future-looking features may be worthwhile. For example, 2 years ago people were deciding whether to buy a 9700 PRO or a GeForce Ti 4600; those who decided to make a play for longevity chose the 9700 PRO, and their card is STILL contemporary and usable.

Those who said "Screw Dx9!" and went for a Ti4600 have probably had to upgrade by now.

The problem with the 6800/X800 decision is that, unlike the case of the 9700 PRO being obviously superior to the Ti4600 in forward-looking features, the 6800 and X800 have very different kinds of forward-looking strengths: the 6800 seems to be a more elegant card with future-looking DX9.0c-style programmability, while the X800 seems to be the brute strength answer to shader power.

At this point, it's kind of like comparing an Acura NSX to a Dodge Viper; the NSX is the car you want on the mountain roads, and the Viper is the car you want on the straightaways... but the real race won't start for a year or so, and we don't know if the software developers will stage it on a mountain road or on a dragstrip...

But like I said, if you buy an NSX or a Dodge Viper (or indeed a 6800/X800), chances are you're going to have a lot of fun with it for at least a year, regardless of where the race is held when it happens.

And like the 6800/X800, both choices are more than you need for today's requirements.
It doesn't matter which you choose in the short term, because they both rock... and the future is a question mark, no matter what anyone's predictions (or brand preference) may be.

________________
<b>Radeon <font color=red>9700 PRO</b></font color=red> <i>(o/c 329/337)</i>
<b>AthlonXP <font color=red>~2750+</b></font color=red> <i>(2400+ @ 2208 Mhz)</i>
<b>3dMark03: <font color=red>4,876</b>
 
while the X800 seems to be the brute strength answer to shader power
I humbly disagree. As is clearly demonstrated in <A HREF="http://www.tomshardware.com/graphic/20040603/index.html" target="_new">THIS</A> THG article dated today, X800 benchmarks (and, consequently, the so-called "brute strength") are currently highly suspect while 6800's are not.

1. With brilinear floptimizations OFF (as in "with true trilinear filtering enabled"), the 6800 performs exactly the same as the X800 with its current brilinear floptimizations ON. Forcing the X800 to render with full trilinear would definitely degrade its performance. (A rough sketch of what the brilinear shortcut actually does follows after point 3.)

2. The latest nVIDIA drivers allow the user to choose whether to use optimizations or full trilinear. X800 does not allow you to do this and your benchmarks are always the result of floptimization.

3. The 6800 offers a bunch of advanced features (for example a built-in MPEG decoder/encoder chip) while the X800 does not.
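
To make point 1 concrete, here's a rough C++ sketch of the difference (not any vendor's actual hardware logic, just the idea; bilinear() stands in for a per-mip-level fetch you'd have elsewhere, and the band width is purely illustrative):

#include <cmath>

struct Color { float r, g, b; };

// Stand-in for a bilinear fetch from a single mip level (assumed elsewhere).
Color bilinear(int mipLevel, float u, float v);

Color lerp(const Color& a, const Color& b, float t)
{
    return Color{ a.r + (b.r - a.r) * t,
                  a.g + (b.g - a.g) * t,
                  a.b + (b.b - a.b) * t };
}

// Full trilinear: always blend the two nearest mip levels by the
// fractional LOD, no matter where you are between them.
Color trilinear(float lod, float u, float v)
{
    int   base = static_cast<int>(std::floor(lod));
    float frac = lod - base;
    return lerp(bilinear(base, u, v), bilinear(base + 1, u, v), frac);
}

// "Brilinear"-style optimization: take the cheaper single-level sample
// for most of the range and only blend inside a narrow band around the
// mip transition.
Color brilinear(float lod, float u, float v, float band = 0.15f)
{
    int   base = static_cast<int>(std::floor(lod));
    float frac = lod - base;
    if (frac < 0.5f - band) return bilinear(base, u, v);
    if (frac > 0.5f + band) return bilinear(base + 1, u, v);
    float t = (frac - (0.5f - band)) / (2.0f * band);  // remap the band to 0..1
    return lerp(bilinear(base, u, v), bilinear(base + 1, u, v), t);
}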

So ...

<font color=red>EDIT: (added quotes)</font color=red> The THG article is long, so if you don’t have time to read the whole thing, here are a few quotes from it:

"What is annoying is that ATI did not bother explaining this filtering procedure. Reviewers did not notice this new ATI filtering technique because standard filter quality tests using colored mipmaps don't show this behavior. The driver switches to full trilinear whenever colored mipmaps are used."

"You have to remember that with colored mipmaps, there are still some places that are not colored. ATI driver still optimizes those textures. This means that the delta between full filtering and optimized filtering of the X800 must be even greater than the results with colored mipmaps show."

"It is more significant that in its white papers, ATI leaves one with the impression that full trilinear filtering is being offered, and calls on reviewers to switch off competitors' trilinear filtering - read GeForce 6800 - to ensure a "fairer" comparison. At the same time, they are called on to assess the image quality of the X800 by means of tests with colored mipmaps - but at that precise moment the driver provides a filter quality that is not offered in games."


<font color=green>"The creative powers of English morphology are pathetic compared to what we find in other languages." (Steven Pinker, The Language Instinct)</font color=green> 😎
 
Interesting point Slava, but I'm not sure it's correct in this context because Call of Duty is not exactly a shader-heavy game, and it's the only game benchmarked.

We'll have to see the impact this has on shader-heavy titles like Far Cry, HL2, and Doom3 before we can make a call on this issue methinks. I suspect the X800 will rule the shader arena regardless unless PS 3.0 has a large impact (the inevitability of which is a total question mark at this point, but may very well turn the tide).

I do think it's a given that the brilinear filtering should be an option in the ATi control panel, but like I said, that's a separate issue.

________________
<b>Radeon <font color=red>9700 PRO</b></font color=red> <i>(o/c 329/337)</i>
<b>AthlonXP <font color=red>~2750+</b></font color=red> <i>(2400+ @ 2208 Mhz)</i>
<b>3dMark03: <font color=red>4,876</b>
 
Take a look around: the next set of drivers is already out, and the benchmarks prove the point, a 15-30% increase across applications.

BTW, the 6800 has gone retail. I can go to my local computer shop and order one (we ordered 10 for all our systems last week and they have already shipped; we should have them by the beginning of next week).

Things don't change drastically every quarter; it usually takes a year for new tech to appear.

The 9700 and GeForce FX series lasted a year and a half! They just got more mature.

It wasn't a revision; that's why Nvidia dropped the Extreme Edition. There was no need to clock it that aggressively yet. They fully throttle ATI, or at least match them, even without the higher clocks.

What about the ATI x800 PE?

Oh wow, you're insulted by Nvidia's moves? Lol, that's a good one. It's called good business; you would do the same if you were in it too. ATi does it all the time ("PE"). ATi's marketing crap about 24-bit is absolutely false, just like Nvidia's cinematic rendering with their FX line. Think about how developers feel about Shader Model 3.0: that is the wave of the future. ATi's 24-bit is a bump the developers have to program around. A very irritating bump. I hate it when graphics card companies substitute textures and crap to enhance their speed. It takes my hard work and makes it look like crap (sometimes). All graphics card companies do it; it's not avoidable. I'm just hoping they don't use subpar algorithms that drop IQ.

This is why Epic dropped anything below 32-bit for Unreal Engine 3: anything else looks like crap compared to it.

There are ways of getting displacement maps to run on 24-bit ATi cards. We have done it and it's just as fast as 32-bit. The downside is that it took us a month to create an algorithm that was fast enough to run at the frame rates we needed while keeping the same detail level. Epic didn't spend that much time on their algorithm, and I know for a fact they use 4 texture channels while we use 1. Carmack doesn't spend months on making one shader either. Most big companies don't; they don't care, and they don't have to break into the industry like we do. I do it because it will help our game sell more, and people with older cards won't have problems running it.
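
The post doesn't spell out the studio's algorithm (fair enough), so purely as a generic illustration of the kind of trick involved, here's how you can keep 16 bits of displacement precision on hardware that only gives you 8-bit channels, by splitting the height across two channels and recombining it:

#include <cstdint>

// Split a 16-bit height sample across two 8-bit texture channels
// (e.g. R = coarse bits, G = fine bits).
void packHeight(uint16_t height, uint8_t& hi, uint8_t& lo)
{
    hi = static_cast<uint8_t>(height >> 8);     // coarse 8 bits
    lo = static_cast<uint8_t>(height & 0xFF);   // fine 8 bits
}

// Exact integer round trip on the CPU side.
uint16_t unpackHeight(uint8_t hi, uint8_t lo)
{
    return static_cast<uint16_t>((hi << 8) | lo);
}

// In a shader, with channels sampled as 0..1, the same reconstruction
// is approximately: height = hi + lo / 256.0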

As a developer, I'm finally free of worrying about 24-bit color palettes and digital vibrance. It makes me nervous that what I do is being changed and I can't do anything about it. If something should go horribly wrong and look like crap, it's the developer that gets the flak. Think about Valve and Nvidia: was it truly Nvidia's drivers that caused the abnormalities in the HL2 beta? That's BS; I've worked with both DX and OGL and never came across that problem. There was a problem with the Source engine when it came to Nvidia's cards. They should have used NV-specific code paths, which they didn't do until later. (The FX line was shot to hell anyway; I had to do a lot of tweaking with code to get things to work the way I wanted them to.)

Please don't buy into the marketing crap. If you want a card that will last and will still be a good backup card when it gets old, use your head: the X800 won't be able to keep up.

Where did you hear about 700 watts? I got the 6800 working on a 350 W supply with a 3.2 AMD 64.

Works just fine, no crashes.
 
I humbly disagree. As is clearly demonstrated in THIS THG article dated today, X800 benchmarks (and, consequently, the so-called "brute strength")
Hmm, Lars wasn't talking about pixel shader strength at all, but about texture filtering optimizations, so it's irrelevant to what Cleeve was addressing. ATi still has a lead in shader strength unless they are simple shaders, where nV's 32x0 strength comes into play: with 0 or 1 texture you see the GF6800 ahead (which actually helps a lot in 3DMark03), and with more complex shaders you see the ATIs pull ahead.

Look closely at <A HREF="http://www.digit-life.com/articles2/gffx/nv40-rx800-2-p1.html" target="_new">Digit-Life's review</A> I posted again yesterday. And their <A HREF="http://www.digit-life.com/articles2/radeon/r420.html" target="_new">initial review of the X800</A>. That does not change despite the change in AF algorithm.

are currently highly suspect while 6800's are not.
Right, sure, no question about the GF6800 and its optimizations in games? BS! FartCry alone is a perfect example.

2. The latest nVIDIA drivers allow the user to choose whether to use optimizations or full trilinear. X800 does not allow you to do this and your benchmarks are always the result of floptimization.
Actually, the 61.11 drivers do not allow that option in some games, regardless of what you do, so that's no different. They initially had that option, but it has been removed.

And as Lars even points out, it can be forced for the ATIs in UT2K4 (and UT2K3 and other games) through the program itself. This issue was brought up a LONG time ago and is actually a conflict between UT's method and how the control panel calls for filtering. It has always been available through an ini hack, which anyone who cares enough (enough to mention it) can enable. That's still not the case with the 61.11 drivers for nV, and although the latest (61.42 IIRC) haven't been dissected, I haven't heard anyone mention it as one of their features. And entium pointed to the 61.11 performance increases, yet those are the OLD drivers, not the new ones, and the 61.11 increases in FartCry still come from using partial precision, with visible IQ deterioration.

3. The 6800 offers a bunch of advanced features (for example a built-in MPEG decoder/encoder chip) while the X800 does not.
And the ATIs offer 'a bunch of advanced features' too; big deal. Show their utility, then it matters; until then it's nice hypothetically speaking, just like 3Dc: cool idea, but show me that it works and will be adopted and useful to the average user.

MPEG decoding is irrelevant; the ATIs have the Rage Theater 200, which does the same thing, only better. Encoding is handy, and a feature not available on current ATIs (though it can be done by the host CPU too), but I doubt it's important to more than 1% of owners. The S3 DeltaChrome series has also had this for some time, and yet it didn't make it the premier card out there. Those who really care will get something that does 100% of the encoding on chip, instead of 'up to 60%'. Now, talk about other features, fine, but both companies have their pet projects.

As to the added part in the edit, once again, not relevant to the point of raw power.


- You need a licence to buy a gun, but they'll sell anyone a stamp <i>(or internet account)</i> ! - <font color=green>RED </font color=green> <font color=red> GREEN</font color=red> GA to SK :evil:
 
The old-to-new drivers didn't take care of the IQ and partial precision issues, unfortunately; I still notice it in 61.32 as well. I asked a couple of guys at Nvidia about this and they said it is a Crytek thing. That could be the case; I haven't noticed anything like that in my engine yet, then again my lighting system is completely different. Most of my shaders are done in Cg using Nvidia-specific paths and then spitting them out to ARB for other cards, if possible, or I make two sets of each shader. Crytek uses Cg as well, but I'm sure they are staying away from NV-specific paths, so that might be the cause.
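
The "NV path if possible, otherwise ARB" part is basically a profile check at startup; something along these lines with the Cg runtime (a sketch under that assumption, not the poster's actual code):

#include <Cg/cg.h>
#include <Cg/cgGL.h>

// Pick the best fragment profile the driver actually exposes.
CGprofile pickFragmentProfile()
{
    if (cgGLIsProfileSupported(CG_PROFILE_FP40))   // NV40 / SM3.0-class path
        return CG_PROFILE_FP40;
    if (cgGLIsProfileSupported(CG_PROFILE_FP30))   // NV30 / FX path
        return CG_PROFILE_FP30;
    return CG_PROFILE_ARBFP1;                      // vendor-neutral fallback
}

// Compile the same Cg source against whichever profile was picked.
CGprogram buildFragmentShader(CGcontext ctx, const char* cgSource)
{
    CGprogram prog = cgCreateProgram(ctx, CG_SOURCE, cgSource,
                                     pickFragmentProfile(), "main", 0);
    if (prog)
        cgGLLoadProgram(prog);
    return prog;
}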

Where I do notice the pixelated sawtoothing is with world-space normal maps, which aren't used anymore at all. Even Crytek has dropped them.

In the leaked Doom 3 demo the sawtoothing was there on the FX line; unfortunately I don't have access to the leak anymore to test it on the 6800. But seeing the performance of OGL programs, I would suspect they won't be doing any lower precision in Doom 3.
 
Unless you're a developer, or you play games at VERY high resolution for 16 hours a day, don't spend $500 on a graphics card. If you do, then you're a fool and I suggest you get a life; you could build a decent computer for $500. Use that extra money and get some new clothes and some new sneakers; look at your haircut, for Christ's sake.

Get a 9800Pro, wait a while, then sell it on eBay, or to a friend, and buy one of the new cards when prices have dropped to a sane level. This is the time when the vendors cash in on all of the fanatics who absolutely MUST have it.

Oh and by the way, no offense, sincerely, to anyone here who actually bought one of the new cards. I wish I had one...
 
Take a look around: the next set of drivers is already out, and the benchmarks prove the point, a 15-30% increase across applications.
Can you show me a link? Remember, 61.11 is not the latest driver, nor does it provide an overall 15-30% increase; the performance increases are quite limited. Also, it still has partial precision issues in FartCry.

It wasn't a revision; that's why Nvidia dropped the Extreme Edition. There was no need to clock it that aggressively yet.
More along the lines that they couldn't get the majority of their parts to reach those speeds. The yield of even Ultra capable parts is limited. Once TSMC gets on board that may change, but not with their current batch.

ATi's 24-bit is a bump the developers have to program around. A very irritating bump.
And NOTHING compared to the floptimisations they've had to add for the nV lines.

This is why Epic dropped anything below 32-bit for Unreal Engine 3: anything else looks like crap compared to it.
Sure, show me some comparisons that bear this out. Partial precision (FP16/FX12) versus 24-bit: noticeable. 32-bit versus 24-bit: you couldn't notice a difference without Photoshop. Unless you can provide an example that no one else has.
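
For those asking what the actual numbers are: FP16 keeps a 10-bit mantissa, ATI's FP24 a 16-bit one, and FP32 a 23-bit one, so the smallest step near 1.0 works out like this (quick C++ check):

#include <cmath>
#include <cstdio>

int main()
{
    std::printf("FP16 step ~ %g\n", std::pow(2.0, -10));   // ~0.001
    std::printf("FP24 step ~ %g\n", std::pow(2.0, -16));   // ~0.000015
    std::printf("FP32 step ~ %g\n", std::pow(2.0, -23));   // ~0.00000012
    // FP16 -> FP24 is a 64x precision jump (visible as banding once
    // shaders chain enough math); FP24 -> FP32 is another 128x, which
    // is already far below what an 8-bit-per-channel framebuffer shows.
    return 0;
}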

I hate it when graphics card companies substitute textures and crap to enhance their speed. It takes my hard work and makes it look like crap (sometimes).

Sounds like FartCry right now. Partial precision on a card that really doesn't need it to be playable.

Carmack doesn't spend months on making one shader either.
No, but he spends months on a separate path for the NV3X, which he then drops. Explain that, beyond sponsorship. The R3XX series has enough presence to have the same inertia for coders as the FX series did. Everything will run on the R3XX, and by extension the X800, as long as it runs on the NV3X. By the time it makes a difference we'll be talking about DX Next and Longhorn with the NV5X and R5XX.

Think about Valve and Nvidia: was it truly Nvidia's drivers that caused the abnormalities in the HL2 beta?
Of course not, the drivers did little to improve the situation. They simply added a run-time compiler to change the game to match their product. As long as it provides the same image that's fine, but it's not a driver bug, it's a lack of a workaround in the drivers. The hardware is still flawed/crippled.

There was a problem with the Source engine when it came to Nvidia's cards.
No there wasn't; the problem was with the nV cards. The Source engine worked fine on other cards, even the GF4 series, so you can't blame the engine.

They should have used NV-specific code paths, which they didn't do until later.
Why should they? That makes no sense, and Carmack obviously thinks so, as he dropped the NV-specific code path in D3. Release a standard path for all cards; that's why we have DX/OGL standards. If your card can't run it, that's your problem, not the engine's.

Please don't buy into the marketing crap.
Which is all you have from both sides right now. The promise that the future will make things look better (hmm, similar to all of nV's FX promises?), or the promise that the other will meet any challenge. Either way, it's all talk right now.

If you want a card that will last and will still be a good backup card when it gets old, use your head: the X800 won't be able to keep up.
That argument is similar to the one people make about the FX5200 being more 'future-proof' than the GF4ti series.

We'll see how it ends up, but these cards will not become backup cards, as the ones they are replaced with will be PCI Express cards, so your statement isn't even relevant to the 'future', unless it's a backup to another X800/GF6800. Seriously, USE YOUR HEAD is right! There are valid reasons to pick one over the other, but being a 'backup' in some future rig is not one of them.

In reality both are a let down.

The GF6800 still uses partial precision in games, and still doesn't use low-k in its manufacturing process, thus limiting its speed and power. The X800 has only half of the promised features we expected. The nV card does have SM3.0 support, yet even nV hasn't been able to come up with a demo to expose the advantage, so it's unlikely anyone else will be quick to bring much to games until the Unreal Engine 3-era stuff, which may already be partially in use, but which won't expose that aspect until at least mid-2005, when games built AROUND the engine, and not just borrowing bits and pieces, appear. And 3Dc is in ATI's stable, but that may be as 'winning' a feature as TruForm: hey, it's cool on paper and very nice when it's adopted, but who's adding it to their games?

The X800 may consume less power, but who cares about 350 W versus 400 W PSUs!?! While the GF6800U 'MAY' struggle with smaller PSUs, the claimed NEED for 450 W is overrated, though the need for both molex connectors is true on the early models. But it's not as if only one of them is drawing extra power (unlike the FX5600U vs R9600P case), so who cares; if you're not willing to buy a $60 Antec 450 W to go with your $500 graphics card, then why bother getting a card at all?

Seriously, neither card is a clear winner; they both have features which everyone seems to think are groundbreaking despite the lack of any real applications to prove it.

If you want checkbox features with less speed for current and near-term games, buy the GF6800 series; if you want raw power with fewer features for near-term games, then go with the X800. Either way, one camp or the other is going to say you made the wrong decision.

But I can promise you one thing: by the time any of these next-generation games come out, I'll be able to buy a new card for less money that will blow the doors off both of them. Guaranteed!


- You need a licence to buy a gun, but they'll sell anyone a stamp <i>(or internet account)</i> ! - <font color=green>RED </font color=green> <font color=red> GREEN</font color=red> GA to SK :evil:
 
But seeing the performance of OGL programs, I would suspect they won't be doing any lower precision in Doom 3.
I would hope not. The GF6800 really should be able to take a HUGE lead in D3; it's really the card for D3, not the FX line. I would expect to see major advantages for the GF6800 there. ATI is doing a few things to make themselves D3-ready with their adoption of new extensions for OGL 2.0, but I doubt that will come close to evening the playing field. They did add Hyper-Z and other features to improve over the R9700 series, but D3 should be like Q3, a very nVidia-favourable title, and there will be no need for any optimizations (of any kind). It's the only title in the near future that will really expose a major difference between the two solutions; I don't think HL2 or anything else will come close. There is very little difference on every other level, but I expect D3 to show us something different. That doesn't mean it will be unplayable on one platform (except maybe the FX5200/MX4000 :lol: ), but there's every reason to believe that D3 will be eye-opening, and likely force more people to upgrade their GF3/4/R2XX-based cards than anything else.

But that's still speculation like so much else here.


- You need a licence to buy a gun, but they'll sell anyone a stamp <i>(or internet account)</i> ! - <font color=green>RED </font color=green> <font color=red> GREEN</font color=red> GA to SK :evil:
 
Oh yeah hehe

The 5200 and MX lines were mistakes and never should have been released 😛

That's true, the previous generation of cards should be ample to play D3.

Yeah, D3 will show the differences between the two architectures. Nvidia really made some big strides with their DX performance; hopefully ATi will do the same for OGL. It's not just feature support they have to wonder about, but why their shaders aren't as compatible with OGL as they are with DX. Nvidia has always been good with OGL, but 3Dlabs really doesn't give Nvidia any special attention that I know of that they don't share with ATi.

Plus it's an open standard, lol.

But Nvidia does have UltraShadow II, which is a real nice feature for D3 :)
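
For the curious, the OpenGL face of UltraShadow that usually gets pointed to is the EXT_depth_bounds_test extension; here's a rough sketch of how a stencil-shadow pass would use it (the function pointer is resolved through the usual extension mechanism, and the zMin/zMax light bounds come from your own math):

#include <GL/gl.h>
#include <GL/glext.h>

// Resolved once at startup via wglGetProcAddress / glXGetProcAddress.
extern PFNGLDEPTHBOUNDSEXTPROC glDepthBoundsEXT;

// Skip shadow-volume fragments whose stored depth falls outside the
// range the light can actually reach.
void beginShadowVolumePass(double zMin, double zMax)
{
    glEnable(GL_DEPTH_BOUNDS_TEST_EXT);
    glDepthBoundsEXT(zMin, zMax);
}

void endShadowVolumePass()
{
    glDisable(GL_DEPTH_BOUNDS_TEST_EXT);
}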
 
UltraShadow has been addressed by ATI, and that was the feature I was talking about which the R9700s don't have. The reason for nV's performance benefits is really their extension set in OGL; ATI is behind them by a long shot. I've seen a few examples that may come into play for D3: nV has a proprietary shadow-culling extension that is already in 1.5, and ATI is adding one for 2.0 (there's a conference with Intel next week to finalize the 2.0 additions). We'll see what effect they have. nV is definitely far ahead, and really it's not 3Dlabs alone that decides the extensions. It'll be interesting to see what really happens with a completely new OGL engine, instead of just another rehash of the Q3 engine.

Anywho, off to watch the Flames Pummel the Lightning, enough graphics stuff for a while.


- You need a licence to buy a gun, but they'll sell anyone a stamp <i>(or internet account)</i> ! - <font color=green>RED </font color=green> <font color=red> GREEN</font color=red> GA to SK :evil:
 
The Flames should have pummelled them based on the first 2 periods. Now onto overtime! :)


ABIT IS7, P4 2.6C, 512MB Corsair TwinX PC3200LL, Radeon 9800 Pro, Santa Cruz, TruePower 430watt
 
Aha!

Yeah that's a cup up on top of this post baby. Cup Crazy!



What a brutal penalty in the 3rd!

At least they got it in OT, or else I'd say Sutter was right, the fix is in. :evil:

BTW, I'm not sure if I like the title of Bathroom Fixture, I'd prefer Fester or Finster, or just plain Mr Purple, ala Mr Blond.

Anywhooo, all in all not a bad night. A 6oz Steak Sandwich and Fries, 5 Alexander Keith's (one per period plus post game), Bottomless Potato Skins for the whole table for overtime (I think I had 9-10 from the platter), all topped off with Raspberry Chocolate Truffle cake, and a CC and Ginger. 😎

BTW, go check out www.FlamesGirls.com and crash the latest server (taken down 3 so far), you'll be glad you did once the site recovers from what is sure to be another onslaught tonight. Working right now though.


- You need a licence to buy a gun, but they'll sell anyone a stamp <i>(or internet account)</i> ! - <font color=green>RED </font color=green> <font color=red> GREEN</font color=red> GA to SK :evil:
 
No, 3Dc is just a new type of texture format; we already added it into our engine, actually, just like adding in BMP or TGA file format support. It would be nice if other cards (nVidia *hint*) would support it; it would decrease the number of different textures we would have to use.
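
For anyone wondering why it slots in so easily: 3Dc only stores the X and Y of a tangent-space normal (two well-compressed channels), and Z gets rebuilt from the unit-length constraint, so from the engine's side it really is just another format plus one line of reconstruction math. Roughly (plain C++ purely for illustration):

#include <cmath>
#include <algorithm>

struct Normal { float x, y, z; };

// x and y are the two channels read from the 3Dc texture, already
// remapped from [0,1] to [-1,1]; z comes back from x^2 + y^2 + z^2 = 1.
Normal reconstructNormal(float x, float y)
{
    float zSquared = std::max(0.0f, 1.0f - x * x - y * y);
    return Normal{ x, y, std::sqrt(zSquared) };
}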


The reason we are switching over to the 6800 line is that the artists and programmers have to see what they are doing, lol. With the older line of cards (we were all using ATI) you just can't see Shader Model 3.0 effects.
 
Ummm, all this is great, but... does this mean we should pick a 6800 or X800, or wait till PCIe comes out? Not to be offensive; it's just that we know there is an argument that can go on till we're all blue in the face about which is better. The point is you've got a great advancement and a horrible price, and I've seen "you're not gonna be upset with either purchase" on almost every board on this topic.

Correct me if I'm wrong, but most of us are using video cards that are a year or more old. So if you run out and purchase one of these cards (even if you've got a ton of money to waste), will you still be able to use it in a year's time in a new computer?

GA-7NNXP, XP3000+ Barton
1 gig corsair pc2700, 2 Maxtor 80GB SATA 150
1 Seagate 160 gig ATA 133,
Asus GeForce4 TI4800, Samsung 172X
D. VINE 4 Chassis (modded)
 
