Take a look around, the next set of drivers are already out and the benchmarks prove the point: a 15-30% increase in all applications.
Can you show me a link? Remember, 61.11 is not the latest driver, nor does it provide an overall 15-30% increase; the performance increases it does provide are quite limited. Also it still has partial precision issues in FartCry.
It wasn't a revision, that's why Nvidia dropped the Extreme Edition; there was no need to clock it that aggressively yet.
More along the lines that they couldn't get the majority of their parts to reach those speeds. The yield of even Ultra-capable parts is limited. Once TSMC gets on board that may change, but not with their current batch.
ATi's 24-bit is a bump the developers have to program around. A very irritating bump.
And NOTHING compared to the floptimisations they've had to add for the nV lines.
This is why Epic dropped anything below 32-bit for Unreal 3, because anything else looks like crap compared to it.
Sure, show me some comparisons that bear this out. Partial precision/FP16/FX12 versus 24-bit is noticeable; 32-bit versus 24-bit, you couldn't notice a difference without Photoshop. Unless you can provide an example that no one else has.
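Just to put rough numbers on it (a back-of-the-envelope sketch in Python, my own arithmetic and not anything pulled from the cards: FP16 has a 10-bit mantissa, ATI's FP24 a 16-bit one, FP32 a 23-bit one, and FX12 is treated here as roughly 10 fractional bits of fixed point):

  # Relative step size of each shader format vs. one step of 8-bit-per-channel output.
  formats = {"FX12": 10, "FP16": 10, "FP24": 16, "FP32": 23}  # mantissa/fraction bits
  output_step = 1.0 / 255.0  # smallest difference an 8-bit display can show

  for name, bits in formats.items():
      step = 2.0 ** -bits
      print(f"{name}: step ~{step:.1e}, headroom vs 8-bit output ~{output_step / step:.0f}x")

FP16 ends up with only about 4x headroom before rounding errors land in the visible range, while FP24 has roughly 257x and FP32 roughly 32900x. That's exactly why 16 vs 24 shows up on screen and 24 vs 32 doesn't without a diff tool.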
I hate it when graphics card companies substitute textures and crap to enhance their speed. It takes my hard work and makes it look like crap (sometimes).
Sounds like FartCry right now. Partial precision on a card that really doesn't need it to be playable.
Carmack doesn't spend months on making 1 shader either.
No, but he spends months on a separate path for the NV3X which he then drops. Explain that, beyond sponsorship. The R3XX series has enough presence to have the same inertia for coders as the FX series did. Everything will run on the R3XX, and by extension the X800, as long as it runs on the NV3X. By the time it makes a difference we'll be talking about DX Next and Longhorn, with the NV5X and R5XX.
Think about Valve and Nvidia: was it truly Nvidia's drivers that caused the abnormalities in the HL2 beta?
Of course not, the drivers did little to improve the situation. They simply added a run-time compiler to change the game to match their product. As long as it provides the same image, that's fine, but it's not a driver bug; it's a lack of a workaround in the drivers. The hardware is still flawed/crippled.
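Conceptually it's something like this (a toy Python sketch of the idea only; every name here is made up, and it's nothing like real driver code):

  # Toy sketch of driver-side shader substitution: the driver recognizes a game's
  # shader by hash and quietly swaps in its own hand-tuned, lower-precision version.
  import hashlib

  # hypothetical table shipped in the driver: known shader hash -> replacement
  REPLACEMENTS = {
      "md5-of-some-known-game-shader": "/* hand-tuned FP16 version of the same shader */",
  }

  def compile_shader(source: str) -> str:
      key = hashlib.md5(source.encode()).hexdigest()
      # If the driver knows this shader, it substitutes its own variant; the game
      # never asked for it, which is why it's a workaround rather than a bug fix.
      return REPLACEMENTS.get(key, source)

Whether that's 'fine' depends entirely on whether the substituted shader produces the same image.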
There was a problem with the Source engine when it came to Nvidia's cards.
No, there wasn't; the problem was with the nV cards. The Source engine worked fine on other cards, even the GF4 series, so you can't blame the engine.
They should have used nV-specific code paths, which they didn't till later.
Why should they? That makes no sense, and Carmack obviously thinks so, as he dropped the NV-specific code path in D3. Release a standard path for all cards; that's why we have DX/OGL standards. If your card can't run it, that's your problem, not the engine's.
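The maintenance argument in a nutshell (another toy Python sketch, made-up names, obviously not engine code):

  def draw(scene, shaders):
      print(f"drawing {scene} with {shaders}")

  def render_standard(scene):
      # one path written to the DX/OGL spec; any compliant card runs it
      draw(scene, shaders="standard_ps20_or_arb2")

  def render_with_vendor_paths(scene, vendor):
      # every extra branch is a set of shaders someone has to write, tune, test and maintain
      if vendor == "NV3X":
          draw(scene, shaders="nv3x_fp16_mixed_precision")
      elif vendor == "R3XX":
          draw(scene, shaders="r3xx_fp24")
      else:
          draw(scene, shaders="standard_ps20_or_arb2")

Every vendor path is extra work that only benefits one chip family, which is the whole point of having a standard path in the first place.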
Please don't buy into the marketing crap,
Which is all you have from both sides right now. The promise that the future will make things look better (hmm, similar to all of nV's FX promises?), or the promise that the other will meet any challenge. Either way it's all talk right now.
if you want a card that will last you and will be a good backup card when they become old, use your head, the X800 won't be able to keep up.
That argument is similar to the one people make about the FX5200 being more 'future-proof' than the GF4ti series.
We'll see how it ends up, but these cards will not become backup cards, as the ones they are replaced with will be PCI-E cards, so your statement isn't even relevant to the 'future', unless it's a backup to another X800/GF6800. Seriously, USE YOUR HEAD is right! There are valid reasons to pick one over the other, but being a 'backup' in some future rig is not one of them.
In reality both are a letdown.
The GF6800 still uses partial precision in games, and still doesn't use low-k in its manufacturing process, thus limiting its speeds and power. The X800 has only half of the promised features we expected. The nV does have SM3.0 support, yet even nV hasn't been able to come up with a demo to expose the advantage, so it's unlikely anyone else will be quick to bring much to games until the Unreal Engine 3 era stuff, which may already be partially in use, but won't expose that aspect until at least mid-2005, when games built AROUND the engine, and not just borrowing bits and pieces, appear. And 3Dc is in ATI's stable, but that may be as 'winning' a feature as TruForm. Hey, it's cool on paper and very nice when it's adopted, but who's adding it to their games?
The X800 may consume less power, but who cares about 350W versus 400W PSUs!?! While the GF6800U 'MAY' struggle with smaller PSUs, the claim of the NEED for 450W is overrated, though the need for both molex connectors is true on the early models. And it's not like one of them isn't drawing extra power, as was the case with the FX5600U vs. R9600P, so who cares? If you're not willing to buy a $60 Antec 450W to go with your $500 graphics card, then why bother getting a card at all.
Seriously, neither card is a clear winner; they both have their features, which everyone seems to think are groundbreaking despite the lack of any real applications to prove it.
You want checkbox features with less speed for current and near-term games, buy the GF6800 series; you want raw power with fewer features for near-term games, then go with the X800. Either way one camp or the other is going to say you made the wrong decision.
But I can promise you one thing: by the time any of these next-generation games come out, I'll be able to buy a new card for less money that will blow the doors off both of them. Guaranteed!
- You need a licence to buy a gun, but they'll sell anyone a stamp (or internet account)! - RED GREEN, GA to SK
