ATI LIED!

But if this is the case, why do you think the ATI PE came out?
Conjecture, seriously. Why did the GF6800Ultra-Extreme-Profound-Excel get announced yet won't be released? ATI waited for nV to play its hand and then clocked within its limits. Why come out of the gate clocked to the max? It undercuts their future 'refresh' products. And like many have said, this is not new; the same thing happened with the R9800Pro-256 and the FX5700. Everything else is just speculation. And AF is not the biggest difference; the AF algo is a minor boost (around 5%), especially compared to FartCry. So the PE has little to do with this.

Run Time Compiler optimizations are done by both cards; that's different than shader replacement.
Be sure to point out that's both nV cards. ATI doesn't do that. Shader replacement is involved in the run-time compiler; that's one of its major benefits, even according to nV. They dumb it down when there will be little IQ difference. It is effective NOW and quite a good optimisation for actual game play (not benchmarking), but initially it was not programmed well and IQ suffered greatly.

Shaders are compiled at run time, so the better the compiler the faster they run.
That is part of it, but some are simply substituted. If you want to nitpick over the term and the way nV uses it, fine, but nV's application of the run-time compiler also includes shader replacement and instruction re-ordering. nV does make some shaders more compatible, and that's all that should happen, but to increase efficiency, replacement (even as low as FX12) occurred as well for the FX series. Call it a runtime compiler or, as nV calls it, a "<i>next-generation automatic shader optimizer</i>"; the result is the same.
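
To make the distinction I'm arguing concrete, here's a minimal sketch of the difference between a run-time compiler and shader replacement. Everything in it (the lookup table, the hashing, the function names) is invented for illustration; it is not anyone's actual driver code.
<pre>
import hashlib

# Hand-tuned substitutes keyed by a hash of the original shader text.
# This is the part developers object to: the shader that runs is not
# the shader they wrote. (Hypothetical entries only.)
HAND_TUNED_REPLACEMENTS = {
    # "3fa9c1...": "hand-written shader, possibly at lower precision ...",
}

def reorder_instructions(shader_text):
    # Stand-in for a generic optimizer: reschedule instructions
    # without changing what the shader computes.
    return shader_text  # no-op placeholder

def driver_compile(shader_text, allow_replacement=True):
    """Return the shader the hardware will actually execute."""
    key = hashlib.md5(shader_text.encode()).hexdigest()
    if allow_replacement and key in HAND_TUNED_REPLACEMENTS:
        # Shader replacement: the app's code is swapped out wholesale.
        return HAND_TUNED_REPLACEMENTS[key]
    # Run-time compilation: the app's own code, just better scheduled.
    return reorder_instructions(shader_text)
</pre>
A pure run-time compiler only ever takes the second path; the argument above is that nV's "optimizer" takes the first one as well.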

You wouldn't call mip map detection application specific?
Prove to me that is going on. When we first looked at this, and when Lars first posted about this on B3D, yes, that's what it LOOKED like, because there were obvious drops in performance when colored MipMaps were turned on; yet when people looked into the detection, nothing, and as ATI explained, it fits that their algo treats the mip levels differently because they are different. That initial theory, based on correlational data that there was a performance difference, has never been proven, nor is it still being promoted as the cause of those differences.

Application detection like FartCry and UT2k3 is obvious, and quite reproducible. This may be a higher level of application detection, but everyone, including LARS, has dropped that initial accusation, as there is no proof.

ATI made the following statement, and so far, it hasn't been shown to be wrong or a lie:
"Our algorithm for image analysis-based texture filtering techniques is patent-pending. It works by determining how different one mipmap level is from the next and then applying the appropriate level of filtering. It only applies this optimization to the typical case – specifically, where the mipmaps are generated using box filtering. Atypical situations, where each mipmap could differ significantly from the previous level, receive no optimizations. This includes extreme cases such as colored mipmap levels, which is why tests based on color mipmap levels show different results. Just to be explicit: there is no application detection going on; this just illustrates the sophistication of the algorithm. "

The biggest issue here is ATI's request that optimizations be disabled and that MipMaps be used to verify quality. Obviously this algo's treatment of mipmaps makes that an inappropriate apples-to-apples comparison. That's an issue to be sure, and ATI's pointing to that in their presentations in Toronto has now come back to haunt them. But at least criticise them for the correct issues. So I'm still waiting for proof of the application detection you speak of.

If anything this is worse.
No it's not, it's just MUCH harder on reviewers because the programs don't do all the work for them. People have already adapted to the new situation by not using MipMaps alone for quality comparisons. Look at Digit-Life's reviews, or EB's filter review, and you see that they've moved to new techniques. [H] has always done actual IQ difference tests, so that will remain too. The people who are most pissed are the ones who actually have to look closer now, when before it was as easy to see the differences as <A HREF="http://www.driverheaven.net/articles/driverIQ/" target="_new">THIS</A>. Lazy!
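
For anyone who hasn't done one, this is roughly what those "actual IQ difference" comparisons boil down to: capture the same frame on both cards, subtract the screenshots per pixel, and amplify what's left so the filtering differences stand out. A bare-bones illustration using Pillow; the file names are placeholders, and none of the sites above necessarily do it exactly this way.
<pre>
from PIL import Image, ImageChops

def iq_diff(shot_a_path, shot_b_path, out_path, amplify=8):
    a = Image.open(shot_a_path).convert("RGB")
    b = Image.open(shot_b_path).convert("RGB")
    diff = ImageChops.difference(a, b)            # per-pixel |a - b|
    # Amplify small differences so subtle filtering changes are visible.
    diff = diff.point(lambda v: min(255, v * amplify))
    diff.save(out_path)
    return diff

# iq_diff("x800_frame100.png", "gf6800_frame100.png", "diff.png")
</pre>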

Because if you want to specifically test AF with any program using color mip maps, it will show that the X800 is using full trilinear, but in game conditions it's not.
Which is exactly why in-game tests are best. I don't disagree that the testing methods are now suspect, but application detection is different, because unlike something tweaked just for tests, this will work in the benchmark and in the game.

Right, ATi has always shown a good face up til now. I was surprised they lied about it 🙁 even after people saw what was going on. I would expect that from nV, not ATi.
The thing to me is what ATI is going to do to address this. I'm not sure if there's anything in the new drivers (I may be able to check with the new AF tester), but as is commonly said in gov't, it's not whether or not justice is done, it's whether the appearance of justice being done is there.

This cuts both ways: they could remove the new algorithm or allow people to disable it, but of course that would simply reinforce the idea that having it there in the first place was wrong. By the same token, being unreceptive to the obvious backlash would make them look like nV last year, when they were unrepentant about their floptimizations.

To me, I like options. Even if I don't know or can't see the differences between two methods, hey, it's nice to be able to turn it off myself, and not need a driver hack or some other pain in the a$$ to accomplish it.

Regardless of what happens, for those whose allegiance is with nV do or die, this one thing to them will equal all of nV's previous sins.

Anyone who's objective can see that wrongs have been done; the real question is what should be done if ATI doesn't remove or allow disabling of their new algo? The very worst case IMO is that we'll finally have real IQ comparisons from reviewers instead of simply raw numbers and the assumption that all is well.

Look at [H]'s review of the R9800 vs the GF4 vs the R8500, and see what a review SHOULD look like. Sure, they are unequal platforms, but as you move towards these types of things, even supposedly 'equal' platforms will perform differently. I'd like apples-to-apples comparisons, but as LARS even mentions, he's never disabled nV's optimisations to begin with, so you never had that from the start.


- You need a licence to buy a gun, but they'll sell anyone a stamp <i>(or internet account)</i> ! - <font color=green>RED </font color=green> <font color=red> GREEN</font color=red> GA to SK :evil:
 
I don't expect anything of them; I only expect them NOT to do certain things, like lie. It doesn't bother you that when questioned by THG staff, they lied?

<font color=blue>Only a place as big as the internet could be home to a hero as big as Crashman!</font color=blue>
<font color=red>Only a place as big as the internet could be home to an ego as large as Crashman's!</font color=red>
 
Well, you are right, most of this is speculation.

Shader replacement is wrong and nV knows that :). I know a lot of developers were pissed when they did it, and they got a lot of backlash from the developer community. Shader compiling is different; I think Carmack went into the differences a bit. The 6800 doesn't do shader replacement, it just downgrades some applications to 16-bit. Well, from what I've seen so far.

If code is written generically in HLSL (DX's shader language), each card has its own structure, so the compilers will of course be different, so the optimizations will also be different. ATi's do optimize, but to a different degree; they don't switch to card-specific extensions.
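
To illustrate that point, here's a toy example of one generic shader being lowered two different ways by two made-up back ends: one fuses a mul+add into a single MAD, the other just hoists the texture fetch to the front. The "architectures" and instruction names are invented for illustration; real shader compilers are obviously far more involved.
<pre>
# One generic shader, written once, as a list of (op, dest, src...) tuples.
GENERIC_SHADER = [
    ("tex",  "t0", "s0", "uv"),        # sample a texture
    ("mul",  "r0", "t0", "c0"),        # scale by a constant
    ("add",  "r0", "r0", "c1"),        # add a bias
]

def compile_for_card_a(ops):
    """Card A: fuse mul+add pairs into a single MAD instruction."""
    out, i = [], 0
    while i < len(ops):
        if (i + 1 < len(ops) and ops[i][0] == "mul" and ops[i+1][0] == "add"
                and ops[i][1] == ops[i+1][2]):
            out.append(("mad", ops[i+1][1], ops[i][2], ops[i][3], ops[i+1][3]))
            i += 2
        else:
            out.append(ops[i])
            i += 1
    return out

def compile_for_card_b(ops):
    """Card B: no MAD, so keep the math as written, but issue texture
    fetches as early as possible to hide their latency."""
    fetches = [op for op in ops if op[0] == "tex"]
    math = [op for op in ops if op[0] != "tex"]
    return fetches + math

print(compile_for_card_a(GENERIC_SHADER))
print(compile_for_card_b(GENERIC_SHADER))
</pre>
Same source, two different instruction streams; that's all "the compilers will be different" means, and neither one involves detecting which game sent the shader.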

If ATi had stated that they were using optimizations for AF, no one could argue about them deceiving the reviewers, and it could have been turned off or on. What they did was good, keeping the IQ very similar while performance increased by 30%. That's awesome. The general feeling is they are trying to hide something. Nvidia didn't have to change; they came out with crap with their FX line and everybody knew it, so their AF cheats were obvious fixer-uppers.

Well, I don't think ATi will change that; if they didn't own up to it yet, it's highly unlikely they will later down the road. Hopefully they do; that would help heal their reputation.