Nvidia cheating on 3dmark?



Who cares if they cheat in real games! It's more power for the end user!

I think getting these massive scores on the CPU test is BS, because it's made to stress the CPU, not the GPU...
 


No it's not; it's less power to the end user and more fake numbers.
You want to make sure it's displaying what it should be, not simply inflating the benchies.
Either company could have the settings show high quality and DX10 and instead render low quality with DX9/8/7 partial precision or whatever; would that really be giving you more power, or less?
What you want is the OPTION to have questionable optimizations, but also to know about them and have the ability to disable them should you so wish.

This has been discussed many times over during the FX era and the shimmering GF7 era, and the end result is that better performance is good, but questionable fl/optimizations need to be disclosed to the public and be options that can be controlled/disabled by the end user.

 
That's the difference between optimization and floptimization to me.

However an optimization may still be a cheat depending on the guidelines and role of the benchmark.

In the past ATi did a very good job of shader combining and re-ordering in their drivers to improve performance in some games, which is a good thing. However, when they did those same things for 3Dmark (can't remember if 2003 or 2005), MadOnion/Futuremark took offence and said that's not right. Now, it's something that kept the same render result, was not dependent on camera angle or on shader replacement, just reordering a call. Where the standard HLSL or app implementation might say do it ABCD, ATi told the driver to send it through as ADCB, which was more efficient for their design and therefore faster. Everything is rendered correctly, and it still processes everything properly regardless of input. And it's also what they do for games.

So is it a cheat or what? IMO if either the developer or the user feels it's not kosher, then they can call it a cheat (needing to back it up with something more than name calling). However, I would call it a legitimate optimization, not a floptimization, which is why I made up the distinction (since everyone got their panties in a bunch over the word cheating).
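
To picture the kind of reordering I mean, here's a trivial sketch (just illustrative Python, obviously not ATi's actual driver or shader code): four independent operations produce the same result whether they're issued ABCD or ADCB, the hardware just happens to prefer one order over the other.

# Purely illustrative: four independent operations give the same answer
# whether issued in order ABCD or ADCB.
def shade_abcd(x):
    a = x * 2.0   # A
    b = x + 1.0   # B
    c = x - 3.0   # C
    d = x / 4.0   # D
    return a + b + c + d

def shade_adcb(x):
    a = x * 2.0   # A
    d = x / 4.0   # D (issued earlier to suit a hypothetical pipeline)
    c = x - 3.0   # C
    b = x + 1.0   # B
    return a + b + c + d

assert shade_abcd(8.0) == shade_adcb(8.0)   # identical output, different issue order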

I agree with FM on that call; however, I don't understand how they can call that cheating because it reorganizes the shader to flow differently through the GPU, yet when nV makes the PPU calls go through a GPU instead (contrary to the 3Dmark licensing) they said they might approve of that as acceptable. To me it's at least as questionable, and for the benchmark in question it's definitely not doing the same job as the previous setup/task.

A dedicated PPU didn't suddenly become a GPU when not in use, and in a game the GPU will likely never be dedicating all its resources to physics, so the score is artificially inflated in a way that will likely never occur in a real game. If it involved 2 GPUs and one only ever did physics and the other only ever did graphics, then it would replicate the old setup that the benchmark was built for, and it would play that way in a game as well.
However, since a game might let the GPU run a 60/40 graphics/physics workload split but doubtfully 0/100 in any game, the inflated numbers seen in the benchmark won't reflect the improvement you'd get in a game from adding a 10X more powerful actual CPU, nor will they give you the performance of the 2-5X more powerful standalone PPU that it appears to be in that benchmark.
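
To put some made-up numbers on it (pure assumptions for illustration, not measurements): say the GPU is 10X the CPU at raw physics, but a real game only spares it about 40% of its time for physics.

# Made-up numbers, purely for illustration -- not measured data.
cpu_physics = 1.0        # baseline: all physics done on the CPU
gpu_physics = 10.0       # assume the GPU is ~10X the CPU at raw physics
game_gpu_share = 0.4     # assume a real game spares ~40% of the GPU for physics

benchmark_gain = gpu_physics / cpu_physics                  # ~10X: CPU test with 100% of the GPU on physics
in_game_gain = gpu_physics * game_gpu_share / cpu_physics   # ~4X: a 60/40 graphics/physics split

print(benchmark_gain, in_game_gain)   # 10.0 vs 4.0 -- the benchmark overstates the benefit

The exact numbers don't matter; the point is the benchmark is measuring a 0/100 split that no game will ever hand you.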

While it's a little long, I hope that explained it, as my position is more complex than "it's just cheating".

No, my easy statement is: they're Bungholiomarks, and they shouldn't count for anything to anyone other than as a stability test for that one system.

And that's the short and sweet of that one. 😉
 
That's great info from you, thegreatape. I think I can see the situation from another angle now.

Now I think maybe Intel and AMD have something similar, but Nvidia was the "dumb" one to try it and get these results. Oh well, but I think one day it's going to have an influence on game physics using the GPU.
 
Oh yeah, it's definitely the future; the question is how you go about implementing it. And I think that's what's causing the problems all around right now. I think this would be less of an issue if it were someone like M$ controlling the API through something 'more open' like DirectPhysics.
 
You think Intel and AMD have something similar as well, like Nvidia's driver "optimization" for physics? Because there's a lot of speculation saying they've got a "dodgy" version of the driver somewhere in their company's computer database.
 
Intel has Havok FX as part of the Havok IP they bought. AMD has signed on to use Havok since it has both a CPU and a GPU option as well, and IMO that's the more realistic long-term side-by-side future in products like Larrabee and AMD's Fusion. It's a more similar development strategy. However, I would still prefer an open standard not controlled by any of the hardware makers.

However, it's very early in this competition. Most physics right now is done the traditional way, even in titles listed as PhysX titles (GRAW and UT3 both use CPU-based physics as their underlying engines: GRAW = Havok, UT3 = Epic's own engine).

It's easier to develop for the CPU right now, but the power advantage of GPUs is obvious, so the question becomes what developers go with for the 'killer app', and not just demo levels like the scant ones seen in GRAW and UT3.
 
That's not what I meant. I definitely don't want inflated benchmarks, or the "Way it's meant to be played" trash in Assassin's Creed.

But if it's an edge that ATi can't get (as much as I'm loving ATi/AMD right now), like using PhysX in games (again, not the 3DMark CPU test, because that's BS), then yes, I would consider that good cheating: more power to the end user.

I do agree with you, Ape.