GF7800GTXs automatically overclocking themselves?

Interesting blurb @ Guru3D about a finding by Unwinder: the G70 boosts its clocks by 40MHz (a geometry clock delta) when running 3D apps, even on overclocked rigs.

<A HREF="http://www.guru3d.com/newsitem.php?id=2827" target="_new">http://www.guru3d.com/newsitem.php?id=2827</A>

Very strange. You can no longer do a clock-for-clock comparison without measuring the 'under load' clock rate.


- You need a licence to buy a gun, but they'll sell anyone a stamp <i>(or internet account)</i> ! - <A HREF="http://www.redgreen.com/" target="_new"><font color=green>RED </font color=green> <font color=red> GREEN</font color=red></A> GA to SK :evil:
 
Wow... great way to cheat in benchmarks. Does this mean if we manually overclock to, let's say, 460MHz, the card will hit 500MHz during 3D? And what about BFG's card? They say it's at 460MHz stock, so does that mean it'll hit 500MHz stably during 3D at stock?
 
That's exactly what I was thinking: a way to skew the numbers in their favor.

"Like a scrotum, there it is in a nutshell."
<font color=red>Roll Tide!</font color=red>
<A HREF="http://www.cameronwilliamson.com" target="_new">-={Apathetic As<i></i>shole.}=-</A>
 
I noticed the higher clocks from the very first 3DMark I ran. I thought that I just had my card's stock settings confused with one of the others. I can confirm that they do run faster than what you set them to run. Mine doesn't do it on every run, but at least half the time.

One thing that could play a role in this would be the PEG LINK setting. PCI Express brings with it new BIOS options that haven't been given any time in the reviews. I've tried to start this discussion 3 or 4 times, but so far nobody has been interested.

ASUS P5WD2 Premium
Intel 3.73 EE @ 5.6Ghz
XMS2 DDR2 @ 1180Mhz

<A HREF="http://valid.x86-secret.com/records.php?PHPSESSID=792e8f49d5d9b8a4d1ad6f40ca029756" target="_new">#2 CPUZ</A>
SuperPI 25secs
 
After checking some of my notes, it sounds more and more like PEG LINK could be the cause. From ASUS:
PEG LINK MODE set to AUTO allows the mobo to automatically adjust the PCI Express graphics link mode to the correct frequency based on the system configuration.
I ran some tests at various settings with my X800XL, and when the PEG LINK is changed, the card does OC itself in certain areas. ASUS has done an outstanding job with their PCI Express mobos. Most OCers that I know can't stand to leave anything in BIOS at AUTO, but ASUS has really changed the way that software interacts with BIOS settings. I've hinted in the past that some of the newest OC software has come a long way, and that the reason more people aren't able to get their OCs any higher is that they refuse to learn the ins and outs of how it works. It wasn't that long ago that TOM'S LN2 project was a really big deal. Have chips improved since then? Probably, but not so much that 5GHz is an easy OC. Who would have thought that it would be done with liquid cooling? There are still many who don't think it's possible with liquid. It's not only possible, but probable if you put the time into preparing the setup. I attribute a lot of it being possible to new and improved software.



ASUS P5WD2 Premium
Intel 3.73 EE @ 5.6Ghz
XMS2 DDR2 @ 1180Mhz

<A HREF="http://valid.x86-secret.com/records.php?PHPSESSID=792e8f49d5d9b8a4d1ad6f40ca029756" target="_new">#2 CPUZ</A>
SuperPI 25secs
 
This is exactly why 3DMark reports them as 40MHz higher.

BTW, <b>everybody pay attention</b>: this is not a cheat or a scam or whatever. nV knows that certain parts of their core can run faster than others while providing a benefit, so WHY NOT DO IT? Every 7800GTX does it, no matter what the clocks are set at; it's something that nV realized could run faster than the rest of the core, so they run it faster. I think it's for the better. Things should get even better when you have control over all 3 parts of the core, IMO.
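Just to make the arithmetic concrete, here's a minimal sketch, assuming the 40MHz geometry delta is a fixed offset on top of whatever core clock you set (which is what the Guru3D piece describes; not confirmed by nV):

<pre>
# Sketch: the 40MHz geometry delta as a fixed offset (assumption, not nV-confirmed)
GEOMETRY_DELTA_MHZ = 40

def geometry_clock(core_clock_mhz):
    """Clock the geometry domain would run at in 3D mode."""
    return core_clock_mhz + GEOMETRY_DELTA_MHZ

for core in (430, 460):
    print("core %dMHz -> geometry %dMHz" % (core, geometry_clock(core)))
# core 430MHz -> geometry 470MHz  (stock)
# core 460MHz -> geometry 500MHz  (the BFG-stock case asked about above)
</pre>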

Mozz, your 05 score was high because you have an OC'd processor, and your 03 score was high because you have an Intel, which is faster in 03, <b>and</b> was overclocked.

Maxtor disgraces the six letters that make Matrox.
 
Yes, those numbers are 100% correct and that is why my ORB links all say I ran it at 536. As I just said, it's not a cheat...it's added performance, why complain?

Maxtor disgraces the six letters that make Matrox.
 
I follow what you're saying and agree. That means that we are not totally on the same page. LOL! I take the blame; I probably did what I am best at, and that is "making another screwed-up reply." I probably didn't get my thoughts out the way I intended.

I didn't mean to imply that there was anything wrong with the higher clock speeds, and all of that toward the end of my reply had nothing to do with the original post. DAMN these voices in my head. What I was trying to get across is that some mobos are better at adjusting their settings when the PEG LINK is set to AUTO. This = much better performance. The system's BIOS will in effect OC the card in areas that provide a performance boost.

ASUS P5WD2 Premium
Intel 3.73 EE @ 5.6Ghz
XMS2 DDR2 @ 1180Mhz

<A HREF="http://valid.x86-secret.com/records.php?PHPSESSID=792e8f49d5d9b8a4d1ad6f40ca029756" target="_new">#2 CPUZ</A>
SuperPI 25secs
 
My post wasn't really directed at you (but the end was), more at scott, Phukface and GGA...I know you know what's up with the 7800GTX--you have one and it runs pretty well!

As for the system BIOS OCing graphics cards, it's a very good option and a lot more stable than software OCing. I know DFI had to remove it as an option for space reasons (but they do include MemTest86+ 1.6 in the BIOS!), but it would be a great feature if we had it.

As for AUTO settings, I have nothing against them and use them until I find something that works better.

Maxtor disgraces the six letters that make Matrox.
 
Mine's at 496 artifact-free (and very cool-running) on the stock cooler (out of the two available coolers, I also got the shi<b></b>tty one!). I made a single run at 550MHz (first try, zero artifacts) in 3D01, but I forgot to reset my CPU's multi after RAM OCing and didn't get a good score (and forgot to look at Nature :frown: ). Anyway, I needed this PSU back for my old system and haven't run my new system since.

I'll have some more stock cooling numbers tomorrow when I get my second PSU (I only have one for two systems right now) and then some watercooling numbers (MCW-50) on Wed or Thursday (probably) after I put that together.

What's more amazing, IMO, is my 7800GTX's memory...1425MHz perfectly fine for an hour+ with the HORRIBLE stock heatspreader and thermal pad (looks like felt with mesh in the middle to me). I have BGA copper ramsinks and nanotherm epoxy for them when my MCW-50 goes on.

Maxtor disgraces the six letters that make Matrox.
 
The MCW-50... You got the 6800 series adaptor for it, right? Because without it I don't think you can mount it on your card.

Those are some nice overclocking speeds. What brand's your card?


Seeing the scalability of the 110nm 7800GTX makes me wonder what 90nm low-k could do.
 
Yeah, I only complain 'cause I don't got one, much less a PCIe mobo, heheh :evil:

In all actuality you're correct in what you're saying: if it increases performance where it needs it, then so be it; more power to ya.

"Like a scrotum, there it is in a nutshell."
<font color=red>Roll Tide!</font color=red>
<A HREF="http://www.cameronwilliamson.com" target="_new">-={Apathetic As<i></i>shole.}=-</A>
 
You got the 6800 series adaptor for it, right?
Of course :tongue:

I have a BFG and am extremely happy with it.

Seeing the scalability of the 110nm 7800GTX makes me wonder what 90nm low-k could do.
Seeing the scalability of the Prescott core makes me worried for ATi...Intel is on a <b>very</b> similar (yet <i>much</i> better) process. ATi shoulda taken some pointers from IBM/AMD's 90nm tech if they wanted the best 90nm possible.

From what the reports have said, and what I'm sticking to, the reason the R520 is massively delayed and already on revision A13 is horrible leakage. They simply can't get the number of chips they need to run at a competitive speed right now. There are many ways they can deal with it; here are the most general and effective: drop tons of clock speed, drop an octet (or are they still using quads in R520?), or keep pouring the scrilla into it until it works. I have a feeling they're just about sick of the latter (they are on A13, mind you!) and will go for dropping 8 pipes to remedy the heat problem. That will give them time to work on enabling all 32 pipes in R580. I also have a feeling there will be a lot of IPC improvements in R520, so they probably won't need a lot of clock speed to compete (which is good, since they'll likely still have heat problems with only 24 pipes).

Maxtor disgraces the six letters that make Matrox.
 
Oh yeah, another pointer on that waterblock: don't overtighten the bolt! I know somebody who killed their 6800GT because the core was crushed when he overtightened that waterblock. I admit the installation did seem kinda awkward.
 
In all actuality you're correct in what you're saying: if it increases performance where it needs it, then so be it; more power to ya.
The way you say this could be applied to almost any scenario, which would imply you'd accept the cheats nVidia did in 3DMark back in the day, or the removal of pure trilinear, mixing it with bilinear without the user knowing, as ATi did.

I'd like to know when I'm being given a supposed optimization to add to my gaming, and whether I want it included, before I play or run a benchmark. The days of honest optimization seem numbered lately. 😱

--
The <b><A HREF="http://snipurl.com/blsb" target="_new"><font color=red>THGC Photo Album</font color=red></A></b>, send in your pics, get your own webpage and view other members' sites.
 
So you're against the ALU in a Prescott running at 2x the core speed? Or are you against it only turning on in 3D? nV already has three different clock speeds and voltage settings in the BIOS (depending on 2D, 3D and performance 3D); are you against that too?? Just because it's something that nV does to get extra performance doesn't mean it's bad. It is in no way a cheat; it's a part of the core running at a different clock speed in 3D...THAT'S ALL. I'm sure you're alright with ATi's Overdrive though, right?

Besides, in due time, this will be individually clockable through Rivatuner, I'm sure (there are actually three different core speeds, all three should be individually tweakable by the end of this month).

The one thing I have a problem with is ASUS selling their card as a 470MHz card when it's not, according to nV, but they do sh<b></b>it like that all the time; it's almost expected from ASUS.

<pre><font color=white>just because it's faster than your card and does something that you're not used to doesn't mean it's bad</font color=white></pre>

Maxtor disgraces the six letters that make Matrox.
 
I don't know why nVidia would say their card is clocked at 430; just say the card runs at 470MHz, why not? It's not false advertising.

Thinking about it, I don't think it's cheating; I think it's just nVidia being retards. ASUS is smart to say the card runs at 470MHz, though I know they'll price it higher because I know they're douchebags.
 
So you're against the ALU in a Prescott running at 2x the core speed?
It's not really a cheat, though; they use 16-bit ALUs to make that happen. We're also mainly discussing optimizations and hidden features here. Intel has always disclosed the ALU trick; nothing wrong there.

As for clock speed scaling, that's not making your argument any more solid either. Clocking back is typical of most modern systems, like Cool'n'Quiet, WHICH CAN BE TURNED OFF. Clock speed scaling is also disclosed.

Basically, cheating in CPUs is nearly impossible, because you simply build them with the goal of getting the best speed output, whereas a GRAPHICS processing unit can have cheats, because we're talking visual effects. So the CPU stuff isn't pertinent to this. Plus, the only way I think a company could make a CPU cheat would be to output a different result, like a mathematical one, or use software to push the CPU to do wrong visual things (CPUs don't require any serious drivers, merely the basic ones to tell Windows about their clocking/power-management capabilities), so that it could do actual harm to scientific purposes, which I'd say is completely out of the question for Intel or AMD. Whereas in a game, they figure whatever happens behind the scenes that the user won't know of *yet* is OK.

I also didn't say nV cheated, but I don't approve of it not being disclosed that it's OC'd sometimes. I was also making a small, picky observation about PF's sentence, not necessarily relating to the 7800 specifically.

I'm sure you're alright with ATi's Overdrive though, right?
Uhh, it's an option. You can enable it. You can disable it. Wrong again; try a better argument.

just because it's faster than your card and does something that you're not used to doesn't mean it's bad
I'll chalk that up to teasing rather than a serious statement, as it doesn't have even one bit of sense in it. If you meant it, I'll gladly discredit you.

--
The <b><A HREF="http://snipurl.com/blsb" target="_new"><font color=red>THGC Photo Album</font color=red></A></b>, send in your pics, get your own webpage and view other members' sites.
 
It's not 470MHz if it's not entirely 470MHz.

--
The <b><A HREF="http://snipurl.com/blsb" target="_new"><font color=red>THGC Photo Album</font color=red></A></b>, send in your pics, get your own webpage and view other members' sites.
 
They say 430 because that's what you'll get if you reverse-calculate the fillrate. That's what most people consider the clock rate to be. Example: Prescott's ALU runs 2x as fast as what we know as the core clock...should they call Prescotts 2x as fast because a small part of the core runs that fast? Hell no.

A <b>small</b> part of the core runs at 470MHz, and only in 3D mode. I think saying it ran at 470 would put nV under A LOT more stress, trying to explain that only part of it runs at 470 and everything else runs 40MHz lower.
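For anyone wondering what "reverse-calculate the fillrate" means, here's a quick sketch. It assumes the publicly advertised 10.32 GTexel/s texture fillrate and 24 texture units (public 7800GTX specs, not numbers from this thread) and that fillrate = clock x texture units:

<pre>
# Reverse-calculating the core clock from the advertised texture fillrate.
# Assumption: one texel per texture unit per clock.
ADVERTISED_FILLRATE_MTEXELS = 10320  # 10.32 GTexels/s
TEXTURE_UNITS = 24

print(ADVERTISED_FILLRATE_MTEXELS / TEXTURE_UNITS)  # 430.0 -> 430MHz, not 470
</pre>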

Maxtor disgraces the six letters that make Matrox.
 
OK, I'm not going to get into the 'cheating' thing, because it's not a cheat; if it dynamically overclocks, that's fine (though unlike Overdrive, which finds the stable clock rates and makes them known). The thing is not to buy into the myth that these cards are getting their performance results at the lower clock rates, thus implying greater efficiency than is actually being achieved.

It's not a question of cheating it's a question of being aware of what's happening.

All the performance they can get out of it is good for the user, but when discussing the architecture/performance this is something worth taking into account.

It's like an error I found in a version of 3DMark (and reported to Lars and Futuremark) where you could under-report the clock frequencies. This would give the impression of a better-performing card at 'stock' speeds than really exists.
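A quick sketch of why that matters, with a made-up score just to show the skew:

<pre>
# Hypothetical numbers: how an under-reported clock inflates apparent
# per-clock efficiency in a 'clock for clock' comparison.
score = 7800          # made-up 3DMark score
reported_mhz = 430    # clock the card reports
actual_mhz = 470      # clock (part of) the core actually runs at in 3D

print("apparent score/MHz: %.2f" % (score / float(reported_mhz)))  # ~18.14
print("actual   score/MHz: %.2f" % (score / float(actual_mhz)))    # ~16.60, ~9% lower
</pre>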

Anywhoo, not a big concern, just interesting considering Eden's and my discussion about the efficiency of the new card.


- You need a licence to buy a gun, but they'll sell anyone a stamp <i>(or internet account)</i> ! - <A HREF="http://www.redgreen.com/" target="_new"><font color=green>RED </font color=green> <font color=red> GREEN</font color=red></A> GA to SK :evil:
 
The Performance 3D speed is 430/1200 at 1.4v, with only a small part of the core clocking up to 470...games and benchmarks use this speed.

The regular 3D is 400/1200 (not actually 100% sure about those clocks, but I think they're right) at 1.3v, without the 40MHz bump to my knowledge. Not sure what runs at this speed, tbh...it's not the throttle speed from what I've seen.

The 2D is 275/1200 at 1.2v, and is obviously for 2D and for throttling.
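Collected in one place (values as reported above; the regular-3D clocks aren't 100% certain, as noted):

<pre>
# The three 7800GTX clock/voltage profiles described in this post.
# geometry_mhz reflects the +40MHz bump where it applies.
PROFILES = {
    "performance 3D": {"core_mhz": 430, "mem_mhz": 1200, "volts": 1.4, "geometry_mhz": 470},
    "3D":             {"core_mhz": 400, "mem_mhz": 1200, "volts": 1.3, "geometry_mhz": 400},
    "2D":             {"core_mhz": 275, "mem_mhz": 1200, "volts": 1.2, "geometry_mhz": 275},
}
</pre>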

Maxtor disgraces the six letters that make Matrox.