Solution for R9 290X Bad FPS

Status
Not open for further replies.

Gunstarrhero

Commendable
Jul 31, 2016
Bad frames? Running dual cards? Getting stuttering in your frame rate at resolution? The R9 290s are designed to run at 947 MHz for one reason: threads at conduction; the rate of conduction over the throughput is exactly 947 MHz stable. That doesn't mean conduction over the convection curve won't allow a ramp over 947 MHz (alpha rho * beta squared). Maximum proficiency is gained at this rate; however, you can boost your clock to 1000 MHz safely without increasing voltage. Doing so solves the problem completely. I am doing it now on both of my cards, and I went from 78 FPS in Miscreated with stutter to 78 FPS in Miscreated smooth as silk, safely. In Rust I'm getting around 70-90 FPS depending on modified settings.

The point is, don't be afraid to keep yourself within the limits of your own knowledge. I don't think anyone here even thought of telling you to bump your card up some; it isn't to keep up. It is, however, that manufacturers UNDERCLOCK ALL OF THEIR CARDS ON THE PUBLIC MARKET. They walk home with the same type running 6 or 8x more efficiently than either you or I will see out of that card.

Keep in mind that the BIOS is also a reason for poor performance. Optimizations need to be made at the opterand level of the computing, hoc then ad hoc (hoc being both the graphics card GPU and the motherboard CPU), ad hoc being the motherboard CPU. Doing so increases data bus throughput. Opterands are compiled bits of information in whole standard computational press that maneuver data in one whole sum from place to place. They are the routine stop bit and dec-1 bit that start and stop machine- or bot-level operon case structure. It looks like binary, except it's whole-sum digit press; binary is whole-sum integer press. There is a difference in type. The fact is, doing so would alleviate our performance needs, and then the world would blow up, because we would all have received working-state hardware at its most efficient level.
 
This reads like the PC hardware version of TIME CUBE. Luckily, I'm fluent in crazyspeak.

The tl;dr version reads something like:

Do you have poor framerates or micro-stutters? R9 290s are designed to run at 947 MHz stock, because this is the most efficient power-consumption-to-output ratio. They can easily be clocked up to 1000 MHz without requiring additional voltage. This solves the problems mentioned above. OP believes that we typically don't advise overclocking GPUs (something we pretty routinely do because it's so easy nowadays). *Conspiracy rambling stating that the manufacturers have identical GPUs that are 6-8x better than anything available to consumers, because conspiracies* Sometimes the BIOS is also an issue; you can change that, you know? *Weird theory about opterands, which are like binary code but not, because reasons* If we could fix the opterands in our GPUs, the world would explode. The end.
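
For what it's worth, the one checkable claim in there - the 947 MHz to 1000 MHz bump - is easy to put numbers on. A minimal sketch (Python; the linear-scaling assumption is mine and is a best case, since games are rarely purely core-clock-bound):

```python
# Core clock bump claimed in the original post: 947 MHz stock -> 1000 MHz.
stock_mhz = 947
boosted_mhz = 1000

increase = (boosted_mhz - stock_mhz) / stock_mhz
print(f"Clock increase: {increase:.1%}")                    # ~5.6%

# Best case: a fully GPU-core-bound game that scales linearly with core clock.
fps_before = 78  # figure quoted in the original post
print(f"Best-case FPS: {fps_before * (1 + increase):.1f}")  # ~82.4 FPS
```

A 5-6% clock bump can't turn "78 FPS with stutter" into "78 FPS smooth as silk" on its own; stutter is about frame-time consistency, not average frame rate.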
 
That's not the point, dingus. I don't format, and I was merely stating why the industry fails with these cards and others like them. The point is, it's not an overclock. 1000 MHz on the R9 290 is standard operation for threaded optimization in conduction. 947 MHz is THE STANDARD RATE of conduction. Put the two together, Einstein.
Not fix, utilize. Do you know why coded optimization and algorithmic programming are failing in this life? Because people tend to stay away from the higher proficiencies of math. Want to know why games play the way they do? Because if someone uses true calculus in a game and builds a circumference, not a radius, the engine will crash. OH NO, TRUE ROUNDED EDGES. That means the world's leading programmers would have to start optimizing cards to run more efficiently at hauluzaburgd standards: rates over rations efficiency * rate over amount ^ square, divided by whole bendb = rate over cube. That means software game programmers and anyone else who writes "3D" applications have to remove themselves from GEOMETRY and basic ALGEBRA programming in algorithmic techniques. IF CARDS ARE STRUGGLING THIS MUCH ON SIMPLE BASICS, THEN WHAT'S NEXT?

Do you see what I'm saying? THE INDUSTRY IS DUMBING DOWN HARDWARE, AND SO ARE THE SOFTWARE PROGRAMMERS, MAKING HARDWARE LESS PROFICIENT FOR THE PUBLIC CONSUMER THAT COULD NOW HAVE HANDLED THE PULL, BUCK AND FELD, ALL BECAUSE THE MANUFACTURER REDUCED CLOCK RATE TO SAVE A PENNY IN OVERTURE TAXES: overhead and bottleneck.

THIS WASN'T A QUESTION; IT WAS A SOLUTION AND A STATEMENT. A FAIR STATEMENT. Here's another one for you: SCRIPTING is the new C++. Games suck because programmers have no f****** idea how to build their own types in a language and put it all together in separate pieces until they OPTIMIZE.

Overclocking is never the solution. I DO NOT CONDONE OVERCLOCKING. It's a WASTE.
Once stated, twice said. 1000 MHz on a card that supposedly comes ready for the high end is the farthest thing from proficient; it is, however, gate practice and therefore is NOT above the cycle norm of the conduction end of threaded semi-conduction operation: AMD. Intel is not a semi-conduction group member. And why buy a processor that resists in the center of the core when transition is required? AMD -> transit; Intel -> die resistance.
 
@Gunstarrhero, watch the personal insults there. You're entitled to your opinion, but try to get that across constructively.


You do not condone overclocking? Your statement in the original post, "however, you can boost your clock to 1000 MHz safely without increasing voltage. Doing so solves the problem completely", seems to contradict that.


Honestly, some aspects of your point (I think) make some sense - GPU manufacturers are not necessarily providing the absolute best speed/performance they can, but you address the reason too: cost.

The added cost to reach certain landmarks does not and will not play through to sales to consumers.
The extra R&D involved would add expense and time to a release - pushing the final cost per item higher.

AMD aren't striving for 'top of the line' cards at the moment; they're aiming for the average consumer/gamer.

As an FYI, no GPU manufacturing 'employee' is taking home a card 6-8x faster than the consumer card. That claim is just absurd.
 
No, it isn't cost. You know how much it takes to build a card? Any hardware? Pennies. That's why it's large-scale production. And "boost" was a bad choice of words, yet arguably correct: those cards are underclocked. I knew it when I opened the friggin' box. I never said touch the memory clock either; then it would be overclocking. However, due to limits and restrictions on certain partitions, 2000 MHz is the limit over the precedented 1 THz that GDDR5 is capable of.

No, and yes they are; engineers take home ready, specific cards all the time.
The point is, when you look at this the way it truly is, instead of as the naive computer guru or user/super-user, we are all being ripped off. And as for what they're going for: they are going for cost-effective hardware; however, they are staying behind Nvidia because they don't have the means necessary to include *gasp* a physics processor on board. Otherwise AMD is OVER THE TOP. Included as a whole with their Polaris or Hawaii GPU case, AMD would put themselves way over the top at a much more affordable cost to the public.
 
Ok, last post and I'm going to stop biting here.

How much does it cost to 'make' a card? As far as components go, in the manufacturing sense, not too, too much (although much higher than 'pennies'). Where the cost comes into play is research and development - those 'engineers' you appear to have a problem with - staff wages, utilities, rent, taxes & marketing... then there is ongoing support, to name a few added costs.

It's not a matter of 'a PCB costs X + Y + Z = a card'; there's much more to it.
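
As a toy illustration of that point (every figure below is a made-up placeholder; only the shape of the calculation matters):

```python
# Toy per-unit cost model: bill of materials plus amortized fixed costs.
# All numbers here are hypothetical placeholders, not real AMD figures.
bom = 150.0              # components, PCB, cooler, assembly (per card)
fixed_costs = 50e6       # R&D, validation, driver work, marketing (one-off)
expected_units = 1e6     # cards expected to ship over the product's life
support_per_unit = 5.0   # RMA handling, warranty reserve, ongoing support

unit_cost = bom + fixed_costs / expected_units + support_per_unit
print(f"Rough cost per card before margin and retail markup: ${unit_cost:.2f}")
# -> $205.00 with these placeholders; the BOM alone is nowhere near the story.
```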

As for engineers taking home cards, while I don't dispute they may take home prototypes (for the research/development aspects), they are not 'ready' for consumers, they're not stable etc - and they certainly are not 6-8x faster than the cards that make it to market.

We are all being ripped off? That's quite the stretch of a claim. Your hand was not forced to buy an R9 290X (or any other card for that matter).

You seem to believe 2 GHz+ is readily achievable, costs 'pennies' to produce, can be stable without significant R&D, and has no long-term support requirements or need to be marketed?
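
For context on the memory-clock side, here's what the numbers on an R9 290X actually work out to - a quick sketch using the card's published specs (1250 MHz GDDR5, quad-pumped, 512-bit bus):

```python
# GDDR5 on the R9 290X: 1250 MHz memory clock, 4 transfers per clock
# (quad data rate), over a 512-bit bus.
mem_clock_mhz = 1250
transfers_per_clock = 4
bus_width_bits = 512

per_pin_gtps = mem_clock_mhz * transfers_per_clock / 1000   # 5.0 GT/s
bandwidth_gb_s = per_pin_gtps * bus_width_bits / 8          # 320 GB/s
print(f"{per_pin_gtps} GT/s per pin, {bandwidth_gb_s:.0f} GB/s total")
```

A "2000 MHz" GDDR5 clock would mean 8 GT/s per pin, 60% beyond what these cards shipped with, and "1 THz GDDR5" isn't a specification that exists.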

If that is the case (ie what you say is true), I look forward to seeing the first Gunstarrhero GPU that can annihilate the competition.

Good luck.
 