Radeon vs GeForce



Personally I think this game is awesome. The gameplay and the graphics are fresh, the controls are great, and the storyline is decent. It's not very demanding either, considering I get 30-60 fps on high graphics, highest textures, and 4x AA at 1280x1024 with my Athlon X2 4200+ and HD 3850 512MB.

The game has a few quirks here and there, and it can be really frustrating at times if you get stuck somewhere and keep dying. They probably should have added some more save points so you don't have to redo a whole bunch of stuff just to get back to where you were, only to die again while trying to figure out what to do or where to go.

Still, overall I think it's a lot of fun.


 
Now all I need is a good mobo that does CrossFire. Ack, I hate this upgrading stuff.
Yeah, I was going to wait on upgrading, and then I decided: why wait?
I'm looking at a Gigabyte EP45T-DS3R
Asus P5Q3 Motherboard
Asus P5Q-E Green Motherboard
Biostar TP45 HP Motherboard
 



What 3DMark06 score do you get?
 


Just ran it and got 7730

Doesn't really matter though, it still plays new games just fine at my resolution (1280x1024).
 
3DMark06 is obsolete, IMO.



Haven't tried a Gigabyte board yet, but my two previous ASUS boards are doing pretty well.
 
wh3resmtcar, how do you like the Asus P5Q-E Green board? Is it a good overclocker?
I don't know; I'm debating between Asus and Gigabyte. I hear they're both good manufacturers. I may go with the Asus P5Q Pro: it's cheap and does just about everything I want, CrossFire and all.
 
3DMark scores are really only comparable within the same setup/owner. If you've dropped in a new CPU or GPU, you can get some sense of the performance you've gained; even that doesn't scale perfectly, but it at least gives you an idea.
I've basically avoided this thread, since if I say I may prefer ATI over nVidia, I'd be labeled a fanboi. And I do prefer ATI over nVidia, but I've owned both and used both, depending on releases and my wallet.

Looking outside the box includes all things, whether it's a GPU-oriented function or a CPU one; the end result, however you get there, is what counts, and it shouldn't be tied to one company or one piece of HW as to how it'll get done. I realize things like this (PhysX) and GPGPU functions are of great importance to nVidia, but the problem remains that it's all uphill for them, and it may not matter even if their solution is better; ATI's solution for the original DX10 was better, and it wasn't what we got. M$ will be the player here, and will make the major contribution to the future path of physics, along with OpenGL. Anyone who doesn't like that, or can't see it, well, it's only a matter of time.
 
I think the whole physics debate is BS.

Havok is SOFTWARE physics acceleration. Yes, it uses the CPU, just like every other piece of software/application :lol: but that doesn't mean you can call it hardware accelerated.

nVidia got PhysX technology by taking over Ageia, which made the first working PPU. Now PhysX acceleration is many times more powerful than Havok and other software physics engines, since it's done on the GPU. The GPU has many more processing cores, and physics is generally a fairly parallel/multithreaded type of calculation; hence its relative slowness on a basically sequential CPU (OK, 2 or 4 threads now, 8 if you count Core i7).
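
Just to illustrate the idea, here's a rough toy sketch (made-up code, nothing to do with the actual PhysX or Havok internals): each particle or bit of debris can be updated independently of every other one, which is exactly the kind of work a GPU with hundreds of stream processors chews through, while a CPU has to walk the same list with only a handful of threads.

```cuda
// Toy sketch only -- a made-up particle update, not real engine code.
#include <cstdio>
#include <vector>
#include <cuda_runtime.h>

struct Particle { float x, y, z, vx, vy, vz; };

// One GPU thread per particle: each update touches only its own particle,
// so nothing has to wait on anything else.
__global__ void step(Particle* p, int n, float dt)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    p[i].vy -= 9.81f * dt;     // gravity
    p[i].x  += p[i].vx * dt;
    p[i].y  += p[i].vy * dt;
    p[i].z  += p[i].vz * dt;
}

int main()
{
    const int n = 1 << 20;     // ~1M bits of debris / cloth points
    std::vector<Particle> host(n, Particle{0.0f, 100.0f, 0.0f, 1.0f, 0.0f, 0.0f});

    Particle* dev = nullptr;
    cudaMalloc(&dev, n * sizeof(Particle));
    cudaMemcpy(dev, host.data(), n * sizeof(Particle), cudaMemcpyHostToDevice);

    // 4096 blocks of 256 threads: the GPU spreads these across all of its
    // stream processors, while a CPU would grind through the same loop
    // with only 2-8 threads.
    step<<<(n + 255) / 256, 256>>>(dev, n, 1.0f / 60.0f);
    cudaMemcpy(host.data(), dev, n * sizeof(Particle), cudaMemcpyDeviceToHost);

    printf("particle 0 after one step: y = %f\n", host[0].y);
    cudaFree(dev);
    return 0;
}
```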

There may be a few games supporting PhysX, but a developer would be out of their mind to develop a game that ONLY runs on nVidia cards. The stupidest business decision is to deliberately cut away half your customers, just like game devs won't make DX10.1 games because nVidia doesn't support it!

EVERYTHING must be standardised. Imagine if every motherboard manufacturer had its own mounting system/placement of holes to sit inside the case? Mayhem! The technology industry is CENTRED on standardisation. Without it, there would be chaos, confusion, and a slowing of the evolution of tech.

OpenCL will do this for GPU-based physics/GPGPU computing. All sides have agreed to abide by the standards set by the... standard. nVidia PhysX and ATI Stream/AMD Brook+ will fade into the pages of history; obsolete. Both sides must agree on a standard for it to be worthwhile to programmers/devs.

The future is nearing. Soon we will have the start of a technical revolution, and an end to stupid, misinformed fanboi comments.

End of debate.
 
Exactly. It may show promise in the form it's in now, but once the standard is set, all HW will have to comply, which will set nVidia back, as it won't be done exclusively for their HW. So it's still a long way off.
 


I won't call that tiny 😀. The only drawback I have on my 4800 with DMC4 is that it can't keep a constant 60 fps with the "60fps cap" on.





I read somewhere about the HavokFX initiative, which supposedly died when Intel bought Havok.

The thing about Havok, even though it's CPU limited, is its flexibility. But one must be blind not to spot how amazing the PhysX effects are. Just compare the physics in Fallout 3 and Mirror's Edge. I've seen limbs falling before, but the cloth tearing/effects in Mirror's Edge, even though I haven't experienced them first hand, are interesting.

And I agree with you on standardization. Either Havok and PhysX die together, or one of them gets busted. Another Glide scenario is in the works.



 



I can't give you very specific info about that because I don't own that board. But the thing is, when I bought my Asus P5K-SE (chipset predecessor of those P5Qs), I just looked at the features and the add-ons that I needed. Most of the more expensive boards ship with 2 LAN ports, a bazillion SATA ports, 2 dozen USB ports, and some 12-phase energy-saving modes (I had no idea what that was about at the time). But basically I was just after the chipset, the Intel P35. In the end it worked for me.

I saw a picture of that P5Q-E and noticed that it uses a heatpipe to cool the northbridge, along with CrossFire capability; it's a good buy IMO, considering my board's NB gets so hot. And having CrossFire support in there gives your rig a lot of good potential for your next upgrade. I was under the impression that the next gen from ATI and nVidia would start from $300 and up; nobody whispered in my ear that a 4850 was in the works. If they had, I would've strictly chosen a CrossFire-enabled mobo.

I can only give you what I think about the ASUS brand: it's good so far, and I wouldn't mind choosing another one for my next build.

Let's wait and see what the Gigabyte owners have to say about their boards. Or you can Google the reviews, too.


BTW, have you decided already? I thought you were leaning toward buying a 4870?
 




LMFAO, the PS3 uses an nVidia 7800 GTX... That alone means it won't have ANY chance of running stereo 3D. Hell, they delayed Grand Theft Auto because of the PS3's inability to run the game at a decent FPS.

http://www.bruceongames.com/2008/06/10/is-the-gpu-holding-the-ps3-back/

And the nVidia vs ATI argument about "safe" overclocking is BS. Both sides are overclock-at-your-own-risk: blow your card and enjoy buying a new one.


 
It's the weekend, so I'm not going to waste time with this; you may have nothing better to do, though.
So, just a brief comment:



The game could already be built on Novodex's engine as the core physics, not just the shiny physics, and then it could equally be set, like in other games, to increase/decrease its role, just like adjusting it in Crysis. It doesn't require keeping people out, and considering that it's a unified platform that nVidia owns, are you going to tell me it's easier to use two physics engines for a game, one for game physics and then a second one for debris, instead of adding support for the PhysX process into the unified engine?

There's a difference between running PhysX on a PhysX-enabled GPU versus on non-PhysX-enabled hardware, is there not? Is there? That's the point I'm making, which failed to get through to you.

No, there's not; that's the point you fail to get. It's the software support that's different, not the GPU or even the CPU running it: make the software support it and it's the same thing. There's no PPU built into the nV GPU; it's using the stream processors to emulate a PPU in the same way a CPU would emulate it. It just does it faster than a CPU with fewer cores, but the efficiency is lower.

Who said it isn't? (re: Havok GPU)

That would be you.
Scroll up: you single out only the one current implementation for Havok, but only want to include the future maturity of PhysX. That's pretty selective on your part.

If ATI had its name on it, you might've jacked off at the mere sight of it 😀. And besides, you can opt not to buy a GeForce anyway; nobody's taking that freedom away from you.

Until you make it part of the game, it's just a tack-on, as phoney as simulated surround sound instead of true multi-channel audio, and I won't cheerlead it now any more than I did before, regardless of who tries to implement it. I know you have a weak argument when you have to try and make it a fanboi debate like you typically do.

You want to turn it into a fanboi debate, and that's pretty lame. You might have brand loyalties that blind you; my position is the same as when it was first discussed as the potential of BrookGPU, and the same as when it was Ageia, and it hasn't changed, whereas most of you PhysX fanbois are really just nV fanbois who have a new product to cheer.
 


And PhysX uses software to emulate a PPU on a GPU, driven by the CPU.
They both use host resources; PhysX just sends that workload to the 'cores' on the GPU instead of the cores on the CPU. And it robs performance from the GPU unless you add another GPU, as if you're adding another processor. Blurring the lines of what constitutes a processor and what constitutes software/hardware is exactly what nV wants everywhere else, where they want people to think of them the same as CPUs because they're competing for that same role; but when they're competing against that role, suddenly they're not the same anymore, because that position better suits their argument.
How is that different from running the same calls through a CPU, where more cores or a second CPU means more performance for the game that would otherwise go unused in a standard game supporting 2-4 cores/threads? Same effect, as borne out by the results in GRAW (see the sketch below).

As we move forward that divide will blur even more with the move toward Fusion, Havendale and Larrabee.
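
To make the point concrete, here's a rough toy sketch (again made-up code, not the actual PhysX or Havok implementation): the exact same update routine can be handed to CPU worker threads or to a GPU kernel. The math doesn't change, only how many cores it gets spread across, which is the same kind of scaling GRAW showed from extra CPU cores.

```cuda
// Toy sketch only: the same routine run on host threads or on the GPU.
#include <algorithm>
#include <cstdio>
#include <thread>
#include <vector>
#include <cuda_runtime.h>

struct Particle { float y, vy; };

// Identical math whether a CPU thread or a GPU thread runs it.
__host__ __device__ void integrate(Particle& p, float dt)
{
    p.vy -= 9.81f * dt;   // gravity
    p.y  += p.vy * dt;
}

// GPU path: one lightweight thread per particle, thousands in flight at once.
__global__ void gpu_step(Particle* p, int n, float dt)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) integrate(p[i], dt);
}

// CPU path: split the same list across however many cores the host has.
// Two cores do half each, four do a quarter each -- throughput scales with
// cores, which is all the GPU path does, just on a much wider scale.
void cpu_step(std::vector<Particle>& p, float dt)
{
    unsigned workers = std::thread::hardware_concurrency();
    if (workers == 0) workers = 2;
    std::vector<std::thread> pool;
    const size_t chunk = (p.size() + workers - 1) / workers;
    for (unsigned w = 0; w < workers; ++w) {
        const size_t lo = w * chunk;
        const size_t hi = std::min(p.size(), lo + chunk);
        pool.emplace_back([&p, lo, hi, dt] {
            for (size_t i = lo; i < hi; ++i) integrate(p[i], dt);
        });
    }
    for (auto& t : pool) t.join();
}

int main()
{
    std::vector<Particle> p(1 << 20, Particle{100.0f, 0.0f});
    const float dt = 1.0f / 60.0f;

    cpu_step(p, dt);                       // same work, 2-8 host threads

    Particle* dev = nullptr;               // same work, handed to the GPU
    const size_t bytes = p.size() * sizeof(Particle);
    cudaMalloc(&dev, bytes);
    cudaMemcpy(dev, p.data(), bytes, cudaMemcpyHostToDevice);
    gpu_step<<<int((p.size() + 255) / 256), 256>>>(dev, int(p.size()), dt);
    cudaMemcpy(p.data(), dev, bytes, cudaMemcpyDeviceToHost);
    cudaFree(dev);

    printf("particle 0 after two steps: y = %f\n", p[0].y);
    return 0;
}
```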
 
Man, I cannot believe how much nVidia dropped their prices. The 9800 GT, GTX, and GTX+ all dropped down to the ATI 4850 price line. Better late than never, I guess.
 

Now they're at the price they should have been at from the beginning.
 


You mean this?



Singled out what? Didn't you read my quote? LOL, you keep failing and failing to understand, don't you? Like I've said, I mentioned Physics maturing. I don't know what part of that you keep failing to understand, in which case I'm officially concluding you're a bonehead.

PhysX != Physics (in my posts) <<< now if you still don't get this...


There is, which you keep failing to get, because you're a fanboy 😀 ... You can't recreate those PhysX effects on the CPU alone, can you? Or maybe you can, but you won't be playing that game.

To which most ATI fanboys spat, "PhysX causes performance loss." Of course it does; you're adding eye candy, aren't you?



Yeah, but running it on the CPU will result in a crawl.



So faster = less efficient now.

So offloading physics from the CPU is a bad thing even if it results in better physics effects in a real-time scenario (not just a demo). Interesting.


LOL, I'm not making any fanboi comments here (you just always fail to read and understand and comprehend and absorb what I'm posting; feel free to quote one of my fanboy posts).

You can't admit that PhysX is a step forward, and you, being a DIE HARD ATI FANBOY, will never commend a tech from NVIDIA. Right, MR. MOD? You sound as if game physics is already at its final stage, or like DICE took a step backward by implementing PhysX in ME.



And if hardware PhysX (meaning you have a GeForce card) does make your game pretty unplayable, you can always turn it off, the same way you'd lower the detail settings in a game's graphics options. That's pretty simple to do; even a grade schooler can do that (my niece can). E.g. GRAW2 Physics = low, or turn it off from the drivers.

Like I've said, fanboys will never think outside of the box. The least they can do is create a fictional commentary, a faux compare-and-contrast, find flaws, and praise everything from their camp.

 
So my final statements will be:

Havok doesn't run on the GPU yet, but if you have a GeForce on any good Intel/AMD platform you can play a Havok game, no sweat. (You found this very stressful, why?)

PhysX, on the other hand, will let you play on any good AMD/Intel platform with an ATI card, sans the hardware acceleration (and no, PhysX on the CPU isn't hardware acceleration, even if you made a pointless explanation for that), in which case you'll be required to buy a PPU/GeForce 8 or higher to FULLY UNLOCK ALL THE PHYSX effects in that game. (This one stressed you out too.)

Note: ATI fanboys will only diss PhysX; they won't admit it's actually good (at least in MEdge).

And yes, the PhysX (hardware accel) effects in ME are a good step forward.

And yes, Physics (gimme that singled-out/ignoring counter again, half-wit 😀) as a whole will continue to improve. There's hope of standardization with DX11 and the OpenCL initiative.

-end
 
The problem with your summation is that even if PhysX gets really decent, it's a dead end. M$ won't allow proprietary usage of something like this, and that's the bottom line. Just like baddabingbaddaboom, nVidia wants to charge for it, while ATI's solution is free. As is Intel's. As is M$'s.

Remember the problems everyone had with Intel trying to get a proprietary edge on USB? And again with USB3? It didn't fly, because everyone else said no. When the standard is created, it may well use nVidia solutions/IP, but only if it's free.

The devs and game makers won't likely all go with this solution either, because it can't address the whole market. As TGGA said, it's not up to nVidia on these matters, but up to the devs, and profits, etc. So, in essence, Ape's right. Even if the promise is high, and PhysX meets a lot of the expectations, those expectations won't amount to a pile o' beans if it's not in the standards, which are yet to be set. And once they're set, they won't necessarily favor ATI or nVidia, so all those improvements will fall by the wayside. I commend nVidia for doing what they're doing, and though I haven't tried it yet, I'm certain it's better than anything I've had before. But being the way things are, and how things go, nVidia's current solution at most is just paving a way that'll have to be shared down the road with both ATI and Intel, and at worst they'll have to start from scratch all over again once the standards come out.
 
You can make st00pid comments all you want about everyone else being a fanboi, but it doesn't change your original statement, which concerned the current implementations, not the future of PhysX vs the past of Havok, despite your desire to paint it as such.



Right, I forgot, PhysX is an audio engine, right? Or was it a lighting API? I'm so boneheaded I forget. :pt1cable:




Of course it can be done on a CPU; Mirror's Edge reviews even mention that the effects can be done either on the GPU or the CPU (did you miss that?), and for games like GRAW there are ways to drive it to the CPU, the difference being performance. But it's not like the PPU or GPU are using a method unavailable to a CPU, the way x86 is unavailable to them. Heck, Pershing originally mentioned the PS3, and it doesn't use nVidia hardware at all for PhysX; those calculations are done by the Cell processor.



How many SPU cores does it take to achieve the same thing?



That whole statement is fanboi crap, and you're once again hiding behind the 'MrMod' comment as if it gives you any more footing for your BS.
So far you're the one who seems compelled to post out-of-date information (like your GTX+ vs HD4850 information [catch any of the recent GTX 285 reviews that include the older cards? Or still posting summer reviews like I pointed out earlier?]; yeah, you tried to pawn it off on the [+] diff), and you're the one trying to say that PhysX is alone in doing certain things, when it's not, and wanting to talk about its future while ignoring other solutions. I've mentioned AMD, Intel, and S3 in this thread, and have mentioned them along with nVidia in previous PhysX threads. You're the one with blind allegiance to team green to the point of ignoring the facts.

BTW, I'm not the only one not impressed with PhysX. Even the guys at The Tech Report, while liking the addition of some extra features to Mirror's Edge, admit it's still just shiny physics: "I can see the developers kinda going, 'she's running along this platform here, can we put some paper here?' Why?" -Jordan; "What they did was they had like more particle effects and bigger explosions... it wasn't compelling, you could probably add more debris and smoke without hardware acceleration, but it kinda looked like a bad game design choice because it looked pretty good the default way, and looked kinda excessive when they added stuff to it." -Cyril

It's not like I'm the only one who's said this, and unlike you, I didn't change my stance because the name on the box changed from one company to another.



You call me an ATi fanboi as if being against the shiny physics of PhysX means you're on AMD's side of the coin; you do realize that Intel owns Havok, right?
And I guess you also missed the part where most of us here criticized and were cautious about Havok both when it was on its own and now that it's under Intel. Having it under Chipzilla doesn't make it better or worse, only less likely to go away anytime soon after another hardware company bought their competition.



You do realize that that would pretty much mean the end of any closed app that only supports a fraction of the market and not the whole market? DX11 & OpenCL are in opposition to how PhysX is currently implemented, not a complement to it. CUDA, like Brook+ and CAL, will be separate from DX11 & OpenCL, making the current implementation less compelling as a long-term consideration. So that last statement doesn't support your argument; it actually goes against it.

You think it's God's gift to gaming; I think it's interesting, but it falls far short of the promises & vision both Ageia and nVidia have tried to sell to people.
 


And that's just it. The same can be said of things like DX10.1 and tessellation: they're interesting and they hold promise; however, with the way DX11 will be implemented, the benefits there far outweigh the short-term benefit of DX10.1 and negate ATi's implementation of tessellation as everyone moves toward the universal method in DX11. They may be beneficial in a limited number of games, but they're still far from compelling to any major degree.

Nice to have, but not crucial, and nowhere near what was promised and hoped for.
 
It's like talking to a brick wall; just ignore him, Ape. I would have lost it by now and resorted to name calling. I know it doesn't win arguments, but it does make you feel good.
 
