9800GT and 9800GTX+ released: Should I still go for ATI 4850???



There is a huge difference between what ATI is providing and what Nvidia is providing: ATI's solution is a hardware transcoder, available only on some of their cards, that works with that one specific program. Nvidia's CUDA lets anyone write any program that uses the GPU for computing. Badaboom is just one example of such a program. You can go to the CUDA Zone and see how many have already been written.

This may not be important for everyone, but for me it's a real added benefit to get more than just gfx out of my GPU (and I haven't even mentioned PhysX again :) ).
 


By the time the heat actually affects anything, I'm sure the system will be replaced with something newer and much faster. GPUs are meant to handle high heat. If you have good airflow in the case, the heat shouldn't affect any of the other components to the degree that it degrades their life that much. My fan is on 50% and I can't even hear it. Although my card idles at 50C, I can see the molten metal slowly drip and burn holes into my computer desk... and I can't hear very well, now that I think of it.
 



I can't seem to get a straight idea of just how hot these cards get. I'm just going to be running the single (4850) card (no need for Xfire) in a normal ATX case with the normal couple of fans... am I going to have major problems with this card as is? I don't plan on any OC (or tinkering, as you say), and in fact I have the side of my case open from time to time for various reasons. Is the heat issue more of an issue for dual cards, or am I really going to have to mess with the card to keep it from joining my northbridge in starting fires left and right?
 

I'd note that GPGPU development started ages before CUDA ever came around, and that CUDA is simply nVidia's latest generation of such technology, its main addition being a layer that translates C-based code for the GPU. People have been able to use Radeon cards for GPGPU purposes since at least the X1800 series, if not earlier.

And likewise, PhysX is hardly the only physics game in town; one could just as well argue that, for actually less than the price of a GTX 260, one could get a 4870 and a Havok card, which you couldn't get from Nvidia for that price... And it's well worth noting that Havok already has a much bigger base than PhysX.


Well, the thing is, the Radeon 4850 has possibly the highest-ever TDP, at 110 watts, for a video card that has only a single-slot cooler rather than a double-slot cooler that blows air out of the case. So heat should be a serious consideration with that card. Of course, there are several methods to easily take care of this problem, the two biggest being to place case fans carefully so they exhaust the heat the card dumps into the case, or, simpler yet, to replace its cooler with an aftermarket one that blows the heat out of the case.

You don't really have to go overboard there, especially if your room is air-conditioned during the summer, but know that a case and setup that is poor at managing its own airflow and cooling will likely have stability problems with the 4850 that it wouldn't have with a cooler-running card or one with a two-slot cooler. However, if you have a case with good airflow and cooling, it shouldn't really be all that much of a problem.
 


Again, you don't really understand CUDA if you are comparing it to ATI's GPGPU efforts. It took NVIDIA 3 years to make their cards run real C code so that any app can be written to utilize the GPU. Even if ATI began implementing it today it will take them 1-2 years to change the architecture to support all the C data types on the GPU. This is exactly the reason why the 4870 can even compete with the GTX 260. If the 260 did not support CUDA it would trash the 4870 by a huge factor. The 4870 can only do gfx and was designed for it. The GTX 260 can do gfx AND computational C and that's why it runs gfx only as fast as the 4870 (because part of the transistors and arch. was made to support something other than gfx).

Regarding physics:
As I said before, PhysX is GPU-accelerated unlike Havok which is CPU-based. Therefore, buying an Nvidia card will enable you to play BOTH Havok and PhysX games! (since Havok runs on Intel CPUs). Buying ATI will only let you play Havok games and not any of the PhysX games.
Not to mention that PhysX is way superior to Havok since it runs on the GPU.
 
I missed a lot of this thread, but it sounds like Beuwolf has bought into the nV PR machine, as if CUDA is something new and GPGPU is restricted to nV only. [:thegreatgrapeape:5]



Exactly, this stuff has been going on since the BrookGPU work. The R300 was the first that could do it, although with 24-bit precision while the FX introduced FP32, and ATi's CTM (Close To Metal) has been around longer than CUDA. Now AMD primarily uses CAL, which is under their Stream SDK umbrella. Both are simply a front end for turning the shaders/ALUs into processors for other calculations, and there are a TON of applications doing so. I'm working on something right now for work, and I'm finding both ATi and nV to be of little help making their hardware work with 3rd-party apps (they both want us to develop apps for them). Microsoft even has its own work with a C# back-propagation library for GPGPU (similar to AMD's ACML-GPU lib), and I actually prefer someone else making this work, because the proprietary stuff is a pain and they're more worried about people using it on hardware that isn't theirs.

There is little that can be done on CUDA or CTM/CAL that can't be done on another platform; it's just tougher, especially on the older hardware where you have to manually program things for the ALUs. However, both ATi and nVidia have built the new GPUs to be general-purpose stream processors, so it's much easier than it was in the GF7 and X1K era.


And while I haven't kept up with their work on the PhysX cross-over since the early days, if NGOHQ has PhysX working on the HD4K series seamlessly, then an HD4850 would outperform a GTX280, and way surpass a GTX260, based on raw compute power alone. So anyone touting CUDA as a benefit doesn't understand the underlying implication of bringing GPGPU ops/apps into the ring when the HD4K is a monster in that area. Look at a more practical gamer application on a non-proprietary platform, like ray-casting rendering, and see the big (sometimes 2-4X) difference in performance going from the GTX280 to the HD4K. GPGPU is definitely not an area that GTX supporters want to discuss. The 4GB Tesla card may be an exception for truly memory-limited apps, but other than that I wouldn't mention compute options when comparing the two. Even RapidHD has said they would not limit their application to nV only.
 


Actually that describes all your statements so far, especially your statement regarding ATi being limited to transcoding only (that was their AVIVO effort, not their GPGPU effort, which has a far longer and far wider history than CUDA, which you seem to think is the only thing in the space and the only thing that matters to gamers at all). That PhysX has been ported to run on ATi hardware shows that your point is moot, and you're simply blowing BS smoke everywhere.


if you are comparing it to ATI's GPGPU efforts. It took NVIDIA 3 years to make their cards run real C code so that any app can be written to utilize the GPU. Even if ATI began implementing it today it will take them 1-2 years to change the architecture to support all the C data types on the GPU.

Actually it's not that simple; much C code must be adapted to run in the limited C subset that works with CUDA. There's a lot of basic C++ code you can use that will crash simply because of DLL issues or because of length. Even very basic code still needs to be converted to run on the GPU. So it doesn't just run out of the box; like everything else, it must be adapted to run in CUDA's stream environment. It's like using Cg: it has a lot of interoperability with C as well, but it still needs to be written with Cg in mind, just as code needs to be adapted for CUDA. All CUDA does is let you use C as the interface for running your computations on the GPU. It's still an interface more than something tied to the core.
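To make the "adapted" part concrete, here's a rough vector-add sketch of my own (a generic illustration, not something from this thread or the SDK samples): the plain-C loop has to become a __global__ kernel, and the host side has to manage device memory and use CUDA's added <<<blocks, threads>>> launch syntax.

// Plain C would just be: for (int i = 0; i < n; i++) out[i] = a[i] + b[i];
// In CUDA that loop becomes a kernel, plus explicit device-memory plumbing.
#include <cuda_runtime.h>
#include <stdio.h>
#include <stdlib.h>

__global__ void vecAdd(const float *a, const float *b, float *out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;   // one element per thread
    if (i < n)
        out[i] = a[i] + b[i];
}

int main(void)
{
    const int n = 1024;
    const size_t bytes = n * sizeof(float);

    float *h_a = (float *)malloc(bytes), *h_b = (float *)malloc(bytes), *h_out = (float *)malloc(bytes);
    for (int i = 0; i < n; i++) { h_a[i] = (float)i; h_b[i] = 2.0f * i; }

    // None of this exists in plain C: device allocations and host<->device copies.
    float *d_a, *d_b, *d_out;
    cudaMalloc((void **)&d_a, bytes);
    cudaMalloc((void **)&d_b, bytes);
    cudaMalloc((void **)&d_out, bytes);
    cudaMemcpy(d_a, h_a, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(d_b, h_b, bytes, cudaMemcpyHostToDevice);

    // CUDA's extended function-call syntax: grid and block sizes go in <<< >>>.
    const int threads = 256;
    const int blocks = (n + threads - 1) / threads;
    vecAdd<<<blocks, threads>>>(d_a, d_b, d_out, n);

    cudaMemcpy(h_out, d_out, bytes, cudaMemcpyDeviceToHost);
    printf("out[10] = %f\n", h_out[10]);

    cudaFree(d_a); cudaFree(d_b); cudaFree(d_out);
    free(h_a); free(h_b); free(h_out);
    return 0;
}

That's the kind of restructuring you have to do around what was a three-line loop in plain C, which is the whole point: CUDA gives you a C front end for the GPU, not a drop-in compiler for existing programs.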
As for the time it takes, you're including ALL of nV's development time in your figure; ATi did a lot of their groundwork in other areas. And BTW, you can thank a Calgary company, Acceleware, for a lot of nV's early work with C on the GPU; that IP is a major reason nV bought them. I wouldn't say it would require the exact same amount of time (shorter or longer) for any other company to follow that path should they so choose. I doubt it would take much effort to make a C-centric compiler interface for the HD4K series; it seems more of a software limitation than a hardware one.

This is exactly the reason why the 4870 can even compete with the GTX 260. If the 260 did not support CUDA it would trash the 4870 by a huge factor. The 4870 can only do gfx and was designed for it.

What a load of BS; the X1K-HD4K can do a lot more than just graphics, and there are a ton of apps that use the GPGPU power to do raw math that is not graphics-specific.
We use it for something graphics-related with mapping software, but it's not limited to vector calculations alone.

The GTX 260 can do gfx AND computational C and that's why it runs gfx only as fast as the 4870 (because part of the transistors and arch. was made to support something other than gfx).

Who cares? If it added transistors to be a cookbook organizer I don't care if it's not helping me with games or applications I use. The OP isn't asking for it to be a swiss army knife.

Regarding physics:
As I said before, PhysX is GPU-accelerated unlike Havok which is CPU-based.

Havok does both CPU and GPU.
Just like PhysX, it does both CPU and GPU physics. The difference is that PhysX has tiny demo levels in some games and in 3DMark, whereas HavokFX is still just tech demos.

Therefore, buying an Nvidia card will enable you to play BOTH Havok and PhysX games! (since Havok runs on Intel CPUs).

Havok runs on any x86/x64 CPU. It's not limited to Intel; AMD and VIA can both use Havok's CPU physics.

Buying ATI will only let you play Havok games and not any of the PhysX games.

Actually, nVidia themselves said PhysX could easily be done on ATi cards; they wanted to try to get AMD to use CUDA for that, but the boys at NGOHQ showed you can do it without either CUDA or an nV card.

Not to mention that PhysX is way superior to Havok since it runs on the GPU.

There are limited implementations of PhysX on the GPU, and in an already limited physics API at that. PhysX is second banana to Havok's much larger game-title base.

Running small add-on levels for GRAW and UT3, and not physics throughout the game, doesn't make for a compelling argument for PhysX GPU acceleration. And the fact that Epic chose to use their OWN physics implementation in UT3 and only use PhysX for the demo levels, and that GRAW uses Havok physics at the core and PhysX just for the demo island, doesn't do much to say that PhysX is all that respected even by the developers that decided to give it a test drive.

 


What are you talking about? CUDA is just an open-source C compiler with very few modifications. Any C program will compile with ZERO work. The only difference between regular C and CUDA is that they added a few types and changed the syntax for functions so you can decide how to run them in parallel (which is all optional). Do yourself a favor and d/l the SDK.

Who cares? If it added transistors to be a cookbook organizer I don't care if it's not helping me with games or applications I use. The OP isn't asking for it to be a swiss army knife.

But a lot of people like me do care. The GPU is not only for gfx; it can help with a ton of other things.

Havok does both CPU and GPU.

No it does not. Havok FX has been talked about for as long as I can remember, but it has not moved forward very much. I bet you won't see it coming out in the next year or so. PhysX is the only option for GPU-accelerated games NOW.


Havok runs on any x86/x64 CPU. It's not limited to Intel; AMD and VIA can both use Havok's CPU physics.

Even better. That just made my point for buying Nvidia cards even stronger (maybe with an AMD CPU :) ).

Actually, nVidia themselves said PhysX could easily be done on ATi cards; they wanted to try to get AMD to use CUDA for that, but the boys at NGOHQ showed you can do it without either CUDA or an nV card.

Coulda, woulda, shoulda :) - right now and for the foreseeable future it does not.

Anyway, I admit to being an Nvidia fanboy, so this conversation will get us nowhere. Just wanted to throw in my 2 cents.
 


Actually, I have used the SDK as well as RapidMind's, and that's why I know it still has a lot of troubles (for me it was with some DLLs), although it seems less of an issue with Linux. It's more robust than RM, but RM is a little more flexible and can be used on both CPUs and GPUs. You cannot simply drop arbitrary code into CUDA, though; you still need to re-learn things for all the special cases and exceptions, like length, and it's still a C front end for access to the core. CUDA 2.0 improved a lot from the early days when it was problematic as heck, but your statements, like the other ones you make, oversimplify it and pretend that nV is the only one that could do it, as if it were a hardware limit and not a software interface limitation of the compiler. And so far CAL looks to be even more powerful overall, but the learning curve is ridiculously daunting.

But a lot of people like me do care. The GPU is not only for gfx; it can help with a ton of other things.

But it's irrelevant to the thread, or even to your initial use for it (transcoding), because others can do the same without CUDA, so who cares if nV has dedicated transistors? If in the end the task is completed as efficiently or more so, it doesn't matter if it's because of dedicated hardware or raw power. I don't care if it's doing AA in the shaders or in dedicated hardware as long as the result is the same and one is faster than the other. That's why it's of note, but it's no more important than telling the OP he should buy the HD4K because it does ray-casting faster. Unless that's specifically what he's asking about, it's even less important than tessellation, let alone DX10.1.

No it does not. Havok FX has been talked about for as long as I can remember, but it has not moved forward very much. I bet you won't see it coming out in the next year or so. PhysX is the only option for GPU-accelerated games NOW.

Which is still barely more than a tech demo itself, not the underlying physics of either of its big-title games, and where it is the underlying physics it's CPU-only, just like Havok.

Even better. That just made my point for buying Nvidia cards even stronger (maybe with an AMD CPU :) ).

Once again you're skipping over the facts to colour your comments. You said it's Intel CPUs, whereas, just like PhysX, it's not limited to a specific architecture; as noted before, nV has offered both CUDA and PhysX to AMD for their ATi hardware. And remember the PPU: essentially nV is just doing a wrapper treatment to make it work on the GPU. So it doesn't support your point; it actually shows your myopic view of the reality that the things you are focusing on are software limits that can change if AMD so wishes (nV is offering; AMD is not taking). My point is very contrary to yours.

Coulda, woulda, shoulda :) - right now and for the foreseeable future it does not.

What, like PhysX was GTX-only, then GF9-only? Oops, forgot to mention it's doable on the GF8600 and the G92-based GF8800 too; now what?
Artificial limits are not the same as hardware limits, and you're confusing the two.

Anyway, I admit to being an Nvidia fanboy, so this conversation will get us nowhere. Just wanted to throw in my 2 cents.

Admitting you're a fanboi is the first step, the next is to let go of that, and the final one is to be IHV- and app-agnostic.

The biggest thing is that if you were even a competent nV fanboi you would've discussed the benefits of the GTX 260 from the start, not just the GF9800GTX+, because that's the only one that has value for the uses you later illustrate. The 9800GTX is pointless from that perspective; it's even more limited, especially with its lack of DP support, which, for what you say you're focused on, would matter more than the limited PhysX support.
 
