GF100 (Fermi) previews and discussion



Actually, I would love a 3-slot 5970. It would run cooler, and maybe it wouldn't be so damn long. A 3-slot 5970 is still smaller than two 2-slot 5870s, you know 😛.
 


:non: I'm willing to bet they were comparing HAWX at DX10 for the GTX 285 and DX10.1 for Fermi. DX10.1 is all about improving AA performance, and IIRC DX11 is backwards compatible with DX10.1. I can think of no other reason they'd use HAWX; why not Crysis or some other Nvidia-branded game?
 

I'd wait a year and see what the 6000-series ATI cards do. Nvidia is going to price this stuff to hell; may as well wait for price drops.
 
Well, I myself only buy Nvidia (I have my "reasons"), but that doesn't prevent me from pushing ATI cards on others.

I'd rather just wait for the GF100 refresh. If I'm lucky, Nvidia can pull another miracle like the G80 (8800 GTS) -> G92 (8800 GT) and I can get a 1-slot powerhouse of a card on the cheap.
 

Whilst not outside the realms of possibility, I fear we are going to have a bit of a wait for that. But as there are no DX11-only titles and my 88s still run everything @ 19 x 10, there's no need to rush into any rash purchases as far as I'm concerned.
 
I understand where you're coming from, but for me, I picked up 3 x 24-inch monitors with a 5870 for Eyefinity and never looked back. Sure, I can't max out Crysis, but to me Medium @ 5760 x 1200 is better than Very High at 1920 x 1200.

I really have to agree with HardOCP's love of Eyefinity; you just can't describe it.
 
Eh, Eyefinity and whatever nVidia's tri-monitor gimmick is called aren't good enough yet, IMO.

The fishbowl visual effect in so many games is brutal to me, as is the bezel in between the screens 😛

So they need to start making bezelless screens and fix the fishbowl shiat, then I need to wait for prices to go down on monitors, and THEN maybe I'll invest in a multi-monitor setup. Until that point, one large monitor serves me just fine 😛
 

You should be in great shape. As we move to 28nm and HKMG, Fermi will do well in that node shrink.
 
But what about Assassin's Creed II, sabot? 😀

Actually, that will probably work fine too. I got a 5850 just because I wanted to play Crysis maxed (and my 4850 certainly wasn't going to cut it, even at my current low resolution).
 
Here's another tidbit for all those hoping for double precision (DP) on Fermi, unless you want to spend the big bucks:
By all rights, in this architecture, double-precision math should happen at half the speed of single-precision, clean and simple. However, Nvidia has made the decision to limit DP performance in the GeForce versions of the GF100 to 64 FMA ops per clock—one fourth of what the chip can do. This is presumably a product positioning decision intended to encourage serious compute customers to purchase a Tesla version of the GPU instead. Double-precision support doesn't appear to be of any use for real-time graphics, and I doubt many serious GPU-computing customers will want the peak DP rates without the ECC memory that the Tesla cards will provide. But a few poor hackers in Eastern Europe are going to be seriously bummed, and this does mean the Radeon HD 5870 will be substantially faster than any GeForce card at double-precision math, at least in terms of peak rates.
http://www.techreport.com/articles.x/18332/5

So, they've neutered the desktop version for DP. Not that this has a huge impact for most, but in case someone had plans, best to buy an ATI 5-series I guess.
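To put rough numbers on that TechReport quote, here's a back-of-the-envelope sketch. The FMA-per-clock figures (64 for the GeForce version, 256 for the unrestricted chip) come straight from the article; the 1400 MHz GF100 shader clock is purely an assumed placeholder since final clocks aren't public, and the HD 5870 line uses its published 850 MHz / 1600-SP spec with DP at 1/5 of SP rate.

```c
/* Rough peak double-precision throughput sketch.
 * FMA counts per clock are taken from the TechReport article quoted above;
 * the GF100 shader clock (1400 MHz) is only an assumed placeholder,
 * since Nvidia hasn't published final clocks yet. */
#include <stdio.h>

/* 1 FMA = 2 floating-point ops (multiply + add) */
static double peak_gflops(double fma_per_clock, double clock_mhz)
{
    return fma_per_clock * 2.0 * clock_mhz / 1000.0;
}

int main(void)
{
    double gf100_shader_mhz = 1400.0;                    /* assumption */

    printf("GeForce GF100 (64 DP FMA/clk):  ~%.0f GFLOPS\n",
           peak_gflops(64.0, gf100_shader_mhz));
    printf("Full GF100    (256 DP FMA/clk): ~%.0f GFLOPS\n",
           peak_gflops(256.0, gf100_shader_mhz));

    /* HD 5870: DP runs at 1/5 of SP rate, i.e. 320 DP FMA per clock
     * at 850 MHz -> the well-known 544 GFLOPS figure. */
    printf("Radeon HD 5870:                 ~%.0f GFLOPS\n",
           peak_gflops(320.0, 850.0));
    return 0;
}
```

Under those assumptions the quote's conclusion holds up: the 5870 sits around three times the GeForce GF100's peak DP rate, while an unrestricted (Tesla-class) GF100 would edge past it.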
 


Interesting that they are nuking the DP rate for the desktop parts, yet also trying to say how hard they are pushing GPGPU.
 
I was just saying it makes it look counterintuitive; I know the money is to be made in the professional setting (including support),

though it loses them sales from programmers who code in OpenCL and would pay for a desktop part like Fermi (I am a student, so... poor).
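For what it's worth, an OpenCL program can check at runtime whether a card even exposes double precision by looking for the cl_khr_fp64 extension string. This is just a generic sketch against the standard OpenCL API, not anything Fermi-specific:

```c
/* Minimal sketch: list OpenCL devices and report whether each one
 * advertises double-precision support (the cl_khr_fp64 extension).
 * Uses only standard OpenCL 1.0 calls. Note: some drivers report a
 * vendor-specific fp64 extension instead, so a thorough check would
 * look for those strings as well. */
#include <stdio.h>
#include <string.h>
#include <CL/cl.h>

int main(void)
{
    cl_platform_id platforms[8];
    cl_uint num_platforms = 0;
    clGetPlatformIDs(8, platforms, &num_platforms);

    for (cl_uint p = 0; p < num_platforms; ++p) {
        cl_device_id devices[8];
        cl_uint num_devices = 0;
        clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_ALL, 8,
                       devices, &num_devices);

        for (cl_uint d = 0; d < num_devices; ++d) {
            char name[256] = {0};
            char exts[4096] = {0};
            clGetDeviceInfo(devices[d], CL_DEVICE_NAME,
                            sizeof(name), name, NULL);
            clGetDeviceInfo(devices[d], CL_DEVICE_EXTENSIONS,
                            sizeof(exts), exts, NULL);

            printf("%s: double precision %s\n", name,
                   strstr(exts, "cl_khr_fp64") ? "supported"
                                               : "not exposed");
        }
    }
    return 0;
}
```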
 
Theoretically, by merging its gaming GPUs with its GPGPUs, Nvidia will save itself the cost of producing two different dies and driver sets, so it can't be all bad. I believe the first generation and a half will be very expensive as well, given that GPGPUs are for industry/money-making tasks. The cost to buy one will be high, but over the generations the cost should come down and a wider/more affordable range of GPGPUs will also be released.
 
Truthfully, I don't know how Nvidia's corporate structure works, but basically what I'm trying to say is: if Nvidia spends R&D money on making a gaming GPU, it would also need to spend R&D money on producing a separate GPGPU for the industry end. If Nvidia were to produce only one GPU designed for both industry and gaming, then theoretically it can save some R&D money because it only has to produce one die.

Similarly with its driver programmers: you have X people working on DX drivers for gaming cards and Y people working on OpenCL drivers for its GPGPUs. Now that you only produce one type of GPU, you can supposedly afford to fire a handful of your programmers and thus save some money in that area.
 
In reality this will probably be the most differentiated professional-vs-graphics release they have made. If the rumors hold true, Tesla will have different binning in clock and core count, with certain features enabled (such as ECC).

Up until now the Quadro cores were identical to the gaming cores (the clocks were often a bit lower on the professional cards, mind you). The only differences were in drivers and card features (such as early support for DisplayPort, and proportionately more RAM).
 

+1. Nvidia reduced DP performance to levels below Cypress, but this won't have an effect on game performance, only on GPGPU programs.
 