GF100 (Fermi) previews and discussion

I think Nvidia put too much emphasis on the GPU computing market, when the know-how has not matured enough for many institutions to fully utilize it.
Too bad.
 
@elie3000 and nFail, some people would rather have the fastest single GPU since you don't have the problems that come with SLI and Crossfire. And remember, there are people who don't care about price and will tri-SLI (or quad, if allowed) the GTX 480 to get the top performance.

Though I do think cards should be compared by price, not by single vs. dual GPU.

EDIT: @may1, yes, they are pushing GPGPU, and the work being done on it now is groundwork that won't have to be redone later for supercomputing. I do disagree with using the GPU for in-game physics, though, since it kills 3D performance. Quad cores are getting more and more common, with 6- and 8-core parts on the horizon, and they can handle physics just fine; Havok is an example of this: great physics using only the CPU.
 



Whether people were brand-loyal morons back then doesn't mean anything to today's market.

Besides that, who in their right mind would compare a 3870 to a 9800 GX2? The X1950 XT compared to the 7900 GX2 is closer to this situation.

The simple fact is that people are going to have to stop grading things on what will eventually be meaningless metrics.

Instead of folks logically sitting down and talking about where dual GPUs fail and where they win, we get useless ranting about how "the performance increase doesn't matter because dual GPU doesn't count." Well, there are many viable reasons not to like a dual-GPU card... why can't we just talk about those?

The number of GPUs will soon mean no more than the number of cores in the GPU itself. No sane person goes on about how ATI needs twice as many shader cores to get the same performance as Nvidia, because they are fundamentally different approaches. Do companies have to put on the same heat spreader and smoosh the 5970 onto one package for it to count? Who cares?

What we have is one company that wants to build a huge die and one that does not. Right now it could go either way. The high end seems to agree that a single GPU is better, yet the price/performance and profitability seem much greater for the small die. There are legitimate problems with both strategies: if you want raw power, dual GPU might not matter; if you want to avoid stability and consistency issues, a single GPU is currently leaps ahead. So we accomplish nothing without looking at the products and comparing them rationally. It is not cut and dried.

If this keeps up, we are going to hear about how "dual GPU doesn't count" years after the engineers figure out how to make it work perfectly analogously to a single GPU... I'd much rather we all get it straight now, before I have to facepalm preemptively at the post about how the 7970 doesn't count since it uses 8 GPUs instead of the 4 in the GTX 690...
 
 


You do not buy graphics cards for future performance; you buy them for what they do now. No one bought a DX9 card for DX9 games, and no one bought DX10 cards for DX10 games.
 
Besides that, who in their right mind would compare a 3870 to a 9800 GX2? The X1950 XT compared to the 7900 GX2 is closer to this situation.

Was just thinking of that myself. But the fact is, a LOT of ATI guys on this very forum argued not too long ago that you could not compare single GPUs to dual GPUs.

Compare these setups on price/performance:
470 vs. 5850
480 vs. 5870
5970 vs. 480
2x 470 vs. 2x 5850
2x 480 vs. 2x 5870
2x 480 vs. 5970

That covers every matchup I want to see. We'll see who wins the performance crown; then we'll look at price and see who wins overall.
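If anyone wants to rank those matchups once real benchmarks land, here is a rough sketch of the FPS-per-dollar math in Python. Every price and FPS number in it is a made-up placeholder, not benchmark data; swap in real averages when the reviews are out.

```python
# Rough price/performance ranking sketch. All prices and FPS figures
# below are invented placeholders, not measured results.
setups = {
    "GTX 470":    {"price": 349, "avg_fps": 60},
    "HD 5850":    {"price": 299, "avg_fps": 55},
    "GTX 480":    {"price": 499, "avg_fps": 75},
    "HD 5870":    {"price": 399, "avg_fps": 68},
    "HD 5970":    {"price": 599, "avg_fps": 90},
    "2x GTX 480": {"price": 998, "avg_fps": 130},
    "2x HD 5870": {"price": 798, "avg_fps": 118},
}

# Sort by FPS per dollar, best value first.
for name, s in sorted(setups.items(),
                      key=lambda kv: kv[1]["avg_fps"] / kv[1]["price"],
                      reverse=True):
    print(f"{name:12s} {s['avg_fps'] / s['price']:.3f} FPS per dollar")
```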

I do find it sad that ATI fanboys need to hold on to the fact that a dual-GPU card is the only thing that can beat a single 480, then go bonkers when someone wants to compare 2x 480s to the dual-GPU 5970...
 
I don't get the argument "what about games that don't work with SLI/Crossfire." So what? If 1 in 10 games doesn't work with SLI/Crossfire, then the 480 will win that one; the 5970 will win the other 9 anyway.
 


Because to some people this matters. What if the one game you are going to be playing a lot doesn't support either? Then you would want the fastest single-GPU solution.

Though personally I think cards should be paired against each other based on price; then again, being poor means I can't have the fastest solution anyway.
 


That's why the 300 W limit counts. Making a graphics card isn't just about making the biggest and fastest thing possible; you also need to meet certain engineering standards or it counts for nothing.
 


What if the game you wanted to play did support SLI or Crossfire, and that made the game much more playable?

You can't make your argument work in any way. If SLI/Crossfire isn't supported, the 480 will win, but in all cases where it is supported, the 5970 wins easily. You make it out like SLI/Crossfire is a bad thing when it's a clear improvement over a single GPU in the clear majority of games.
 


SLI/Crossfire can be a bad thing. There are situations where even with the extra GPU the minimum FPS isn't helped much (e.g., 5870 vs. GTX 295, where the 5870 usually had better minimum FPS), and that is what counts.
 


Well, OK, I can understand that. There is validity in what you say. However, there are facts to consider, and these are just FACTS:

1.) FACT: Intel was convicted of antitrust activity that damaged AMD when AMD's Athlon 64 was more advanced than the P4. This left AMD unable to afford R&D, and many insiders believe AMD would have had a CPU superior to the i7 by now if it hadn't happened.

2.) FACT: You said that AMD is money-grubbing, but they didn't even raise the price of their Radeons to at least match that of the GTX cards they outperform. The HD 4870 remains to this day $55 cheaper than the GTX 260 despite matching the 260's performance. The HD 5850 is still cheaper than the GTX 285 by $44 despite beating it in every game in which they've both been tested, along with using less energy and running cooler. They haven't raised prices on anything even though, from a performance standpoint, they would be justified in doing so. On the other hand, nVidia is NOT justified in having their prices as high as they are, but they haven't dropped them. Remember when ATi released the HD 4870? It was amazing how nVidia found room to drop the prices of the GTX 260 and GTX 280 by some 72%! Now think: if they were able to drop the price of their two top products by 72% and still make money, just how badly were they screwing their fanboys to begin with?

3.) FACT: DirectX 11 didn't come out 18 months ago because nVidia refused to comply. Since ATi didn't have the presence it has now, Microsoft allowed itself to be bullied by nVidia even though ATi was willing to go forward with it. DirectX 10.1 was the result after Microsoft let nVidia remove what it didn't want to implement, making it no longer DirectX 11. Then, as the final insult after all this, nVidia didn't even bother making any DX10.1 cards until the GT 2xx series and kept the prices of its DX10 products higher than ATi's DX10.1 parts.

4.) FACT: There have been several reports, including one from anandtech.com, that nVidia was using intimidation and arm-twisting tactics to force review sites to lie, or at least tell half-truths, when comparing nVidia to ATi in performance tests. Also, any site that mentioned that the GTS 250 was actually a relabeled 9800 GTX+ was in danger of never receiving another nVidia part for review. Another thing nVidia did was refuse to allow review sites to compare nVidia to ATi parts in games that it did not specifically supply to the review site.

5.) FACT: Intel CEO Paul Otellini claims "We never did anything anyway" but Intel still coughs up $1.25 billion in cash to AMD. I've never known Intel to be in the charity business, have you?

6.) FACT: Intel tried to get the FTC to bar Commissioner J. Thomas Rosch from sitting in an antitrust enforcement action against it, because they say that Rosch, who has been an FTC commissioner since 2006, had advised Intel on a range of antitrust issues. Pardon me, but if I were as innocent as Intel repeatedly claims to be, I would be OVERJOYED that someone who had been on the inside of my organization/corporation was on the panel. They would know for sure that I was innocent! I wouldn't try to have them removed!


These are not just money-grubbing things for a corporation to do; they are DISHONEST things for a corporation to do. AMD did have the TLB errata bug, but all that really happened there was a manufacturing defect that anyone could have made. AMD didn't keep it hush-hush until something went wrong; they informed the public as soon as they found out. Since they couldn't afford to replace the chips that were sold, they did the only thing they could do, which was release the fix and suffer the PR nightmare. I'll accept it when a company makes an honest mistake, but I will NEVER accept deliberate dishonesty. Intel and nVidia are both VERY guilty of that; AMD isn't. Even Fudzilla didn't attack them over the TLB bug:
http://www.fudzilla.com/index.php?option=com_content&task=view&id=4629&Itemid=1

I'm an honest man and I searched for any article or review that called AMD underhanded, dishonest, sneaky, greedy or unfair. I found ZERO. Intel and nVidia on the other hand... well you know as well as I do. These are the facts, and ONLY the facts.
 


But whether two GPUs beat one, or one beats three, is wholly unimportant to a consumer unless they are searching for frivolous things to base their opinion on.

In the end I base my purchase on a combination of the following:

- Price
- Performance
- Special Features
- Physical Size
- Power consumption
- Heat
- Compatibility/stability

I don't give a rat's ass if it takes 18 GPUs... If in the end I spend the least possible and get a powerful, reliable system that fits in my box without limiting my expansion slots or overclocking, I am a happy camper.
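If you wanted to make that trade-off explicit instead of eyeballing it, one purely illustrative option is a weighted score over those criteria. Everything below (the weights, the card names, the 1-10 scores) is invented for illustration, not a recommendation.

```python
# Toy weighted-score sketch over the purchase criteria listed above.
# Weights and per-card scores (1-10) are purely illustrative.
weights = {
    "price": 0.30, "performance": 0.30, "features": 0.10,
    "size": 0.05, "power": 0.10, "heat": 0.05, "stability": 0.10,
}

cards = {
    "Card A": {"price": 7, "performance": 9, "features": 6,
               "size": 5, "power": 6, "heat": 5, "stability": 8},
    "Card B": {"price": 9, "performance": 7, "features": 7,
               "size": 8, "power": 8, "heat": 8, "stability": 7},
}

# Higher total means a better overall fit for these (made-up) priorities.
for name, scores in cards.items():
    total = sum(weights[k] * scores[k] for k in weights)
    print(f"{name}: {total:.2f} / 10")
```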

It seems that folks like to give one method of achieving performance the "purity" moniker over another. Thus, regardless of other considerations, doing it one way is somehow not comparable to the other because of some pointless line they drew in the sand themselves.


What I want to see is every matchup possible... and an absolute end to the ridiculous spamming of "2 had better beat 1," "2 doesn't count," blah blah blah. There is no fundamental reason two GPUs have to beat one; the GTX 480 could very well beat a 5970, and I could still consider the 5970 a good card if its price is low enough to warrant the compromises in the other areas.

I can't stand dual-GPU cards. Until the scalability and reliability (and the overclocking) are improved, I will never buy one... but that does not mean they are any less relevant for what they are. The performance is still there, and it is exceptional for the price. That being said, I do enjoy dual-GPU setups... just not smashed into one card. But just because I don't like dual-GPU cards for my own systems does not mean I don't see their potential and understand their place in the market; they are just not for me, yet anyway. (A bit off topic... but I'd consider getting one if I could easily overclock just one core for use with windowed DX9 games.) I buy a graphics system, not a GPU. I usually end up with two cards... but a product is a product, worthy of being compared to another even if I don't want what it brings over the other.
 


In that case you can always switch off the second GPU. The thing about having two GPUs is that it gives you a choice too.

I just can't understand why anyone would claim the 480 is somehow better than a 5970. Chances are they will cost close to the same, and the 5970 will probably smoke the 480 in at least 8 out of 10 games.

Not only that, but the 5870 can probably overclock to beat the 480 if the rumours are true. Also, ATI is holding back the 2GB Eyefinity Six card, and that might even be enough; even if it isn't, it's pretty certain the 1GHz 5870s will be.
 


I agree completely. Whatever it takes to get the job done. The HD 5970 has 2, the GF100 has a maximum of 512 (although they didn't say how many are in the GTX 480). The rest is semantics. It's like listening to people when they first complained about multi-core CPUs. :sol:
 
The argument was just used in whatever way one wanted to make a point, with the GTX 295 beating the 5870. Never mind embarrassing the 5850. But as soon as you made that point, you heard the whining from the ATI camp: it's a dual GPU. Now they want to have their cake and eat it too with the 5970.
 
The difference, notty22, is that we all knew the 5970 was coming out soon after the 5870. Where is the dual 480? It's not coming out and you know it.
 


It all depends on price. If the GTX 480 costs the same as the 5970, there's no reason to buy it (some people still will, just because they want the fastest single GPU, or to run four in quad SLI), though if it is priced at, say, US $500, then it might be worth it over the 5970.

Something to keep in mind is that the Fermi chips have to use the CUDA cores (compute shaders, stupid alternate naming) for tessellation, which might falter a bit under heavy amounts of FSAA (say 8x+), though that is something only post-release benchmarks will tell.

And I am not saying the 480 is better than the 5970; it's just that some people (not myself) would rather have a faster single GPU, usually because of power and heat. But we have to wait and see how much power the GTX 480 actually draws; a good estimate is probably 225-250 W, with power connectors for up to 300 W (possibly for overclocking).
 


Well my problem is that I'm a Political Science major and I tend to have political views about EVERYTHING. I choose AMD because I don't want to see underhanded and dishonest businesses succeed. Who knows, maybe AMD is just as bad as the rest but they haven't shown it yet and they've had longer to show it than nVidia has since AMD has been around so much longer. Intel has shown dishonesty several times and so has nVidia. As long as the AMD products are competitive, that's good enough for me. I'll probably never buy the state-of-the-art anything so as long as AMD has what I need, until they show themselves to be as bad as Intel or nVidia, I'll stick with them. If they ever do show themselves to be that way, I will no longer have a preference and just buy whatever because I won't care anymore. :sol:
 


Check yourself there, pal. The GTX 480 has somewhere near 512 processors. :sol:
 


I agree; people did say that, and now they're turning around and using it as an excuse for the 5970. Personally, it was always about price point for me: $400 vs. $500 with little to no gain (depending on the game).

@nFail, the only way I see a dual GTX 480 happening is by disregarding the PCI-SIG power limit on cards.
 


Rumor has it there will be two versions of the card, one with 480 SPs and another with 512 SPs.

Personally, I hope they don't do that, or at least that they call them by different names; otherwise there will be too much confusion.
 


Fermi uses the PolyMorph engine to do the tessellation; it is independent of the compute shaders. The PolyMorph engine contains all of the geometry units, and there is one per SM cluster.

I do not believe the common consensus, which I used to hold as well, is correct. The GF100 is likely faster than Cypress at tessellation at all times, AA or not. The issue seems to be that either the shaders on Fermi are very slow (unlikely) or tessellation is just a very insignificant cost to performance on its own. That is, computing the polygons takes far less time than applying all of the textures and image effects to those polygons; i.e., it won't matter much if the tessellation is 100% faster when it takes 1/100 of the time compared to applying AA.
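To put a rough number on that intuition: if tessellation is only a sliver of the frame time, even doubling its speed barely moves the total. Here is a back-of-the-envelope sketch with an assumed (not measured) frame-time split; the 0.2 ms / 19.8 ms figures are made up purely to illustrate the argument.

```python
# Back-of-the-envelope frame-time model. The split below is invented:
# tessellation/geometry is assumed to be a tiny slice of the frame,
# shading/texturing/AA the rest.
tess_ms = 0.2      # assumed tessellation time per frame
shading_ms = 19.8  # assumed shading + texturing + AA time per frame

def frame_time(tess_speedup):
    """Total frame time if only the tessellation portion gets faster."""
    return tess_ms / tess_speedup + shading_ms

base = frame_time(1.0)
fast = frame_time(2.0)  # tessellation made 100% faster
print(f"baseline: {base:.2f} ms, 2x tessellation: {fast:.2f} ms "
      f"({(base / fast - 1) * 100:.2f}% overall speedup)")
```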
 


I am quite sure there will be a multi-GPU GF100. Don't get too caught up in rumors. We heard about heat issues, then we saw that bastardized driver that cooked GPUs; the problem reported may have had nothing to do with the GPU itself. The GTX 295 was essentially a pair of hot-rodded 260 cores. Their dual-GPU solution may come in the form of dual 470s rather than 480s (maybe even some odd derivative that never sees the light of day on its own). I don't expect anything so bold for a revision or two, but I definitely expect to see them. I would guess around Christmas, lol.
 
Aren't GPUs designed to scale using multiple sectors of the same calculation group? Aka... multiple cores? So could someone explain to me why the cores need to exist on the same chip to be effective?

I know that additional SLI/Xfire configurations give diminishing returns. The calculating load is split between the chips, and the frames need to be repackaged and sent out through the primary card once complete. The diminishing returns happen because the calculated results between the cards don't sync up exactly, and the cards spend overhead syncing up the frames instead of sending them out the door on time. Or is it something else?
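A crude way to picture that is to charge each extra GPU a fixed sync/driver overhead, which is why the scaling tails off as you add cards. Everything in this sketch is an assumption for illustration, including the 10% overhead figure; it is not a measurement of any real SLI/Crossfire setup.

```python
# Crude multi-GPU scaling model: ideal speedup is the GPU count, but
# each extra GPU adds a fixed synchronization/driver overhead.
# The 10% overhead figure is invented for illustration.
def effective_speedup(num_gpus, sync_overhead=0.10):
    """Approximate speedup after overhead from keeping the GPUs in sync."""
    return num_gpus / (1 + sync_overhead * (num_gpus - 1))

for n in (1, 2, 3, 4):
    print(f"{n} GPU(s): ~{effective_speedup(n):.2f}x")
```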

If the connecting hardware and drivers allow two or more separate chips to operate as though they were a clean single die, what should it matter that they aren't actually a single chip? If two Evergreen chips are cheaper to make than a single but larger Fermi chip, and paired Evergreens operate faster than a single Fermi, who cares if there are multiple chips?

Gamers just care how many dollars it takes to get the most FPS. Whoever offers the best FPS for a given price will win, regardless of how many chips the card has on it.

You're correct, though engineering something that can operate latency-free (or at least latency-light) while separated by several centimetres is no easy task. There will always be some overhead since the primary card will always have to do a touch more work, but we are approaching the point where, in the next 2 or 3 generations, the performance per transistor of a multi-GPU part and a single-GPU part will be very nearly the same. We are already seeing the "driver issues" difference disappear quickly; it will be totally gone soon.

So far as I am concerned, the only time I notice I even use Crossfire is when I update drivers and forget to turn it back on, or when I play a DX9 game (there are still a lot of these) and am unable to use Crossfire/SLI in windowed mode (this is a big issue for Warcraft and other RPGs for me).
 