GF100 (Fermi) previews and discussion

Status
Not open for further replies.
They're saying what Rys is saying, but omitting what this really means, other than: "These changes seem logical, and encouraging, but without knowing clock speeds actual shader performance is anyone's guess."

Rys tries to answer this, and goes on to explain how this works and whether it's effective or not.
Scaling down from here, going by what we do know rather than suppositions, means clocks have to make up for fewer CUDA cores and less fixed-function hardware.
The only thing Kyle et al. came in with is that we don't know the clocks, but going by history and looking at smaller iterations of the same architecture, we can conclude those clocks aren't sufficient to overcome the fixed-function hardware scaling loss.
 
I'd be interested to see if Fermi can help with games that are traditionally CPU-bottlenecked.

The hair demo looked pretty cool, but at 23 fps it'll have to be at least 5 times faster before anyone can think about using it in a game.

I have to say, though, I'm pretty excited about Fermi; I think it will live up to the hype, and I'm glad to see nVidia moving away from quad-SLI solutions. I bought a 1050-watt power supply to be future-proof; I wonder how that's gonna work out for me.
 

450 on that die? I don't see that for at least a year or more. 2 billion transistors!! I can't imagine a scenario that gives high yields out of the gate. Maybe after the process matures.
 
Here's a good example of PR. In my link, notice 2 things: the game, and the improvements.
The game is HAWX, and the improvements being shown are AA usage, at 4x and at 8x.
Now, we know 2 things about the previous nVidia series: they didn't do well at all using 8xAA, and secondly, they weren't DX10.1.
Now, DX10.1 makes for a nice increase in fps due to one less pass, and that applies to the 4xAA; and if we look around at 8xAA in various benches, the G200 series fell quickly in fps using it, and it was often ignored by many of its users.
To tout now both the accomplishments of DX10.1 and the new gen over the woeful 8xAA fps of last gen's perf is pure PR.
http://www.hardocp.com/image.html?image=MTI2MzYwODIxNHh4VHN0ekRuc2RfMV8yNV9sLmdpZg==
 


You're making an assumption though: that the amount of work that can be done per clock is the same as on the G200 series. And considering all the work NVIDIA put into cleaning up and optimizing the pipeline, I don't think that will be the case. We'll see, but everything I've read for the past year (that's actually credible 😀) points to NVIDIA being back on top.
 
http://www.guru3d.com/article/nvidia-gf100-fermi-technology-preview/1

From page 3:

One thing I'd like to add is that a lot of improvement has been made in the ROP side of things, the AA performance will go up significantly, in fact if I can sidetrack and relate directly to a game; take HAWX for example at 8xAA it will perform roughly 2.33 times faster than a GeForce GTX 285.

Holy O_o; that's no small improvement if true...

From page 8:

For our second demo we'll look at a side by side comparison of a GF100 (left screen) and GeForce GTX 285 (right screen) based graphics card rendering Far Cry 2. Settings are 1920x1200, 4xAA, 16xAF with ULTRA settings (Guru3D tests at very high). The time-demo is Ranch Small.

System specifications:

Core i7 960 processor @ default 3.2 GHz
6 GB Memory
Windows 7 64-bit
The average FPS end result for the GTX 285 is 50 frames per second, and just over 84 frames per second for the GF100.

So about 68% faster there, which would take back the crown from even the 5970, if those results hold across less NVIDIA-friendly titles...
 


:lol: :lol: :lol: :lol: :lol: :lol: :lol:

Check this out:
http://www.pcper.com/article.php?aid=820&type=expert&pid=6

Testing Configuration
ASUS P6T6 WS Revolution X58 + nForce 200
Intel Core i7-965 @ 3.33 GHz
3 x 2GB Corsair DDR3-1333 MHz
Intel X25-M G2 160GB SSD

And wait for it... wait for it... the 5870 at the same quality settings gets an amazing 83.8 FPS. Does it hurt, troll?
 


My point was that basing your whole argument on a benchmark which we know is not 100% reliable is very stupid. I replied with a cherry-picked benchmark also. TechPowerUp and other sites give the 5870 less than 75 FPS in that benchmark.

The whole point of this move by nVidia was to get some fanboys' hopes up. This showcase is very ambiguous to say the least; we don't even know which card is there, the 360 or the 380.
 
It could be a 911 for all we know; I don't think GF100 has a retail naming scheme yet 😉 Worst case is it's the top-end GPU with unfinalized clocks. If it's the top end, NVIDIA has a problem. If it's the second from the top, they may have an ace up their sleeve. But usually you show off your top-end hardware, so it's safest to assume that.
 


That is what I said at the beginning of the thread:

1. The GTX360 tested (448 Cuda processors) =>
- 5870 ~ GTX360
- 5870 < GTX380 (512 CP) < 5970

2. The GTX380 tested... => nVidia screwed.

In my opinion, we're looking at the first scenario, with the GTX360 being better than the 5870 by 15% max. Otherwise nVidia has some problems on its hands, and no amount of PhysX can lift that one.
 


Now I need the string of lols!

Using a faster CPU (both in speed and model)? That's hardly fair. Same with the SSD, as that would eliminate a major bottleneck right there. Oh, and what about RAM timings?

If the GF100 is consistently >50% faster than the GTX 285 (which was the card I was comparing to), that will make it the performance champ, period.
 
OK, so some have to have it shown to them. I was hoping to avoid this, and hoping people would use common sense first when dealing with hype.
8xaahawx.gif

Now, 2.33 × 36 equals 84, so it's about 20% faster than the 5870, as expected. As for the FC2 numbers, there are too many ways to fudge those as well, and guess what nVidia's PR will be doing?
As Ape says, hope for the best, expect the worst.
 


Faster CPU by what... 130 MHz? And the model difference is... what... none, except the unlocked multiplier. And we all know that SSDs help FPS dramatically. Read my post below that in reply to randomizer. I replied mostly to give you something to think about before saying the 5970 will lose its crown to the GTX380. Come on, have a little common sense there. This hype is the same as the hype that promised the 5870 a 40% performance boost over the GTX295. It's plain stupid to think like that.
 
^^ Considering it's already been deprecated? Frankly, which made more sense for NVIDIA: going right for DX11 [which required a total redesign anyway], or slipping in 10.1 with a new API and redesign around the corner?

And again, I'm not trying to compare the GF100 to the 5870 right now (like you are trying to do); I was comparing against the 285, as those are the numbers we have to work with on a common test system. Considering how many benchmarks there are of the 285 vs the 5870, and the first numbers of the GF100 vs the 285, it's a safe assumption that if the GF100 offers ~50% more performance than the 285, the GF100 will end up as the performance king.

Basic math, people: if card A (5870) is X% stronger than card B (285), and card C (GF100) is 50% stronger than card B, you can figure out which card (A or C) is stronger without directly comparing the two. And frankly, I haven't seen many benchmarks where X is >50%...
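That transitive comparison can be sketched in a few lines of Python. Note the per-game FPS figures and the 5870's leads over the 285 below are made-up placeholders for illustration, not benchmark data; only the 50% GF100-over-285 estimate comes from this thread:

```python
# Rank card A (5870) vs card C (GF100) using only each card's measured or
# estimated lead over a common baseline, card B (GTX 285).
gtx285_fps = {"HAWX 8xAA": 36, "Far Cry 2": 50, "Crysis": 30}   # placeholder B scores
hd5870_lead = {"HAWX 8xAA": 1.30, "Far Cry 2": 1.40, "Crysis": 1.35}  # A vs B, assumed
gf100_lead = 1.50  # C vs B: the pessimistic 50% estimate used in this thread

for game, base in gtx285_fps.items():
    a_fps = base * hd5870_lead[game]   # inferred 5870 score
    c_fps = base * gf100_lead          # inferred GF100 score
    winner = "GF100" if c_fps > a_fps else "5870"
    print(f"{game}: 5870 ~ {a_fps:.0f} fps, GF100 ~ {c_fps:.0f} fps -> {winner}")
```

With any assumed 5870 lead below 50%, the GF100 comes out ahead in every row, which is the whole argument: no head-to-head benchmark is needed, only each card's ratio against the shared 285 baseline.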

Everything NVIDIA puts out will explain how the GF100 is faster than the 285. If we know how fast the 285 is, how fast the 5870/5890 is, and how much faster than the 285 the GF100 is, we can guesstimate to a certain degree how fast the GF100 actually is in games, independent of testing against the 5000 series cards themselves. Trying to directly compare performance on a game-by-game basis at this point is idiocy; based on what numbers I'm seeing, >50% performance over a 285 seems reasonable, and if you add 50% to all 285 scores, the GF100 comes out on top the majority of the time.

As we get new numbers, we get new conclusions. But based on this small sample, the GF100 comes out on top. What's so controversial about that?
 


You compared the 5970 to the future GF100 cards. Come on. Now you try to spin it.
 
Gamer, first of all, you've always claimed DX10.1 was worthless and didn't really give those kinds of benefits, which now you're embracing and accepting.
Secondly, who here is saying that Fermi won't be better? There's a huge difference between "a lot better", as you've been saying, which means it'll be much better than a 5870, and "better", which means it's as expected.
You just don't make a chip 50% larger and not expect it to be better than the competition.
My point is about the hype, which you've sidestepped here. That's been my whole point. And guess what? It will be compared to the 5870 as well as the 5970.
But getting to my point on nVidia PR hype: it's overblown. It isn't 2.33 times better on its own; you have to acknowledge the improvements from DX10.1 and its better AA resolve before you get down to actual perf improvements, as the AA gains won't carry over to every game, but the arch improvements will. And what they're showing omits what's really happening here; I'm just filling in the blanks, as nVidia won't be mentioning the DX10.1 improvements, not their PR people, that's for sure.
Now, yes, since you're limiting your comments on the perf as it pertains to the 285, you're right, Fermi will be better; at least we all hope so.
 
You compared the 5970 to the future GF100 cards. Come on. Now you try to spin it.

Not directly, and that's the point you don't understand. We're not going to get concrete FPS numbers, much less against a 5870/90 on a single test system. Every number we get will be "X% better than the 285". Since we know the 285's numbers, we can extrapolate from there [IE: we can figure out that the GF100 gets 84 FPS in HAWX because we know it's 2.33× faster than the 285, and we know how the 285 performs at those settings].

What you are trying to do is use a game-by-game basis as a way to draw conclusions, which is insane [IE: that because the 5870 runs at the same speed in HAWX, the GF100 will be about as good as the 5870]. I'm taking a number (which I admit is an educated guess at this point) that represents % improvement over the 285, and applying it across the board. The idea is to get a general idea of GF100 performance, without worrying about game-by-game comparisons. We have one case of a 133% improvement (HAWX), and another of 68% (FC2). So I'm using an even lower (pessimistic) number of 50%. Take the 285 numbers, add 50%, and tell me which card runs faster in the majority of cases; that's how I'm guesstimating performance.

To Jaydee: Call me when 10.1 is used in more than a handful of games. It's useless if it's not used. I call 10.1 useless, you call PhysX useless; fair's fair, I say. The only difference is that PhysX is actually used...
 
I'll dumb it down: you can argue this two ways:

1: Argue that in HAWX, the 5870 and GF100 score the same, and therefore the 5870 and GF100 offer similar performance on average.
2: Argue that because the GF100 is X% (I'm using 50% for the time being) faster than the 285 on average, the GF100 is X% faster (a negative number would indicate slower) than the 5870 on average. [Obviously, game-by-game benchmarks would give the 5870 a few victories, just like the 285 wins in a handful of benchmarks...]

Hallowed_Dragon is arguing the first case, I'm arguing the second.
 
Regardless of the results, this will be interesting, I think, as the ATI and nVidia approaches to the same basic result (DX11) are now so drastically different. It will be very interesting to see the relative strengths and weaknesses of the cards once credible benches come out. Perhaps we will see less consistency in the results now (with one dominating in some games and getting crushed in others, all depending on optimizations and what the game stresses).
 