GF100 (Fermi) previews and discussion

Perhaps we will see less consistency in the results now (as one will dominate in some games and get crushed in others, all depending on optimizations and what the game stresses)

That would be bad for the average consumer, because in the long run he wouldn't have a clear picture of the performance of either card. A consumer likes to know which card wins almost every time, not which card wins the games that use some particular feature. So if he buys something, he'll never know whether what he bought will be good enough for future releases. I hope that doesn't happen.
 
"To Jaydee: Call me when 10.1 is used in more then a handful of games"
Im calling you out right now. Its nvidias PR dept thats using these lil used DX10.1 games here, so your point is frivilous at best, and only strengthens my point.
nVidias PR is using something, like gamer is saying, is unimportant, all the while, claiming credit for its improvements, without explanation, which I bring, and gamer, whatd you say? Its unimportant? Ill ask you again, are you connected to nVidia in any way? This is their attitude, and almost word for word.
I ask this, as it may explain a few things, and doesnt change what Im sating, but if it does, it only strengthens it yet more.
You talk like its not important, act like its not important, much like nVidia has, then, when their most impacting for change card theyve made possibly ever comes in waaaay late, and people are waiting, they just happen to use a DX10.1 game for their AA improvements. What a coinky dink.
Since nVidia does make DX10.1 cards, then your whole premise of doing it at all is somewhat pointless as well is it not?
Id be asking nVidia these questions, as somehow you didnt know, and still cant figure out why they used a DX10.1 game for their AA comparisons over their old cards that didnt use it, while we all know, it gives gains of 20% or thereabouts.
So, I hope I woke you up, and explain to me please why they chose this meaningless game? Coinky dink I think is what youre trying to say, but Id rather have your explanation
 
^^^ I love PR work from both companies. 2.33x more performance, my ###. This is almost like AMD's performance graphs starting at 50% and showing a HUGE difference between products when in fact there is only 30% or so. The only difference is that nVidia has bigger douchebags in their PR department who spin things better than the ones at AMD.
Some people fall for it, some don't. The more naive will always take the bait from the PR departments.
Both companies want your $$$; they don't care about your well-being or the fact that you like green more than red. If you are an idiot fanboy, you're just pushing the sales of some big organization that doesn't even know you exist.
What I don't like are some of nVidia's policies, but I don't have anything against their engineering department, which does a brilliant job sometimes.
If anyone is expecting these cards to be the best thing since sliced bread, then frankly you should continue being a mindless zombie for all I care.

/chaotic rant
 
I agree, they both do it, and this is nothing more than PR putting its best foot forward.
I'm trying to show that it isn't all rosy and pure here. Neither are the graphs from both companies as of late, like you said.
You can make an ant look really large if you use a microscope.
 
Well, there's mention of nVidia using a "quad" setup, which theoretically allows for better scaling for lower-end cards.
These won't be here until late Q3, from the rumors.
They won't impact DX11 anyway, but they will make money for nVidia when they do get here.
 


Agree. Part of the beauty (or idiocy, depending on your viewpoint) of DirectX is the ability of each vendor to define their own implementation of the API; all that matters is the result, and the implementation is up to the vendor to decide.

As I said, I'm guessing 50% based on NVIDIA PR. As we get more numbers, I'll adjust accordingly. Sounds about right for a brand new generation to me though...
 
JayDee, even I think the HAWX data is BS; that's why I'm using 50% as my base instead of 233%... 10.1 is essentially obsolete due to the emergence of DX11, a point you yourself have made on more than a handful of occasions. (Or should I start quoting some of our "DX10 is dead" arguments?) Even then, you can't possibly be arguing that the AA improvements in 10.1 lead to a ~50% performance increase. [putting aside the performance increase from the GF100 being a better card]

I also note I have questions about whether the AA algorithm in 10.1 even works with non-standard AA modes (TSAA, etc.), which would further diminish its usefulness.

As for NVIDIA's 10.1 cards, I doubt they are a unique design [i.e., I believe they are based on their DX11 cards, with the DX11 features left out].
 
Also: http://www.anandtech.com/video/showdoc.aspx?i=3721&p=2

Reading through now, will edit with opinions

EDIT1

At any rate, internally each PolyMorph Engine is still a simple in-order design. NVIDIA hasn’t gone so far as to make a PolyMorph Engine an OoO design – but because there are 16 of them when there used to be just 1, OoO hazards can occur just as they would in an OoO executing CPU. NVIDIA now has to keep track of what each PolyMorph Engine is doing in respect to the other 15, and put the brakes on any of them that get too far ahead in order to maintain the integrity of results.

To resolve the hazards of OoO, GF100 has a private communication channel just for the PolyMorph Engines that allows them to stay together on a task in spite of being spread apart. The fact of the matter is that all of the work that goes in to making a design like this work correctly is an immense amount of effort, and NVIDIA’s engineers are quite proud of this effort. They have taken the monolithic design of prior GPUs’ geometry units, and made it parallel. We can’t overstate how much of an engineering challenge this is.

However all of this work came at a cost, and not just the significant engineering resources NVIDIA threw at GF100. The other cost was time – we believe that the PolyMorph Engine is the single biggest reason that GF100 didn’t make it out last year. It’s the single biggest redesign of any component in GF100, and is something that NVIDIA had to start virtually from scratch on. When NVIDIA told us that designing a big GPU is hard, this is what they had in mind.

Now why did NVIDIA put themselves through all of this? Because in their eyes, they had to. The use of a fixed-function pipeline in their eyes was a poor choice given the geometric complexity that a tessellator would create, and hence the entire pipeline needed to be rebalanced. By moving to the parallel design of the PolyMorph Engine, NVIDIA’s geometry hardware is no longer bound by any limits of the pipelined fixed-function design (such as bottlenecks in one stage of the pipeline), and for better or for worse, they can scale their geometry and raster abilities with the size of the chip. A smaller GF100 derivative will not have as many PolyMorph or Raster units as GF100, and as a result won’t have the same level of performance; G92 derivatives and AMD’s designs both maintain the same fixed function pipeline through all chips, always offering the same level of performance.

Very, very interesting...
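
To picture the hazard being described, here's a quick toy sketch in Python (mine, not NVIDIA's design; the engine count, window size, and round-robin distribution are purely illustrative): several in-order engines chew through primitives in parallel, results are only released downstream in original submission order, and an engine whose next primitive is too far ahead of the release pointer simply stalls.

# Toy model (not NVIDIA's hardware): N in-order engines process primitives in
# parallel; a shared release pointer emits results strictly in submission
# order, and an engine stalls if its next primitive is too far ahead of it.
from collections import deque

NUM_ENGINES = 16   # mirrors GF100's 16 PolyMorph Engines (illustrative)
WINDOW = 32        # hypothetical "how far ahead" limit, in primitives

def geometry_work(prim_id):
    # Stand-in for vertex fetch / tessellation / setup on one primitive.
    return f"prim{prim_id}:done"

def run(num_prims):
    queues = [deque() for _ in range(NUM_ENGINES)]
    for idx in range(num_prims):                 # round-robin distribution
        queues[idx % NUM_ENGINES].append(idx)

    finished = {}        # completed work, keyed by submission index
    next_to_emit = 0     # global in-order release pointer
    emitted = []

    while len(emitted) < num_prims:
        for q in queues:
            # Back-pressure: stall an engine whose next primitive is too far
            # ahead of what has already been released.
            if q and q[0] - next_to_emit < WINDOW:
                idx = q.popleft()
                finished[idx] = geometry_work(idx)
        # Release everything now contiguous with the pointer, so downstream
        # stages always see primitives in the original API order.
        while next_to_emit in finished:
            emitted.append(finished.pop(next_to_emit))
            next_to_emit += 1
    return emitted

print(run(48)[:4])   # ['prim0:done', 'prim1:done', 'prim2:done', 'prim3:done']

Real hardware obviously does this with its private communication channel and hardware tracking rather than software queues; the point is just that the ordering guarantee, not the per-primitive math, is the hard part of going parallel.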

EDIT2

To differentiate themselves from AMD, NVIDIA is taking the tessellator and driving it for all its worth. While AMD merely has a tessellator, NVIDIA is counting on the tessellator in their PolyMorph Engine to give them a noticeable graphical advantage over AMD.

To put things in perspective, between NV30 (GeForce FX 5800) and GT200 (GeForce GTX 280), the geometry performance of NVIDIA’s hardware only increases roughly 3x in performance. Meanwhile the shader performance of their cards increased by over 150x. Compared just to GT200, GF100 has 8x the geometry performance of GT200, and NVIDIA tells us this is something they have measured in their labs.

So why does NVIDIA want so much geometry performance? Because with tessellation, it allows them to take the same assets from the same games as AMD and generate something that will look better. With more geometry power, NVIDIA can use tessellation and displacement mapping to generate more complex characters, objects, and scenery than AMD can at the same level of performance. And this is why NVIDIA has 16 PolyMorph Engines and 4 Raster Engines, because they need a lot of hardware to generate and process that much geometry.

That explains the Heaven benchmarks pretty well right there... That's a lot of power for tessellation...

EDIT3

Page 4 is dedicated to AA performance; didn't I just leave that discussion? 😀

Edit4

I'm making a conclusion on clock speeds: lower than the 200 series [god, that's going to get confusing...], but the cards will be faster overall [just like how the 200 series had lower clocks than the 9000 series].
 
There are no DX11 cards out from nVidia, so no, those cards aren't based on anything; they're DX10.1 cards.
Whatever excuses they now use are simply more PR. They're late, and they've shown us nothing.
Why didn't they use a DX11 game if it's so good? I couldn't care less about bungholio marks, worthless, and on a 1.0 release as well. It's interesting that it's new and shows what's been around for 4 months already, 7 by the time Fermi gets here, but beyond that, I'm not all that interested.
Where are the games?
Them using a DX10.1 game, when they could just as easily have used a DX11 one, means something, and having it both ways won't fly. They mask their DX11 perf for whatever reasons, most likely to keep up anticipation (again, PR), yet they use a useless, unimportant, not-needed DX10.1 game, and DX10.1's true purpose was to create improved fps by lessening the workload on GPUs while using 4xAA, and all the while they use it for their AA comparisons, and that's crap. I say it's too rare a game, and shows us nothing, according to their own attitudes. So there's really no AA improvement until we see it in DX11. The rest is pure PR. They want out there? They get out there.
I recall AT's earlier comments on why Fermi was so late; their engineers said 3.2-billion-transistor chips are hard to make work. I'll stick with that, thank you, not anyone else's excuses for its being late.
I agree, as I've said, this is the seed part for future generations for nVidia, and it shows less shader growth and more geometrical improvement compared to earlier iterations.
DX11 has done a lot, as I tried to tell you way back when, and allows for this very freedom, as it also will for devs, so now maybe you're getting it after all, once it's where you can understand it, or in nVidia's grasp.
So, since you've been saying XP is more important, and that DX11 won't be anything that important for another 2 years or more, what's the big deal here about all these never-to-be-used perf increments?
Did I miss something? Or is it now important all of a sudden? And Fermi being late is the supposed reason? To have something that doesn't count again? Like DX10.1?

OK, let's get real here: DX11 is important, the hell with XP, and yes, this is nice to have, but only per usage, which according to you won't be needed anyway. I somewhat agree here, as every DX model increases in ability as we move along, as does the hardware that supports it. But again, as long as it's nVidia's PR, we're supposed to forget all this and just accept what they're saying.
It's going to be tough having the new DX11 tessellation using XP, but who knows, maybe someone will make a fix for it.
 
Jaydee, stop it, please. From a consumer standpoint, DX11 isn't important yet, and won't be for some time. That being said, it's idiotic not to start getting the first architecture completed. I'm focused entirely on performance, period, and couldn't care less what DX version is used, provided everything is the same in the comparisons.

You argue they didn't use a DX11 benchmark, although to be fair, there are only two out there: Dirt 2 and Battleforge. Battleforge hardly uses DX11 beyond some shaders (no tessellation), and Dirt 2's DX11 mode kills performance (at least on ATI). Not exactly the best examples to show off the hardware (and we return to my "DX11 needs time before it's used" argument). They did use Heaven, but you argued that because it's tessellation-heavy, it's not a valid benchmark. What is, then? Something that doesn't use tessellation? DX9 only? What benchmark would be suitable to compare performance?

You argue NVIDIA is late; I argue ATI was early. Let's face it, the 5000 series, at least below the 5850, is too weak. The 5770 struggles in Eyefinity/tessellation, while the 4890 offers better P/P. The 5670 is beaten soundly by the similarly priced 4850. Until DX11 comes out, the 4850/4890 are better buys. Even the 5870 isn't that huge an improvement over a 285 [I'm NOT arguing the 285 is a better card, or even worth buying; I'm arguing that there's no performance reason for a 285 owner to upgrade to a 5870!]. And frankly, considering the total number of DX11 games when GF100 releases will be ~5 (we're at 3 if you count Stalker), it's really hard to argue the "early" point simply based on the lack of DX11 titles. Never mind TSMC's issues even making ATI cards; imagine if they also had to produce NVIDIA cards!

As for your last point, OpenGL can do tessellation fine in XP; that's a restriction unique to DirectX...
 
No, you're not focused on perf only, as you always doubted the DX10.1 perf increases, since they weren't in nVidia's interest. You denied them, and now, through a few PR calls, you're buying into the DX11 line hook, line and sinker. The problem here is, words don't lie, people do, and I don't put it beyond nVidia's PR department to do so, any more than it's wise to use wooden screws in a PC.
So, you argue there are only 2 DX11 games, all the while discrediting DX10.1 games while mentioning their rarity, yet you defend nVidia using one for obvious reasons? It's only luck, and thanks to ATI, that we have ANY DX10.1 games, else nVidia would maybe actually have had to give us something real in this long drawn-out drama.
As for me arguing it's tessellation-heavy, I think I agreed somewhat with someone who'd said it was, but it's better to see DX11 game usage, which you defend nVidia for not showing..... hmmm
Now all of a sudden it's ATI's fault? nVidia said it'll be here soon, while precious leader was holding up wooden screws? Or it'll be here by Christmas? Or Q1? Or January?
Yeah, right.
And all the while, Charlie gets burned by people when he told us all the truth. Now revisions of history get written? I don't think so. Let's see if he's right again, shall we? When Fermi comes out, do you think it'll have an A3 on it? If that's what it'll be, then who's late?

As for Eyefinity, it's meant more for business use on lesser cards, and it's free, whereas nVidia charged for their rendition until now, when ATI forced their hand. Just where have I said Eyefinity was good on lesser cards? Ever?
As for pricing, surely you're joking, right? Where are the DX11 nVidia mids? And what's the price of a 285, currently nVidia's best single GPU? I'm arguing there's no reason to buy a 285, period.
OK, and now we come to DX11's worth. Since you're all over the map on it, then the tessellation doesn't matter at all, right? So Fermi's doesn't matter either, right? By the time it does, according to your "worth it" timetable, nVidia will be staring at an R900 new gen or later.

So, let me know where I've gone wrong here. Let's brag up a part no one's seen, about things that won't be used, on OSes you won't use for years, because of perf seen in a DX10.1 game that's worthless, on a DX that was worthless, promoting AA whose improvements are disputable by your own words, while we wait another 2 years for this card to actually be worth having, while ATI came out and jumped the gun by 6 months, and nVidia is just right, only jumping the gun by 18 months, so you can go home and play 1 of maybe only 3 games in OpenGL on XP for perf reasons, which are still unknown.
Gotcha
 
Again, same debate. Here's the real future:

1- The Fermi 380 will be the fastest single GPU.
2- The 5970 will remain the top single card until nVidia releases its dual-Fermi card.
3- Dual Fermi is not possible now or in the near future because of the cost, the very big die size, and the 300W limit. Imagine the cooling it would take to cool a 400-500W card. A dual Fermi would have to cost roughly $800 to be profitable.
4- ATI tends to make buggy drivers and then improve performance a lot through Catalyst releases: 1000-2000 point increases in Vantage. ATI now has more experience and more time to optimize its DX11 performance.
5- Fermi will be the hottest single GPU ever seen, as usual. It won't be as efficient in W/fps as the 5870, which means very little room for OC.
6- Fermi will shine in GPGPU programs like F@H.
7- nVidia will have the fastest GPU configuration possible, but will only attract people with very deep pockets who don't care about efficiency and the environment.
 
GG. I vote for BEST SOLUTION.
 
No, you're not focused on perf only, as you always doubted the DX10.1 perf increases, since they weren't in nVidia's interest. You denied them, and now, through a few PR calls, you're buying into the DX11 line hook, line and sinker. The problem here is, words don't lie, people do, and I don't put it beyond nVidia's PR department to do so, any more than it's wise to use wooden screws in a PC.

The only stance I've had on DX11 is that tessellation would be computationally heavy, that another SM with more potential would be released, and that it would take 12-18 months for development to get moving. Hell, I was the one arguing that DX11 tessellation would give a performance hit, rather than the increase you were saying would happen. 10.1 made no sense with its superset, DX11, just around the corner. The price of going out of their way to focus on DX10.1 when it came out simply wasn't worth it for NVIDIA. Call it an extra, just like the AA in Batman: AA for NVIDIA, if you want.

So, you argue there are only 2 DX11 games, all the while discrediting DX10.1 games while mentioning their rarity, yet you defend nVidia using one for obvious reasons? It's only luck, and thanks to ATI, that we have ANY DX10.1 games, else nVidia would maybe actually have had to give us something real in this long drawn-out drama.

I said the HAWX bench was BS. I do point out we're working on the assumption they used 10.1 AA. If they didn't, then the bench is actually valid. I remind you though, when HAWX came out, wasn't it you who was saying that comparing a 4870 (DX10.1 level) to a 285 (DX10 level) was a fair comparison? Now you argue the opposite? Really?

As for me arguing it's tessellation-heavy, I think I agreed somewhat with someone who'd said it was, but it's better to see DX11 game usage, which you defend nVidia for not showing..... hmmm

I merely pointed out there wasn't any good DX11 title to show off with; Battleforge hardly uses DX11 features, and Dirt 2 benchmarks in DX11 are always underwhelming. If NVIDIA got ~50 FPS in Dirt 2 DX11 (~15-20 FPS better than ATI in this example), everyone would be going on about how even NVIDIA's monster couldn't hit 60 FPS, and what a big disappointment it was. And BTW, didn't we only see 5000-series benches like a week beforehand?

Right now, Heaven is the best way to directly track DX11 performance between the two architectures. Right now, NVIDIA is up.

Now all of a sudden it's ATI's fault? nVidia said it'll be here soon, while precious leader was holding up wooden screws? Or it'll be here by Christmas? Or Q1? Or January?
Yeah, right.

I admit I didn't think NVIDIA was that far behind... I do wonder how much was related to TSMC's initial yield problems, though...

And all the while, Charlie gets burned by people when he told us all the truth. Now revisions of history get written? I don't think so. Let's see if he's right again, shall we? When Fermi comes out, do you think it'll have an A3 on it? If that's what it'll be, then who's late?

We'll see. Charlie has said a lot that's been wrong, too. If you talk a lot, you're bound to get a few points right. It was only a few weeks ago he claimed only 448 cores...

As for Eyefinity, it's meant more for business use on lesser cards, and it's free, whereas nVidia charged for their rendition until now, when ATI forced their hand. Just where have I said Eyefinity was good on lesser cards? Ever?

Didn't you just do that? I do point out NVIDIA is giving their multi-monitor tech away via software for the 200-series cards (and below, perhaps?), so it is clear they did have the tech ready.

As for pricing, surely you're joking, right? Where are the DX11 nVidia mids? And what's the price of a 285, currently nVidia's best single GPU? I'm arguing there's no reason to buy a 285, period.

The 200 series is an expensive card. You can't draw conclusions about how expensive GF100 is based on that. I'd say that, based on ~$350 for a 5890, $400 for a GF380 (or whatever they call it) makes sense (assuming it wins on performance). I do expect cheaper lower-tier cards based on what I've read, though, so if that is indeed the case, NVIDIA might choose to eat a loss at the top tier... we'll see, but for a 380 GTX I'd guess $375-$400. Just a guess, though.

OK, and now we come to DX11's worth. Since you're all over the map on it, then the tessellation doesn't matter at all, right? So Fermi's doesn't matter either, right? By the time it does, according to your "worth it" timetable, nVidia will be staring at an R900 new gen or later.

For the consumer, DX11 is an afterthought; it matters when there are games. It's that simple. For NVIDIA/ATI, it matters now that there is the POTENTIAL for DX11 games: get the tech developed now, so when the wave hits you are already near your first revision. I view DX11 performance as a bonus, that's all. In about 9 months, DX11 performance will be a necessity.

So, let me know where I've gone wrong here. Let's brag up a part no one's seen, about things that won't be used, on OSes you won't use for years, because of perf seen in a DX10.1 game that's worthless, on a DX that was worthless, promoting AA whose improvements are disputable by your own words, while we wait another 2 years for this card to actually be worth having, while ATI came out and jumped the gun by 6 months, and nVidia is just right, only jumping the gun by 18 months, so you can go home and play 1 of maybe only 3 games in OpenGL on XP for perf reasons, which are still unknown.
Gotcha

1: ATI was early. They are trying to sell the 5000 series as DX11 cards while there is still a lack of titles and major performance questions. NVIDIA basically got a free look at pricing/performance prior to releasing their parts.

2: DX10.1 made no sense with DX11 on the horizon. It's an extra, period.

3: On your OS point, W7 usage is still behind Vista usage, and XP still holds ~60% of the market, as predicted. I expect a slow slide down to 20% usage over the next 18 months, maybe 1.5-2% a month from here on out. But that's still too large a market to ignore. Hence, DX9.0c with SM2.
 
1:"Rwar rawr rawr fermi is too little too late bwahahahh"

2:"noo from what i see it totally gonna rule guys stfu!"

1:"omg that's just nvidia PR bull ***"

2:"but even so there should still be a good amount of gain in performance from the previous generation"

1:"But what about when you compare it to the 5xxx line up"

2:"But i'm comparing it to the GT200"

Wait for third-party benchmarks and products to come out before you cast total judgment.

All we know is that it's a beast: it's huge, it has a *** load of transistors, and because it's large it's hard to manufacture (I assume that's what kept them busiest during this long wait), and because of the higher failure rate and all that, it will probably cost a ton of money at release. But come on, for bleeding edge, those people generally don't care too much about price.

As for my gut feeling about this: it will outperform the 5870 by a good margin but lose to the 5970, and then lose again on its initial price, not to mention an expected price drop, but in the end I think there will be good price/performance competition between the two by the end of the year. As for the 6xxx lineup, I'll at least wait for a paper launch, and then an in-depth article about the paper launch and what it means from people who understand *** before going gaga over it.

Just be happy if nVidia releases anything, because that means price/performance drops for everyone!
 
All wrong. Completely all wrong.

You're still failing to see the improvements we get from using anything other than DX10 going forwards, period. There's DX10.1 needing one less render pass for AA, for example. There's a post on here by another user who shows his findings that DX11 gives more fps, less so with tessellation, but more. No need to look far.

So now we're supposed to take all this non-info as good, pure info, yet it may not even have used DX10.1 in HAWX? I still think it's a fair argument, as using what a card brings is fair, and that includes any games that are out, unlike nVidia, whom you're defending for not using DX11 games.

So, your opinion is these games are crap, so no DX11. Well, that's your opinion, but it's all we have, and nVidia didn't use it; they failed us. Now everyone waiting for Fermi is praising Heaven, heheh, such a religious experience is Fermi.
Barely out of beta, but better than those trashy DX11 games.
I hope people play their Heaven for hours and ignore those DX11 games, as we all know that bungholio marks run on a first-revision bench are more important than actual games, which are obviously being put down for obvious reasons.
You can't get decent perf showings in 3DMark06 across different generations, let alone ATI vs nVidia, depending on what setups are used, where, and how, etc. etc. Which means nothing in real games, which nVidia avoided showing. And again, we have to take nVidia's word on their non-game bungholio marks? With no 3rd-party confirmation?

As for Charlie, what's that got to do with Fermi coming in when he said it would? Which you've somehow mangled into ATI jumping the gun? Like I said, let's see if it's A3 or not, and when they actually get here.

Eyefinity? When have I ever made a claim about lesser cards and its usage? Or did you mean that places like security, or casinos, or stocks won't like having 3 monitors per card? I understand you not understanding its usage, IMO, on lesser cards, but we're talking gaming here, not bungholio marks, not Eyefinity on lesser cards, which is there for other reasons.

As for giving their multi-monitor tech away, nVidia previously charged for it but was forced to give it away to stay competitive; yes, that's what I meant.

As for the G200 being expensive: with Fermi using that expensive GDDR5 (which you pointed out was costly), a wider bus, etc., and having less than 65% yields per wafer vs the 5870, you can bet it'll be expensive too.

As for DX11, amongst us here, and as important as it is, since it's you quoting AT on why Fermi is so late, though you twist that into ATI being early somehow, heheh, let us decide, or AT, or even nVidia themselves. And sure, you can downplay it, since nVidia still hasn't shown anything, which I've been pointing out all along, yet tessellation is sooo important in bungholio marks and not in crappy DX11 games, in your opinion.
Those performance questions would have been answered by nVidia benching one and showing or hinting at its perf, don't you think?

As for DX10.1, maybe you've forgotten the RV670. Yeah, that one. How long has it been? Not worth it? Better to do nothing and rename and rename and rename again, eh?
But hey, since it's here by ATI's grace, we'll use it in our PR stunt, where it gets all the gains in AA, won't mention it at all, won't have to really show any DX11 gaming perf, and keep speculation and wonder alive, and the hell with our fans?
Good job, nVidia, good job.

As for the OS numbers, there'll be new ones coming soon, and they pertain to gaming too; pretty nifty for a gfx forum, eh? W7 is closing fast, will soon beat out Vista, and XP will be close to only 40%.
 
If only XP had DX11 😍 I will keep dreaming about 200 MB memory usage, lol. Any PC that is not for gaming should have XP, that's it.
 
Now we are extolling the virtues of Eyefinity from the perspective of a bank or casino? LOL
Truth is, it's an almost useless, not-going-to-be-used feature for 98% of gamers.
Add in that ATI hasn't got the drivers working for the 3 popular OSes on 1 card.
The 5970 is working because there are 2 GPUs there.
Most others aren't getting it going. Add in the 100-dollar DisplayPort cable.
Yes, LOTS of people use two monitors now, but most don't even stretch their gaming across two, for many reasons, never mind 3. It's a feature about as valuable as having a motherboard that can do 4-way CrossFire.
 
http://www.anandtech.com/video/showdoc.aspx?i=3679&p=6

It appears that the $99 adapter is for those deep pockets with three 2560x1600 30-inch panels, lol. The normal adapter works with native resolutions up to 1920x1200. I got a Mini DP-to-DP adapter with the video card. I may get the LG 18.5 for $99. The Dell 2408WFP has native DP input. Now how come I received a Mini DP-to-DP adapter and reviews didn't, lol? It's a Sapphire 5970; Sapphire usually cheaps out on the accessories, lol.
 
If only XP had DX11 😍 I will keep dreaming about 200 MB memory usage, lol. Any PC that is not for gaming should have XP, that's it.
I think that any PC not used for gaming should run Linux. Why put up with Windows for no reason? :lol:

@All: Can't we all just wait for reviews? Then we can argue with substance instead of speculation.
 


Yeah, that's better if people know how to use it 😀 It took me a week to make the sound card work, lol, but once it's working, it will never crash.
 


The 5870 only uses its dedicated tessellator, while Fermi can use its 512 cores; this benchmark is good for Fermi because:
1. It exclusively features tessellation, meaning:
a. Fermi won't notice a performance hit due to using SPs
b. the 5870 will suffer due to using tessellation so much (it's a tessellation bench)
 

Actually, I haven't said anything about Eyefinity, other than to agree here or there with certain statements, and what I'm saying is very practical for lower-end cards, as they're cheap and more useful than most think.
http://forum.beyond3d.com/showpost.php?p=1383012&postcount=987

Just think of procuring a McDonald's contract with it; that'll sell a few cards, eh?
Security networks, financials, even doing work in the development of games, apps, etc.
Now, if you want to confuse me with what others have said: like I'm saying here, the reason why I haven't said a lot about Eyefinity is similar to what I'm saying about Fermi; we simply don't know enough yet. It's a good option, and may be more used than we currently think, or it may not.
Same with Fermi. What nVidia's released here isn't much, can't be used for clarity, and I presented a few ideas as to why these numbers can't be trusted to show what they're trying to show, which is ambiguous at best to begin with, besides all the spin.
Let's give it time, and again, as Ape says, hope for the best, expect the worst; it's always somewhere in between, unless they've really screwed the pooch, and I don't think they have. It's just not 2.33x better this, or 60% better that.
 


Agreed.

The 5870 only uses its dedicated tessellator, while Fermi can use its 512 cores; this benchmark is good for Fermi because:
1. It exclusively features tessellation, meaning:
a. Fermi won't notice a performance hit due to using SPs
b. the 5870 will suffer due to using tessellation so much (it's a tessellation bench)

Except tessellation is the hardest-to-compute DX11 function. Even on a lower scale, the extra work is going to cause a performance hit, so good tessellation performance will, in my opinion, give NVIDIA the more powerful card.
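
To put a rough number on why (back-of-envelope only, with illustrative figures of my own, not anything NVIDIA or AMD has published): uniform tessellation of a triangle patch at integer factor n produces on the order of n^2 output triangles, so the geometry workload balloons very quickly.

# Back-of-envelope sketch in Python; the patch count and factors are made up,
# and the n^2 amplification is an approximation for uniform integer tessellation.
def amplified_triangles(base_patches, tess_factor):
    return base_patches * tess_factor ** 2

base = 50_000                          # hypothetical scene patch count
for factor in (1, 4, 8, 16):
    print(f"factor {factor:2d}: ~{amplified_triangles(base, factor):,} triangles")
# Factor 16 turns 50,000 patches into ~12,800,000 triangles, which is why a
# fixed-rate tessellation/setup path becomes the bottleneck long before the
# shader cores run out of math throughput.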

We'll see in a few months.
 