GF100 (Fermi) previews and discussion

Page 33
Status
Not open for further replies.
Depends what you mean by independent of the shaders. The polymorph engine needs to send data back to the shaders after it has completed one of the operations it handles, or at least that is my basic understanding.

If an SM is not working on something that is being tessellated, then the tessellators, to my understanding, are not going to be used.

However, how this will affect its performance when done in conjunction with other operations I do not know, and won't know until a reviewer tests it systematically.
 


Exactly. ATI went a route that is easy for us to follow: take two GPUs that would have been over some engineering limit (and who even thought about that then), then undervolt and underclock them to make a new model. That was not what the GTX 295 was, and the next dual-GPU card from Nvidia may also be two modified cores, to at least come closer to meeting mainstream guidelines. I've gone back and read the 5870 and 5850 launch articles and NO, the 5970 was never highly anticipated or even mentioned then, never mind announced. I don't expect Nvidia to do anything different. To talk about needing to put two of your new cores on one card, right out of the gate, would blur the point of the new core's power and feature increase over past products.
 


It is the same for ATI though. Both implementations require some information to be computed by the shaders, as far as the number of samples to compute. Though it is possible this slows down the general computation of the shaders more for Nvidia than for ATI. However, all of the actual tessellation is done in the polymorph engine, just as it is on ATI's tessellator. This is something I'm very interested to see investigated.
 


As did the nVidia guys with regards to the GTX295.
So really, don't pretend it's just one way.

The reality is both arguments are valid depending on HOW you are measuring.

- Raw performance: doesn't matter single vs. dual, just top dog, though configuration may play a role (i.e. four GTX480s vs. two HD5970 Ares vs. four HD5870s)

- Single Card performance, doesn't matter dual or single GPU.

- Single Chip performance, but that usually matters more for any potential application of that chip; despite the expectation that there will be one, right now there is no dual-GPU Fermi solution, despite the statement that one would be shown at CeBIT, and now at Computex, which is in June.

- Performance/$

- Performance/W

- Performance/mm2

And considering the only benchmark we have from nVidia is a synthetic benchmark related to no game out there, it's a little early to be doing any kind of ranking, or worrying about single versus multi-GPU solutions.

It's still unknown if a single GTX480 will beat a single HD5870 (stock or OC'ed) at this point, so that whole discussion may be a moot point.
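The point about the list of metrics can be made concrete with a quick sketch. All numbers below are made up purely for illustration (they are not real GTX480/HD5970/HD5870 figures): a big chip can win on raw performance while losing on every per-unit metric.

```python
# Sketch: the same two (hypothetical) cards rank differently depending
# on the metric chosen. Numbers are invented for illustration only.
cards = {
    "big_single_gpu": {"perf": 100.0, "price": 500.0, "watts": 250.0, "mm2": 530.0},
    "small_gpu":      {"perf": 85.0,  "price": 380.0, "watts": 190.0, "mm2": 330.0},
}

def rank_by(metric):
    """Card names sorted best-first by performance per unit of `metric`."""
    return sorted(cards, key=lambda c: cards[c]["perf"] / cards[c][metric],
                  reverse=True)

# Raw performance crowns the big chip...
print(max(cards, key=lambda c: cards[c]["perf"]))
# ...but perf/$, perf/W and perf/mm^2 all crown the small one.
print(rank_by("price")[0], rank_by("watts")[0], rank_by("mm2")[0])
```

So "which card is best" genuinely depends on which of the bullet points above you measure by.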
 


Well if this is to be a gaming system, you're better off not blowing your money on an i7-930, GA-EX58-UD3R and Triple-Channel RAM.
i7-930: $295 (newegg) + GA-EX58-UD3R: $189 (newegg) + 6GB OCZ Platinum DDR3-1600 (tigerdirect) = $704!
DAMN SON, THAT'S A PANTLOAD!!! LOL

This will give you superior gaming performance (For WAY LESS!):

CPU: AMD Phenom II X4 965BE - $190 (tigerdirect)
MOBO: MSI 790FX-GD70 - $168 (newegg)
RAM: 4GB OCZ Obsidian DDR3-1600 - $110 (-$30 mail-in-rebate=$80) (newegg)

Total = $468

Now anyone who is a true gamer will tell you that gaming NEVER requires more than 4GB of RAM and it's primarily the vidcard that runs games. As long as the CPU doesn't bottleneck the vidcard, perfect gaming can easily be done. The Phenom II X4 965 is a 3.4GHz AMD 2nd-Generation Quad-Core and will NOT bottleneck ANY video card on the market. Not even an HD 5970. Why would you pay an extra $266 and get no gaming benefit? Anyone here who really knows about gaming hardware would tell you to spend only enough on the CPU to stop it from bottlenecking the vidcard (and to give you nice OS speed, but the 965BE does that too..lol). Once you have a non-bottlenecking CPU, throw EVERYTHING you can at getting a better vidcard. You have an extra $266 this way. That means you'll only be $14 shy of already owning a Radeon HD 5850!!!

VisionTek Radeon HD 5850 - $280 (ncixus.com)
http://www.ncixus.com/products/44144/900297/VISIONTEK/
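For what it's worth, the savings arithmetic above checks out. Prices are as quoted in the post; the Intel RAM price is inferred from the quoted $704 total, and the $266 figure assumes you collect the $30 RAM rebate:

```python
# Sanity-check of the build-cost arithmetic quoted above.
intel_build = 295 + 189 + 220         # i7-930 + GA-EX58-UD3R + 6GB kit (from the $704 total)
amd_build = 190 + 168 + (110 - 30)    # 965BE + 790FX-GD70 + 4GB kit after $30 rebate

savings = intel_build - amd_build
print(intel_build, savings, 280 - savings)  # 704 266 14 -> $14 shy of the $280 HD 5850
```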

This brings me to my last point, the motherboard. The one you're looking to pay $190 on has only 2 PCI-Express slots! AIEEE!!! The MSI 790FX-GD70 has 4 PCI-Express slots and is set up for overclocking already, so that Gigabyte can't compete! And just like the Gigabyte board, you'll be able to run 2 Radeons at FULL x16 speed! I just think you'll be WAY happier by saving enough money to take a weeklong trip to the Caribbean...lol

Good luck pal! :sol: :hello:
 


You know Notty, there's morons in the ATi camp too, believe me, I know! LOL

I never complained about a card being 2 GPUs. Hell, ATi and nVidia have both been doing it for years! It's just silly. Whatever does the job, does the job. If someone wants to whine about the other guy having more gpus on the card, they should just shut up and put more on their card. That's really all that makes the difference.

Good on ya! :sol:
 


The 5970 has 2 cores. The Fermi has around 500. WTF are you talking about?
 


if you count fermi cores like that then you have to count ATI cores as their shaders, which would be 1600.

the number of gpu's argument is just as silly though. why should the number of gpu's matter?

when intel's dual core was beating amd's single core, was that an unfair comparison? is it unfair to compare amd's current 12 cores against intel's 6 cores? do you think people who buy these server chips care about that, or do they care about the best price-performance or the best performance-per-watt?

that is what matters. ATI has a huge performance-per-watt lead; that means if they bothered to make chips the same size as fermi, they would blow fermi away.
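That scaling argument is simple arithmetic: at the same power (or die-size) budget, a perf/W lead translates directly into an absolute performance lead. The efficiency numbers below are invented for illustration, not real Cypress/Fermi specs:

```python
# Hypothetical perf/W figures -- illustration only, not real chip data.
perf_per_watt_lead = 0.55   # efficient small-die design
perf_per_watt_trail = 0.40  # big-die design

power_budget = 250.0  # same board-power ceiling for both scaled-up chips

# At an equal power budget, the efficiency lead becomes a raw-perf lead.
print(perf_per_watt_lead * power_budget)   # 137.5
print(perf_per_watt_trail * power_budget)  # 100.0
```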
 


They don't care at all. That was my point. It makes no difference whatsoever to me HOW the power is gained. Consider this:

GeForce 9800GX2
Radeon HD 3870X2
Radeon HD 4850X2
Radeon HD 4870X2
GeForce GTX 295
Radeon HD 5970

back and forth, back and forth... neither side is innocent so who cares? I sure as hell don't! :sol:
 


Well, as you said, IF this is to be a gaming machine. And well, I will be using it for gaming but also for encoding and other things.
But most importantly I want a very future-proof and upgradeable system, since I am planning on adding another video card as soon as there is a need. And as you said, an i7 920 or 930 will never bottleneck my video card, so that leaves plenttyyyy of room for upgrades there. And well, that RAM will last me a while too, I suppose (I have 4GB now, so I know I don't really need it). Just a question: do you think I should get a better mobo, or would the one I mentioned do fine?
 


Yes, that's why I made sure I said for gaming because most people who build rigs like that, that's all it's really for. Absolutely I believe that motherboard will be more than adequate and the RAM, well I'd take the 6 for now and if more is needed later, add it later. No point in spending the extra $$$ until you're sure you have to right? I think that if you're not a HARD-CORE, DIE-HARD gamer which you certainly don't seem to be, I'd stick with either that motherboard or one that's even cheaper like this one:

ASRock X58 Extreme - $160
http://www.newegg.com/Product/Product.aspx?Item=N82E16813157163

I've always had great luck with ASRock. They used to be the OEM division of ASUS (hence the "AS") and they have great quality with great prices. They're also very innovative too. Check this out:

http://www.asrock.com/mb/overview.asp?Model=4COREDUAL-VSTA

I had this board YEARS ago. It really helped me make the change from a Pentium 4 to a Core 2 Duo. It allowed me to change my CPU and mobo without having to change my vidcard and RAM right away. The board functioned so well and I would still use it today except that it only holds 2GB of RAM which isn't enough for me so I put a Pentium Dual-Core on it and gave it to my cousin to play with. It's still working to this day. I think you'd be pretty satisfied with it. I mean let's face it, an i7-930 and 6GB of RAM with a high-end vidcard? We'd be satisfied with ANY motherboard as long as it didn't toast itself! :sol:
 
Daedalus, I'm not saying people don't buy cards based on price vs perf, I'm well aware they do. I'm not trying to just dismiss the 5970 or say it is a crappy card, it is a pretty cool piece of hardware.

What I'm saying is that if you look within single generations of graphics cards, even within a couple generations, a dual-GPU card will beat a single GPU card every, single, time. I defy you to prove that statement wrong. I'm also saying, that if nVidia can produce a SINGLE GPU that can come close to matching a dual-GPU solution of the same generation, that is a phenomenal achievement. You can't argue that, it is just the way it is.

I'm not trying to argue which will sell better or be better priced, I'm just arguing pure performance, and against Avro's apparent thinking that a 480 should somehow magically be able to beat a 5970..... and that nVidia is doomed if it can't.

Whether a single-GPU solution or a dual-GPU solution is better and is the future of the graphics industry, I do not know, but that is not within the scope of my argument, so I'm not sure why you would bring it up?



..... I hope to God you are trolling and not serious, no one deserves to be that retarded. The 480 will have somewhere near 512 STREAM PROCESSORS, but it will have ONE GPU. The 5970 has TWO GPUs, and is essentially two 5870s Xfired, each of which has 1600 STREAM PROCESSORS. Not in this world or any other world should a 480 be able to beat a 5970.
 


The 4850X2 came after the 4870X2 & the 3870X2 is beaten by the 9800GX2.
It's been X1950XTX -> 7950GX2 -> 8800Ultra -> 3870X2 -> 9800GX2 -> 4870X2 -> GTX 295 -> 5970.
 


X1900XT vs GF7900GX2, and arguably the X1950XTX was equal or better than the GF7950GX2, but definitely the first one.
And the HD3870X2 vs GF8800U was pretty close too at times.


The issue that comes into play for the health of nV, since you reference the 'future of nV' statements made by AA, is that, for pricing, if nV fails to make the Fermi card for less than the cost of making an HD5970 (which is still unknown but worrisomely possible), then they run into a competitive disadvantage that would allow AMD to price the HD5970 below the GTX480, which could greatly affect their return on R&D and the money they have for R&D to improve the Fermi design.

It matters from a production standpoint more than a performance one, because beyond the fanbois, as already mentioned, people will buy what gets them the framerate they are looking for, and for many people in this segment that would mean multiple cards, let alone GPUs, so the single vs. dual GPU question means very little to them as long as it gets the job done.

The issue becomes the two extremes (which were experienced with the R600 and GF7), where either the new pre-crowned king comes to the throne slower than the other single card, or the dual-card/GPU solution is slower than the single solution.

We'll need to wait for objective and wide-ranging benchies, but I have a feeling the GTX480 falls somewhere in the middle of the two extremes, though neither is out of the question either, based on all the shenanigans and fake leaked benchies.
 

Not only is the power limit a problem, but so is the trouble of fitting a 4 slot cooler in most cases.
 


The X1900XT did not beat the GF7900GX2.... the GF7900GX2 was released months after the 1900, which is fine because that's the same situation as now, so it is still comparable, but it definitely beat the 1900 in performance.

I suppose, though, that if a 480 merely came close to a 5970 it wouldn't be THAT phenomenal of an achievement, so I'll rescind that statement. But if it comes very close and uses 40W+ less power, that is pretty awesome, and if it can MATCH a 5970, THAT is phenomenal.

I agree with the rest of what you said though 😛 Nor have I ever disagreed with it.
 
Rush, it doesn't matter if Nvidia's single GPU can come close to the 5970 (which it can't). If it's the same price or more for less performance, who would care?
 
The people who don't care for dual-GPUs-on-one-card solutions. I know I don't care for them and their issues; hell, I barely care for the prominent usage of 2-slot cards.

I just dislike cumbersome things in general.

Why it doesn't matter if the GF100 beats the 5970 is that AMD would still be ahead on yields and profit margins for subsequent silicon, even more so considering they sell you the defects as the 5830.
 
Yeah, I'm fine with dual GPUs, but the only thing is that if I had a 5870 (which has nice perks like DX11 and Eyefinity) I could do CrossFire in the future, as my mobo is CrossFire only.
 


Actually it did. You need to re-look at the extensive benchmarks, not just the 3Dmarks. The GF7900GX2 had terrible scaling and was downclocked.

The GF7950GX2 improved on both of those shortcomings, and was a better card and more likely candidate for the 2>1 especially at extreme settings (although still capped by memory issues), but even then could be a tie when proper AF was enabled (no floptimization);

http://www.xbitlabs.com/articles/video/display/asus-en7950gx2_16.html#sect0

As we wrote earlier, the GeForce 7950 GX2 had used to be the fastest card in most of our benchmarks at the old quality settings, but doesn’t look so confident at the High Quality settings, i.e. without the optimizations. Its performance has fallen to the level of the Radeon X1950 XTX or even lower in such games as Far Cry, Half-Life 2: Episode One, Serious Sam 2, Splinter Cell: Chaos Theory, Tomb Raider: Legend, TES IV: Oblivion, Titan Quest, X3: Reunion, Age of Empires 3 and in 3DMark06. This is 50% of the applications we use to benchmark graphics cards! And although the tri-linear filtering quality provided by Nvidia’s flagship product has grown considerably, the overall image quality (at our settings) is still lower than what the Radeon X1950 XTX provides in its high-quality AF mode.

but it definitely beat the 1900 in performance.

I think you're confusing the two; the GF7950 was arguably faster, and I wouldn't disagree with either side of that equation since they are close and it depends on the game and the measure IMO; but the OEM GF7900GX2 was not as good, and more problematic than you remember, I think.

To me the main issue for nVidia will be the BIG #s that they have to overcome;

Transistor count, Die Size, Yield/Cost/Price (all are related IMO), Power consumption, and last but not least (although hardest to measure) PR Hype Size.

But it can be done, and if it lives up to the hype, then the other items will be easy. 😉


-Edited for some typos that made it hard to understand, so much for typing at work.-
 