GF100 (Fermi) previews and discussion

While the GTX 470 still sits between the HD 5850 and HD 5870, the GTX 470/GTX 480 will have next to no overclocking headroom without going to water cooling.

An overclocked HD 5870 can easily smoke the GTX 470 without requiring water cooling.

I wonder how Nvidia will combat this... Overclock + GTX 480 = say hello to the first reference water-cooling block on a graphics card?
 

The only reason I brought these links/benches in was because of who's behind them, which is in itself a surprise; otherwise, I'd already seen them and, like you, would have preferred to wait. It's only DK doing it that led me to post them here.
 
I think the Fermi architecture is a backward step from G200. Maybe not the architecture itself, but the clocks are all wrong, and nVidia had to sacrifice too much to stay within 300 watts.

So the 480 will be 10% faster than the 5870... that is a huge, huge backward step from the last series. Everything I have read says there won't be many 480 parts anyway, and that the 470 is the real production card.

The 470 might be OK, but I don't see any compelling reason to buy one over a 5850 or 5870. If it's really cheap then maybe, but I don't think it will be cheap.
 
J, not criticizing that, just trying to avoid falling into the trap of giving too much credit to the early leaked benchies from either camp.

I think if any of these numbers are even close to true, then Fermi has fallen short of the early PR-mongering and the 'Fermi will destroy' wait-baiting that has led to this point.

Even if the GTX480 ends up slower on average than the HD5870, I don't think that makes Fermi a failure, because like the FX5800 it is the start of a new generation. However, just like the FX series, this line of cards may be the failure that the future is built upon. It's still too early to tell, but the wide exposure yet tight lips is something rarely seen except ahead of such individual failures (FX5800, HD2900), so that speaks volumes by itself.

The thing is, it may be 10-15% faster than the HD5870, and may take the crown, belt, or whatever the current token is. But it is 50-60% bigger than Cypress, and costs much more than 50% more to make even if yields were similar. So as a competitive product it's at a great disadvantage, and will have trouble meeting the 10-15% price premium that the resultant performance would dictate (after the initial launch-week frenzy).
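To put that die-size argument in rough numbers, here's a quick back-of-the-envelope sketch. The wafer cost, defect density, and the ~334 mm² / ~530 mm² die areas below are illustrative assumptions (none of these figures are public), but they show why a ~60% larger die costs far more than 60% more per good chip:

```python
import math

WAFER_COST = 5000        # illustrative wafer price in dollars, not a real quote
DEFECT_DENSITY = 0.3     # assumed defects per cm^2, purely for illustration

def gross_dies(die_area_mm2, wafer_diameter_mm=300):
    """Approximate dies per wafer: wafer area / die area, minus edge loss."""
    radius = wafer_diameter_mm / 2
    return (math.pi * radius ** 2 / die_area_mm2
            - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

def good_dies(die_area_mm2):
    """Simple Poisson yield model: a larger die is more likely to catch a defect."""
    return gross_dies(die_area_mm2) * math.exp(-DEFECT_DENSITY * die_area_mm2 / 100)

for name, area in (("Cypress-class (~334 mm^2)", 334), ("GF100-class (~530 mm^2)", 530)):
    good = good_dies(area)
    print(f"{name}: ~{gross_dies(area):.0f} gross dies, ~{good:.0f} good dies, "
          f"~${WAFER_COST / good:.0f} per good die")
```

With those made-up inputs the larger die ends up costing roughly three times as much per good chip, not 60% more, which is the whole competitive problem.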

Considering the last 6 months of pomp and PR, 10-15% improvement would be a little underwhelming (especially compared to that 60% number touted by nVidia).
 
Well, you can never believe nVidia's 60%. I remember seeing claims like 70% or 30% improvement in game X in their driver updates, and I always wonder what setup they are using that gets that huge a performance boost.
 
More important to me is what happens down the road.
If its clocks are off by 20% or more and the 480s end up hot and rare, a new revision of Fermi should do quite well.
I see them wanting a 20% lead like last time, actually more considering the price drops, but it may be they're losing this lead as well.
 
The backlash against nVidia could be huge. Whilst memories can be short, they may not be short enough to stop some people from skipping the next nVidia cards after this release.

A lot of people said 'screw ATI' after the 2xxx series, and only after the 4 series were they even remotely considering buying anything other than nVidia, but the roles could be reversed if things do not turn out right.

One thing nVidia have got right is the smoke and mirrors. For all the rumours, no one can actually claim anything concrete about it other than what was in the white paper. Why that is, no one knows, but it will be interesting to find out, for the sake of future predictions about their PR department, if they still have one left after the release.
 


I've been trying to find the article that had it closer to 200W, but never mind. The thing is, I've often wondered how some of these sites get their measurements, because the numbers can differ quite a bit sometimes.
 



See, this is why I like/pick Xbit. Unlike all the sites that take maximum system wattage minus minimum system wattage and then attribute the difference to the card more or less at random, Xbit actually measures the power drawn through the PCIe slot and the PCIe connectors, so the card is measured in isolation. It gives us a better idea of the power the card is using, not the system.
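For anyone wondering what the practical difference is, here's a minimal sketch with made-up numbers (the readings and PSU efficiency are pure assumptions, just to show the mechanics of each method):

```python
# Hypothetical readings, purely to illustrate the two measurement methods.

# Method 1: wall-meter delta (system at idle vs. under load, measured at the AC socket).
psu_efficiency = 0.82                 # assumed wall-to-DC efficiency of the PSU
system_idle_w = 145.0
system_load_w = 395.0
card_estimate_w = (system_load_w - system_idle_w) * psu_efficiency
# This still folds in the extra CPU/chipset/fan load while gaming and a guessed PSU efficiency.

# Method 2: measure the card's own rails (the Xbit-style approach).
slot_3v3_w = 4.0                      # PCIe slot, 3.3 V rail
slot_12v_w = 55.0                     # PCIe slot, 12 V rail
connector_6pin_w = 70.0               # external 6-pin connector
connector_8pin_w = 95.0               # external 8-pin connector
card_measured_w = slot_3v3_w + slot_12v_w + connector_6pin_w + connector_8pin_w

print(f"Wall-delta estimate : {card_estimate_w:.0f} W (card plus everything else that ramped up)")
print(f"Per-rail measurement: {card_measured_w:.0f} W (the card in isolation)")
```

Same system, two quite different numbers, which goes a long way to explaining why the sites disagree.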

I agree about not caring much about power draw overall, but it is funny how much griping there was from nV fans about R600's power draw prior to its launch too. To me it depends on the person. In this segment most users/buyers don't care one bit about power draw; as long as it's faster after overclocking and the blood sacrifice to the gods of speed, that's what they'll buy. However, if it's only just ahead of the competition while drawing much more power and running hotter, it might be an issue, and higher pricing just adds to that; with that you lose the people who aren't after a monster rig, because it's not worth those drawbacks.

I think 300W is fine if the performance warrants it, but if it draws a lot more power for only a little more gain, that makes it a little less attractive overall for most slightly less hyper gamers.
 

Couldn't agree more, but at least those who have got a 1200w PSU to run their 7600GT SLi rigs will now have a card to look forward to.
 
I disagree; watts should define a card much more than how many GPUs it has.

The 480 just beats the 5870 while drawing 110 watts more, and the 480 loses massively to the 5970 at the same wattage.

It's clear nVidia has gone backwards on efficiency, die size, etc. They can't win another price war, and they can't beat ATI with such a gap in technical, design, and engineering execution.
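Just to make the watts argument concrete, here's a tiny perf-per-watt calculation using the ballpark figures being thrown around in this thread (the performance deltas and board powers are rumoured/assumed, not measured):

```python
# Rough perf-per-watt comparison; all numbers are assumptions from the rumour mill.
cards = {
    "HD 5870": {"relative_perf": 1.00, "board_power_w": 188},
    "GTX 480": {"relative_perf": 1.10, "board_power_w": 295},  # ~10% faster, ~110 W more
    "HD 5970": {"relative_perf": 1.45, "board_power_w": 294},  # dual GPU at roughly the same wattage
}

for name, c in cards.items():
    perf_per_100w = c["relative_perf"] / c["board_power_w"] * 100
    print(f"{name}: {perf_per_100w:.2f} relative performance per 100 W")
```

On those assumptions the GTX 480 delivers roughly 30% less performance per watt than the HD 5870 and still trails the HD 5970 at the same board power.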
 
Last time ATI had the process advantage, and it showed in power consumption until nVidia caught up on process, at which point they also caught up with ATI in perf/watt.
Now both are on the same process, but like the 2900 and the 280, both drew a lot of power until the next revision, which is what I expect for nVidia and Fermi this time too.
 


Yep, although really, the GF7600GT was a rockin' card !! 😀

I saw a PC power & cooling 750 Silencer on sale at the local PC shop the other day for $79.99 and I thought, Oooh, I could really build a nice rig around THAT !!
 
I disagree; watts should define a card much more than how many GPUs it has.

The 480 just beats the 5870 while drawing 110 watts more, and the 480 loses massively to the 5970 at the same wattage.

That's fine, and most people would agree with you; that would be my personal pick between the two as well. However, what you fail to understand is that the target market for this, especially those Quad-SLI'ing them with a Flux Capacitor PSU, DOESN'T care, and might only start caring when they blow their wall socket and catch fire. For anyone with a 20A circuit and the desire to 'Go Fast!', they will likely be swapping out their dual HD5970 rig for a quad GTX480 rig if the GTX480 is 10-15% faster (when liquid cooled, overclocked and dipped in Unobtainium :sol: ). That's just the thing about this market segment. It's not the efficient BMW/Merc diesel end of the marketplace; it's the Merc AMG / M Power segment that wants MORE Powah !!
 
I know it's been talked about earlier in the thread, but I'm really interested in the CUDA/Stream ability of the new cards. I recently purchased a Winfast PxVC1100 card with the SpursEngine to speed up my Blu-ray transcoding. I was a little shocked to see the same performance from a CUDA-capable GF2XX series card with the TMPGEnc Xpress software.

I've been an ATI fan for years now (mostly because of my early adoption of the HTPC, and I believe ATI is superior there), but I felt a little bad that I had to spend the $$$ when I could have been using a CUDA card for great gaming AND video editing.

I really hope the cross-platform language takes off and we get away from this proprietary stuff. I would pay a premium for a 10% improvement in games plus the ability to transcode video on the same card, even knowing the gaming tech wasn't the best and even if the TDP wasn't so great.

BTW, the Winfast card is great. A product I feel was worth the premium $$$.
 
You feel bad that you bought an ATI card because you could have used a CUDA card for great gaming & video editing?
Since when does CUDA have half a penny to do with gaming?
There are/will be WAY more video transcoders & editors based on ATI Stream, OpenCL, and DirectX Compute than on CUDA.

The Winfast card is crap; I simulated Tom's benches and my OC'd 5850 is within spitting distance of it.
 
I realize CUDA doesn't have anything to do with gaming. But if a card does BOTH well, it's an added bonus.

So you can actually use your 5850 for transcoding video? What are your real times for transcoding a 1.5 hour h264 or VC1 movie @1080p? How much CPU does it use while editing? Can it do that while being powered by a floppy connector?

I had no idea that card was capable of doing all that.
 
DirectX Compute Shaders (DX11 only) are MUCH more worthwhile than CUDA.
Use a specific clip, as different videos/movies are WAY different.

I used a 15-minute, 20GB FRAPS video (FPS1 codec, 1920x1200) and transcoded it to a 1920x1080 .avi file.
It took 5 minutes.
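If anyone wants to compare against a different clip, a quick way to normalize that kind of result (the numbers below are just the ones quoted above):

```python
# Quick arithmetic on the transcode result above: 15 min / 20 GB source done in 5 min.
source_minutes = 15
source_gigabytes = 20
transcode_minutes = 5

speed_vs_realtime = source_minutes / transcode_minutes                       # 3x real time
source_throughput_mb_s = source_gigabytes * 1024 / (transcode_minutes * 60)  # ~68 MB/s of input

print(f"~{speed_vs_realtime:.1f}x real time, ~{source_throughput_mb_s:.0f} MB/s of source consumed")
```

Speed relative to real time and MB/s of source chewed through are at least comparable across clips, even when the clips themselves are WAY different.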
 
In my situation I'm looking for the ability to take the .m2ts files from a Blu-ray and transcode the video, with minimal loss of quality, back to an MPEG-2 or AVCHD file format. I didn't think the Avivo converter could do that. I have no doubts about the power of the Stream processing on ATI cards; I simply don't know of any software that can do what I need with their hardware.

I am this close <pinches fingers> to purchasing a 5850 (and adding a second card later), so I would love for there to be a program that could do what I need on ATI hardware.
 