Nvidia’s GF100: Graphics Architecture Previewed, Still No Benchmarks

I've always been leaning more towards Nvidia, but if GT300 ends up being less bang for the buck, I will definitely buy the HD 5870. I don't feel like making the same mistake as I did with the 9800GTX - I bought it, and two months later the GTX260 became available for the same price.
 
[citation][nom]randomizer[/nom]Do you? The end of an NDA does not mean every detail has to be divulged. You can still only provide the details that have been given to you. If NVIDIA don't hand out the review samples, you can't benchmark them. It's not rocket science![/citation]

The problem is they can't even hand out specs.
 
[citation][nom]liquidsnake718[/nom]Finally some hard info on the gf100. Sounds to me like the card will be truly expensive yet worth it for those die hard fans. I am looking forward to getting one to replace the 5850 as I am not happy with the physics engine that catalyst and ATi impliments compared to PhysX.[/citation]Coming from the guy that thinks PhysX makes Crysis better...
 
To the uninitiated gamer who might not know what it takes to bring a popular title to market, it’s really just important to know that Nvidia’s efforts will, ideally, enable more efficient development and better on-screen effects

Does this mean the Nvidia card will look better?
 
DANG Server !!

I had a longer comment, but it got 'eaten' when the site went unavailable.

Anywhoo, this kinda reminds me of the GTX295 paper launch: just as AMD/TSMC are supposedly finally able to meet HD5K demand with higher yields, these 'look at our architecture' paper launches hit the web to try and delay people from buying the ATi cards currently on the shelves. Very similar to the 'wood-screw Fermi' at GPU Tech and the GF100 white-paper 'launches' right after the HD5870 launched.

More slides, more paper, more rumours, but very little substance.

There's little chance I'll buy a Fermi (gen1)-based solution, since I doubt I'll see anything mobile for another year, but I am interested for work, since it would likely be good for geomatics and wave/signal propagation analysis. However, it's still looking like a future product, and I'm not sure whether AMD's Heck-a-core thing or Intel's commercial Larrabee won't be a better long-term bet.
 
To aggressor: you sort of don't have a point 😀, as the GTX260 was also from Nvidia. Your example just means you bought at the wrong time or the wrong product, not the wrong brand.

To TheGreatGrapeApe: but the 295 did launch and it was THE apex graphics card for almost a whole year.
---

About the article: This is good news, but it looks like Nvidia still has lots of work to do. Both red and green BUYERS (i.e. we consumers) should be rooting for Fermi's arrival, as it will help competition. Fermi and Nvidia failing would be terrible news for the whole industry. I don't really understand all the sarcasm and mockery people have thrown at Fermi. Unless maybe cruel Nvidia has hurt your feelings by not working hard enough to deliver sooner? ALL those tax dollars going into Nvidia, and this? How dare they!

... Or maybe you're afraid Fermi will make your beloved 5970 look like a sissy!!

Lighten up! 😛
 
[citation][nom]TheGreatGrapeApe[/nom].......and not sure if AMD's Heck-a-core thing or intel commercial Larrabee won't be a better long term bet.[/citation]

You can forget about Larrabee...

".....claimed this was the first public demonstration of a single chip solution exceeding one TeraFLOPS. He pointed out this was early silicon thereby leaving open the question on eventual performance for Larrabee. Because this was only one fifth that of available competing graphics boards, Larrabee was cancelled "as a standalone discrete graphics product" on December 4, 2009."
 
"It won’t bitstream high-def audio"
THIS. IS. COMPLETELY. USELESS.

HDMI has allowed 8 uncompressed 192kHz/24-bit LPCM channels since version 1.0. The point was to stop requiring a new receiver whenever a new codec arrives: upgrade/replace your player instead (much cheaper than a receiver).
"HD codec" bitstreaming was a feature asked for by receiver manufacturers so they could sell us new models. The HD codecs are LOSSLESS: decoding in the player or in the receiver makes no difference, since there is NO LOSS.
And it actually REMOVES FEATURES: with bitstreaming, you can't mix in bonuses like commentaries, since that is supposed to be done in the player using two compressed soundtracks, and bitstreaming can send only one.
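For scale, here is a quick back-of-envelope on the raw data rate that 8-channel 192kHz/24-bit LPCM actually needs; a minimal sketch that ignores HDMI packet/framing overhead, using only the figures quoted above:

[code]
# Raw payload rate for 8 channels of uncompressed 192 kHz / 24-bit LPCM
# (framing and packet overhead on the HDMI link are ignored here)
channels = 8
sample_rate = 192_000  # Hz
bit_depth = 24         # bits per sample

payload_bps = channels * sample_rate * bit_depth
print(f"Raw LPCM payload: {payload_bps / 1e6:.1f} Mbit/s")  # ~36.9 Mbit/s
[/code]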
 
[citation][nom]TheGreatGrapeApe[/nom]That's the CONSUMER solution that was cancelled, I'm talking about COMMERCIAL, try to keep up.[/citation]

My bad :)
 
[citation][nom]falchard[/nom]One thing I wonder is if nVidia finally pushes forward with this architecture, does this mean developers will finally start utilizing some of the tech ATI has had in its cards for generations? For instance, will they utilize more efficient poly rendering effectively making ATI cards perform 300% faster in drawing polies and make every consumer nVidia card before the GF100 moot?Also will they adopt a naming convention that finally makes sense? Up to 9000, reset, skip double digits and 100, go straight to 200. Now go back to 100. I mean seriously who comes up with these names?G80, G92, G200, GF100..[/citation]
You're mixing up code names for GPU architectures (G80, G92, GT200, GF100) with product naming such as GeForce 8800 GTX or GTX 280. Nvidia has not announced the product naming scheme for the GF100 architecture yet. I'm not exactly sure what the complaint is in this regard, but I think it may stem from this confusion. I've never had a problem following the naming conventions of graphics card products, and with the exception of the latest generation it seems to be a pretty logical progression in product numbering (GeForce 6800 Ultra, 7900GTX, 8800GTX, 9800GTX, GTX285). As a matter of fact, I would say that over the long term ATI's product naming convention has been more confusing than Nvidia's, though still very manageable (Radeon 9800XT, X850XT, X1950XTX, HD2900XT, HD3870, HD4870).
The numbering used for GPU architectures, on the other hand, has been a bit unpredictable, but I don't have much of a problem with this, as those names were never meant to be used commercially and shouldn't really have to make sense outside of Nvidia's engineering and design teams.
 
[citation][nom]dingumf[/nom]The problem is they can't even hand out specs.[/citation]
Call up NVIDIA and complain. They control what is and what isn't given to reviewers, and what is and isn't still under NDA.
 


It did launch, but weeks after the [strike]reviews[/strike] errr... previews appeared on Dec 18th, just before Xmas, to cut into Christmas and Boxing Day sales. :pfff:
Same thing here: when there's nothing to launch while the competitor can finally move product, pull the old IBM tactic of stalling buyers with FUD and promises of the 'next best thing'. Now all AMD has to do is send reviewers 'roadmaps' and 'wafer' shots of their 28nm GPU from GF, and then people can wait another six months for the 'better-than-Fermi' solution, to which nVidia replies with an architectural whitepaper on Fermi-2, and then the two companies can stop releasing actual GPUs and just focus on PowerPoints and review guides, which they both seem better at producing than GPUs. :lol:

BTW, Fermi wouldn't help competition nearly as much as a successful version of Larrabee or any 3rd party entrant would.
 
[citation][nom]TheGreatGrapeApe[/nom]Chris, some 'leaked' 'internal' nV slides recently appeared with THG results from the HD5970 review, since I can't ask the question I would like yo about that (there's no way you could answer if true), I'll simply ask, were you aware of this? http://news.softpedia.com/newsImag [...] ace-3.jpg/Slight tweaking of the RE:5 results (likely because they didn't point in the right direction for the existing cards) And Charlie's recent 'Pro-nVidia' article is somewhat telling about the possibility of scaling downward, what's your opinion on it if you can say, other than "Charlie's just being Charlie". http://www.semiaccurate.com/2010/0 [...] facturable[/citation]

Hey Great!
Please feel free to send me a PM with whatever questions you have. I'm not quite following the post, as I don't see any of my 5970 benches at the link provided. I'll be checking my forum account to better answer what you're wondering!
 
[citation][nom]AWx[/nom]"It won’t bitstream high-def audio"THIS. IS. COMPLETLY. USELESS.HDMI, since 1.0, allows 8 uncompressed 192kHz/24b LPCM channels. The point was to stop requiring a new receiver when a new codec arrive : upgrade/replace your player (much cheaper than a receiver). "HD Codec" bitstreaming was a feature asked by receiver manufacturers so they could sell us new models. HD Codec are LOSSLESS : decoding it in the player or in the receiver won't make a difference since there is NO LOSS.And this actually REMOVES FEATURES : with bitstreaming, you can't mix bonuses like commentaries since this is supposed to be done in the player using 2 compressed sound tracks and bitstreaming can send only one.[/citation]

Not true. Talk to the guys who sell software decoders. They cannot send 24/192 over HDMI. The most that an LPCM output will do is 16-bit / 48 kHz due to DRM issues. If you want the untouched signal, you have to bitstream it. Now, admittedly, you have to LOOK to find a Blu-ray that's higher fidelity, but for the samples in the lab, I've shown this in action already.
 
[citation][nom]jennyh[/nom]If a doubling over the 285 is actually true then it's pretty impressive performance. Had it not been Nvidia's own benchmarks I might even believe it.[/citation]
They just ran the in-built FC2 benchmark like most reviewers would...
 
Where is this 2x performance coming from, then? According to the FC2 benchmark, the scores were...

GTX285: 51 FPS
GF100 part: 84 FPS

Did I miss new benchmarks? Or is it in the details, i.e. the GF100 will be 2x as fast in 8xAA situations? If so, it's a lot less impressive than a simple 'twice as fast.'
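As a quick sanity check on those two figures (using only the 51 and 84 FPS numbers quoted above; the variable names are just for illustration):

[code]
# Speedup implied by the quoted Far Cry 2 numbers (figures from this thread, not new data)
gtx285_fps = 51.0   # GTX 285, built-in FC2 benchmark
gf100_fps = 84.0    # early GF100 part, same benchmark run

print(f"Speedup: {gf100_fps / gtx285_fps:.2f}x")          # ~1.65x, short of 2x
print(f"FPS needed for a true 2x: {2 * gtx285_fps:.0f}")  # 102 FPS
[/code]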
 
I didn't pick up on the doubling-performance part of your previous post (skim-reading through the comments doesn't always work 😛). I just saw you mention that they were NVIDIA's benchmarks. The 2x thing is just marketing, probably from this slide: http://tpucdn.com/reviews/NVIDIA/GF100_Fermi_Architecture/images/GF100_49_small.jpg

So yeah, at 8x AA. I'm not really interested in gaming performance myself. I want to see what it can do in the GPGPU world, where NVIDIA has been blowing their own horn for the last year. Give me ray tracing numbers! :fou:
 
[citation][nom]randomizer[/nom]They just ran the in-built FC2 benchmark like most reviewers would...[/citation]

With non-finalized clocks and God knows what other optimization gimmicks. I'm sure Fermi is faster than the HD5870, but saying it's significantly faster is most likely PR BS. Nvidia is just trying to hurt HD5x00 sales with these PowerPoint info bits. When it finally arrives, it will be hot and very expensive and, yes, the fastest single-GPU card for a short while. That's why ATI is working on a Cypress refresh as we speak.
 
[citation][nom]scrumworks[/nom]With non-finalized clocks and god knows what other optimization gimmicks.[/citation]
Obviously the specs are not finalised, but it's unlikely to get any slower, because if it couldn't run at the clocks used during these benchmarks then... well, it wouldn't have completed the benchmarks. There were no optimisations; the guy from Hardware Canucks managed to replicate the GTX285 results (also tested by NVIDIA for comparison purposes) to within a few percent with the same hardware configuration. He also said that he, and others at the demo, saw the reps entering the settings for the benchmark run. The results are legit, but most likely not final.

[citation][nom]scrumworks[/nom]I'm sure Fermi is faster than HD5870 but saying its significantly faster is most likely PR BS. Nvidia is just trying to hurt HD5x00 sales with these powerpoint info bits.[/citation]

You're looking at the data; it's faster. Competing with the HD5970 is going to need a drop in power consumption, though, because two of these GPUs are going to blow the 300W envelope. Of course they're trying to hurt sales; wouldn't you in their shoes?
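For context on that 300W envelope, here is a rough sketch of the usual PCIe add-in-card power budget; the per-GPU figure is a purely hypothetical placeholder, since final GF100 board power hasn't been published:

[code]
# Rough sketch of the PCI Express add-in-card power budget behind the "300W envelope".
# Slot/connector limits are the commonly cited PCIe values; assumed_gpu_power is a
# hypothetical placeholder, NOT an official GF100 figure.
SLOT_W = 75        # power available from the PCIe x16 slot
SIX_PIN_W = 75     # 6-pin PEG connector
EIGHT_PIN_W = 150  # 8-pin PEG connector

card_budget = SLOT_W + SIX_PIN_W + EIGHT_PIN_W  # 300 W for a single card

assumed_gpu_power = 200  # hypothetical single-GPU board power in watts
dual_gpu_estimate = 2 * assumed_gpu_power

print(f"Card budget: {card_budget} W, dual-GPU estimate: {dual_gpu_estimate} W")
print("Over the envelope" if dual_gpu_estimate > card_budget else "Within the envelope")
[/code]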
 
[citation][nom]randomizer[/nom]Obviously the specs are not finalised, but it's unlikely to get any slower, because if it couldn't run at the clocks it was at during these benchmarks then... well it wouldn't have completed the benchmarks. There were no optimisations, the guy from Hardware Canucks managed to replicate the results within a few percent for the GTX285 (also tested by NVIDIA for comparison purposes) with the same hardware configuration. He also said that he, and others at the demo, saw the reps entering the settings for the benchmark run. The results are legit, but most likely not final.You're looking at the data, it's faster. Competing with the HD5970 is going to need a drop in power consumption though because two of these GPUs is going to blow the 300W envelope. Of course they are trying to hurt sales though, wouldn't you in their shoes?[/citation]

That could be true, but this could also be a cherry-picked part with higher clocks than we're likely to see on the final card. If you were Nvidia, would you use the best-demoing part or not? 😉
 