Nvidia Hints That Kepler is 3x Faster Than Fermi


bluekoala

Distinguished
Feb 8, 2008
[citation][nom]ubercake[/nom]I'm sure part of it is hype, but those of us who've been turned off to amd due to their junk drivers don't mind waiting at all for what Nvidia has in store for us. My two GTX 580s are more than capable for current games, but if the future games demand an upgrade... my money's on Nvidia. And I'm sure it will cost a little more, but I find good drivers equal a lot less wasted time.[/citation]
I've said it before and I'll say it again:
No video drivers are perfect.
When your drivers are problematic:
With AMD, you're usually stuck waiting for new drivers.
With Nvidia you're usually stuck having to roll back drivers.

After tinkering with computers for nearly two decades, I can conclude that video drivers are by far the most complex. Both camps have their own driver issues, and the one company with the worst drivers of all is Creative Labs.
 

A Bad Day

Distinguished
Nov 25, 2011
[citation][nom]masterjaw[/nom]3x times faster? Hopefully, not 3x hotter too. Unless they're planning to equip their cards with LCLC out of the box.[/citation]

I would love to see that for Nvidia's and AMD's dual-GPU cards. You can only cram so many fans and so much heatsink into that little slot before your card needs three PCI slot spaces.
 

alextheblue

Distinguished
[citation][nom]BigMack70[/nom]Nvidia isn't really hinting that Kepler is 3x faster than Fermi...They would be hinting that Kepler is 3x faster than Fermi if they didn't mention the alteration of the demo to use FXAA instead of MSAA and thus be much lighter on the GPU.Nobody is hinting that Kepler is 3x faster than Fermi... it would be far and away the greatest generational GPU performance leap ever.[/citation]Stop making sense! We need less reality, and more sensationalism. Everyone knows Kepler will be 11 times faster than Fermi and the top-end chips will end up in cards costing no more than $100.

Also FXAA is somehow superior in terms of IQ, despite being uglier and blurring stuff.
 

alextheblue

Distinguished
[citation][nom]pocketdrummer[/nom]And I'm basing this claim on my use of the 7800GT CO, 9600GT (super quiet), and GTX570. All cards are practically silent at idle and are well below the volume level of the game under load.[/citation]You realize that power draw and noise levels aren't joined at the hip, right? AMD cards have traditionally been more power efficient. Some of them are louder, but that has more to do with the cooling solution selected and the desired operating temperature for a given load (how aggressively the fans ramp up; rough sketch below).

So yeah, in the past couple of generations, AMD has typically had the edge in power consumption. Since Kepler is looking to be another monster chip, I suspect that will continue. Now the performance crown, that's another question entirely.
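
For illustration, here's a minimal Python sketch of that point: noise is governed by the fan curve the card maker picks, not by power draw alone. The temperature breakpoints and duty cycles below are hypothetical, not any vendor's real defaults.

[code]
# Hypothetical fan curves: GPU temperature (deg C) -> fan duty cycle (%).
# A "quiet" BIOS ramps late and gently; an "aggressive" one ramps early.
QUIET_CURVE = [(30, 20), (60, 30), (75, 55), (90, 100)]
AGGRESSIVE_CURVE = [(30, 35), (50, 60), (70, 85), (85, 100)]

def fan_duty(temp_c, curve):
    """Linearly interpolate the fan duty cycle between curve breakpoints."""
    if temp_c <= curve[0][0]:
        return curve[0][1]
    for (t0, d0), (t1, d1) in zip(curve, curve[1:]):
        if temp_c <= t1:
            return d0 + (temp_c - t0) / (t1 - t0) * (d1 - d0)
    return curve[-1][1]

# Same 80 deg C load temperature, very different fan speeds (and noise):
print(fan_duty(80, QUIET_CURVE))       # 70.0 %
print(fan_duty(80, AGGRESSIVE_CURVE))  # ~95 %
[/code]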
 

wardoc22

Distinguished
Jul 11, 2011
No info on the fps though...
A 3x GTX 580 setup might pump out 60 fps, so around 20 fps per GPU (I know it doesn't actually work that way, because of variables like the GPUs not scaling 100%; rough math sketched below), and the single card might pump out 30 fps...

Lol
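
To spell out that back-of-envelope math, here's a minimal Python sketch; the 60 fps figure and the 75% scaling efficiency are illustrative assumptions, not measured SLI numbers.

[code]
# Rough multi-GPU math: N GPUs rarely give N times the fps, because each
# added card contributes less than 100%. All numbers here are made up.
def sli_fps(single_gpu_fps, num_gpus, scaling_efficiency):
    """fps of a multi-GPU setup, assuming each extra card adds a fixed fraction."""
    return single_gpu_fps * (1 + (num_gpus - 1) * scaling_efficiency)

tri_sli_fps = 60      # hypothetical 3x GTX 580 result in the demo
efficiency = 0.75     # assumed: each extra card adds ~75% of one card

single_fps = tri_sli_fps / (1 + 2 * efficiency)
print(round(single_fps, 1))   # ~24 fps per card, not a clean 20
[/code]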
 

airborne11b

Distinguished
Jul 6, 2008
Well, I feel sorry for people who already jumped on the AMD 7000 cards. The 7000s, by the way, were not revolutionary; they were a standard increase in power with a standard improvement in efficiency. Both companies do it every new generation (a smaller process helps efficiency, and new tech means faster chips; this is standard). So the 7000s were a decent improvement and brought AMD's cards slightly ahead of, or on par with, the 18-month-old 500 series. The problem, though, is that the 600 series will either live up to the hype and dominate in performance, or at the very least be competitive in price/performance with the AMD 7000s, in turn driving down prices.

So why anyone would jump on the 7000 series at this stage in the game is beyond me, unless they were using something REALLY outdated and just needed something right now.
 

aidynphoenix

Distinguished
Apr 26, 2009
You people talking about noise levels of the cards must be confused; Nvidia and AMD/ATI have almost nothing to do with that. The cooler differs depending on which brand made the card, whether it's XFX, EVGA, or any other brand. You can't just say that because the last two AMD or Nvidia cards you had were loud, they all are.
 

Guest

Guest
MSAA and any other evolution of it should be dead; the only way forward is SSAA. With hardware faster than a GTX 580, it only makes sense. Anything else is unacceptable.
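
For context, here is a rough Python sketch of why SSAA needs so much more hardware than MSAA or FXAA: SSAA shades every sub-sample, MSAA shades roughly once per pixel but stores and resolves extra coverage samples, and FXAA is a single post-process pass. The cost weights are made-up illustrative numbers, not measurements.

[code]
# Very rough per-pixel cost model (illustrative weights only):
#   SSAA: shade AND store N samples per pixel.
#   MSAA: shade ~1 sample per pixel, but store/resolve N coverage samples.
#   FXAA: one full-screen post-process pass over the finished image.
def relative_cost(mode, samples=4, shade=1.0, bandwidth=0.25):
    if mode == "SSAA":
        return samples * (shade + bandwidth)
    if mode == "MSAA":
        return shade + samples * bandwidth
    if mode == "FXAA":
        return shade + 0.1   # a single cheap extra pass
    raise ValueError(mode)

for mode in ("FXAA", "MSAA", "SSAA"):
    print(mode, relative_cost(mode))
# FXAA 1.1, MSAA 2.0, SSAA 5.0 -- which is why SSAA wants much faster hardware
[/code]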
 

hetneo

Distinguished
Aug 1, 2011
This is ridiculous. As far as we know, the card running the Samaritan demo could have been a dual-GPU GTX 690 prototype, and it does use FXAA instead of MSAA. I am more interested in seeing Nvidia stop whining about low yields from TSMC; AMD, Apple, and other TSMC customers have no problems with yields, and AMD is also doing their 28nm chips there. Also, not seeing again the BS we got with the tessellation optimization code in the Crysis 2 DX11 patch and Batman Arkham Asylum would be nice. Nvidia should spend more time making a better GPU than paying game developers to include "optimized" code that significantly hampers AMD performance-wise; no wonder Apple is dropping them from MacBooks in favor of Intel's IGP.
 

sh4dow83

Distinguished
Jul 4, 2011
As is clear now, it was one stinking pile of PR BS. But first off... no, this could not have been a dual-GPU prototype, because it says "single graphics chip", and as the name "dual GPU" implies, there are two chips.
But it seems they either optimized or scaled back the Samaritan demo like hell, because THESE are the first 680 benchmark results:
http://www.pcgameshardware.de/aid,873435/Erster-Test-der-Nvidia-GeForce-GTX-680-aufgetaucht-AMDs-Radeon-HD-7970-geschlagen/Grafikkarte/News/bildergalerie/?iid=1642273

And as one can see, the 680 doesn't even deliver 25% more performance than a 580 on average. So... unless that single Kepler-based chip was from an 880 prototype (and even then it's ridiculous), I don't see 300% happening.
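
As a quick sketch of the arithmetic behind that kind of "not even 25% faster on average" claim, here's a minimal Python example; the fps pairs are placeholders, not the numbers from the linked benchmarks.

[code]
# Average speedup of card B over card A from per-game fps pairs.
# These fps values are placeholders, NOT the pcgameshardware results.
from statistics import geometric_mean

fps_gtx580 = [48, 62, 35, 70]
fps_gtx680 = [58, 75, 42, 88]

ratios = [b / a for a, b in zip(fps_gtx580, fps_gtx680)]
print(f"average speedup: {geometric_mean(ratios):.2f}x")  # ~1.22x, i.e. about +22%
[/code]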
 
[citation][nom]Anonymous[/nom]MSAA & any other evolution of it should be dead the only way forward is SSAA with hardware faster then GTX580 it only makes sense. Anything else is unacceptable.[/citation]

TXAA is supposed to be good when it comes out too.[citation][nom]sh4dow83[/nom]as is clear now that is was one stinking pile of PR BS. but first off... no, this could not have been a dual GPU prototype because it says "single graphics chip". and as the name "dual GPU" implies, there are two chips.but it seems they either optimized or scaled back the samaritan demo like hell because THESE are the first 680 benchmark results:http://www.pcgameshardware.de/aid, [...] id=1642273and as one can see, 680 doesn't even deliver 25% more performance than a 580 - on average. so... unless that single kepler-based chip was from an 880 prototype (and even then it's ridiculous), i don't see 300% happening.[/citation]

Having seen this, and now knowing that the GTX 680 uses the GK104 GPU instead of GK110, maybe there will be a GTX 685 or 680 Ti or even 685 Ti, and it will be a lot faster than the 580. It still wouldn't be three times faster (if it's even twice as fast, I'll be greatly surprised), but it/they could be a lot faster than the 680.
 