Nvidia Kepler GK104: More Leaks, Rumors on Specifications


DEVILVSANGEL00

Distinguished
May 21, 2008
65
0
18,660
Nvidia should do a paper launch before people get fed up with all the so-called leaked specs and games being played and just go with AMD. I owned a 7970 already; it was a fast and expensive card, but what surprised me the most is how far you could overclock it without voltage changes. I did about a 35% overclock using the standard cooler and no voltage increases, and it was a beast, beating the 6990 and GTX 590 in almost every benchmark, and that was without the drivers being optimized. AMD could easily have shipped 1150MHz core and 1400MHz memory as stock clocks and given Nvidia a real challenge.

Anyway, I sent back my card and got my money back in anticipation of Kepler, but every day I'm getting more disappointed with the wait and with not knowing how fast the flagship Kepler will truly be compared to an overclocked 7970. All I can say is I hope Nvidia lives up to the hype, especially after having so many months to catch up and create a faster GPU that beats overclocked 7970 performance; otherwise AMD will truly have won this round.
 
The two most exciting features for gaming are:
1) Tessellation, and
2) Automatic Resource Allocation

These go hand-in-hand.

Tessellation allows you to scale quality with distance, so that nearby geometry gets more detail and fewer resources are wasted on things further away. When a game engine is truly designed around tessellation, the amount of processing AND memory required can be reduced significantly.
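
To make that concrete, here's a toy sketch of the distance-based scaling idea (Python; the function name, constants, and falloff curve are all made up for illustration, not from any real engine):

# Hypothetical LOD heuristic: high tessellation factor up close,
# decaying toward a flat, untessellated patch in the distance.
def tess_factor(distance_m, max_factor=64, falloff_m=20.0):
    factor = max_factor * falloff_m / (distance_m + falloff_m)
    return max(1, min(max_factor, round(factor)))

for d in (1, 10, 50, 200):
    print(d, "m ->", tess_factor(d))   # 1m -> 61, 200m -> 6

The point is simply that detail (and therefore work and memory) is spent where the camera can actually see it.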

Resource allocation.
We really need code in the engine that automatically allocates resources so that we maintain a constant frame rate (e.g. 60FPS, VSYNC'd).

This would also save developers a lot of time by reducing how much they have to tweak the game to ensure frame rates don't plummet at certain points.
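
A minimal sketch of what such a feedback loop might look like (Python; every name and threshold here is hypothetical, just to show the shape of the idea):

# Hypothetical engine hook: nudge a 0..1 quality knob to hold a target
# frame time (~16.7ms for 60FPS). A real engine would smooth the timings
# and scale specific costs (resolution, shadows, LOD) rather than one knob.
TARGET_MS = 1000.0 / 60.0
quality = 1.0  # 1.0 = full detail, 0.0 = minimum

def on_frame_rendered(frame_ms):
    global quality
    if frame_ms > TARGET_MS * 1.05:       # running slow: shed detail fast
        quality = max(0.0, quality - 0.02)
    elif frame_ms < TARGET_MS * 0.80:     # plenty of headroom: restore slowly
        quality = min(1.0, quality + 0.005)
    return quality

Dropping detail quickly but restoring it slowly is the usual trick to avoid oscillating around the target.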

XBox vs PC:
If done properly, developers will be able to build both versions at exactly the same time (they can do that now, more or less); with an engine that can automatically allocate resources, the EXACT same game would automatically make use of the better PC hardware.

In other words, design a game that runs AMAZING in the future, but can scale to the low-end with ZERO STUTTER and CONSTANT FPS.

*It also means current PC gaming systems with DX11 cards are likely to actually look much better in the future with no hardware changes.

(The next Xbox is reportedly based on an HD6xxx or HD7xxx graphics chip, which is DX11. While PC graphics will advance, there is little indication of a need, or even a desire, for DX12 for a long, long time. We may see more tessellation etc., but the baseline hardware will remain DX11.)
 

dragonsqrrl

Distinguished
Nov 19, 2009
1,280
0
19,290
[citation][nom]DEVILVSANGEL00[/nom]Anyway, I sent back my card and got my money back in anticipation of Kepler, but every day I'm getting more disappointed with the wait and with not knowing how fast the flagship Kepler will truly be compared to an overclocked 7970. All I can say is I hope Nvidia lives up to the hype, especially after having so many months to catch up and create a faster GPU that beats overclocked 7970 performance; otherwise AMD will truly have won this round.[/citation]
The "flagship" Kepler GPU won't be out for a while. gk104 was originally targeted as an upper mid-range GPU, but now it looks like Nvidia may be repositioning it as an entry level high-end GPU.

The worst-case scenario has GK110 launching in late Q3 2012, while other, more optimistic "leaks" suggest sometime in Q2. Either way, it won't be launching alongside GK104 in the next couple of months. But all the rumors I've heard so far suggest GK104 will be performance-competitive with the HD7970, so regardless of the more powerful GK110, it should still perform similarly to current high-end cards.

So if you're waiting for something significantly faster than the HD7970, you may be waiting for a while. GK104 was never meant to be the performance answer you've been looking for.
 

phenom90

Distinguished
Jul 27, 2010
623
0
19,010
If this leaked info about GK104 turns out to be true... about its performance relative to the HD 7900 series, and a price/performance position like the current GTX 560 Ti's... then it had better be named GTX 660 or even GTX 760. Imagine a mid-range card from Nvidia on par with the HD 7970... I wonder what's in store for Nvidia's high-end segment.
 

phenom90

Distinguished
Jul 27, 2010
623
0
19,010
If GK104 = mid-range = GTX 660, and it is on par with the HD 7970...
then what's in GK110/112 = high-end = GTX 680???

I cannot imagine how freakin' fast it will be...
 

The Halo Don

Distinguished
Apr 24, 2011
623
0
19,010
All this competition is only good for us buyers.

I think it's silly to stick to just AMD or Nvidia; I say go for the biggest bang for your buck.
 
[citation][nom]Sourav_ExploringPC[/nom]I was hell bent on buying the ATi 7850. Should I wait and see what nVIDIA has to offer? :|[/citation]

It's AMD, not ATI. ATI was bought out and no longer exists as a company, or even as a brand name as of the Radeon 6000 series. AMD branded the 6000 series and all subsequent Radeon series as AMD.

[citation][nom]photonboy[/nom]Tessellation allows you to scale quality with distance, so that nearby geometry gets more detail and fewer resources are wasted on things further away. When a game engine is truly designed around tessellation, the amount of processing AND memory required can be reduced significantly. [...][/citation]

Radeon 6000s are notoriously poor performers at tessellation compared to Nvidia cards. The Radeon 7700/7800/7900 cards are a lot better, and even more efficient at tessellation than the GTX 500 series.
 

deploylinux

Distinguished
Feb 20, 2012
2
0
18,510
Well, performance is important, but so are reliability, drivers, power, and thermals. It seems to take Nvidia at least a year to work out the kinks with each new generation of video card. Some of us buy the best card that will last a long time every 3-5 years. AMD/ATI aside, Fermi is just now really getting to that point; the 560 and the 560 Ti 448 seem to be the best long-term purchases at the moment (the 448 especially, for thermals). So if Kepler is awesome, it should make a good purchase a year from now. However, I'm just now replacing my 8800 GT 512 with a 560 Ti 448.
 
AMD vs NVIDIA:
I first look at this:

Bang for the BUCK!

Assuming these are equal, I normally look at the following things and weigh the Pros and Cons:

1) HEAT and NOISE (remember the HOT NVidia GTX4xx series)
2) PhysX
3) Driver Support (I give NVidia a slight nod here)
4) CUDA/OpenCL (video transcoding and compute options)
5) Tessellation, Anti-Aliasing and other features (I like to use AA as much as possible, even if I have to FORCE it like I do in Mass Effect 1 and 2)
6) Cooling Solutions and Overclocking?

For example, Sapphire Technology has a great HD7850 OC card that has officially been overclocked to 45% above stock. Wow! That's with its own air-cooling solution and nothing else! It has two BIOSes: one for stock and one for overclocking.

I'll definitely be comparing this Sapphire Tech HD7850 OC to the NVidia GTX670/680 (GTX770/780 ?) cards and looking carefully at the other options.

I'm leaning towards NVidia at this point but I'm waiting until THREE MONTHS after the new GTX high-end cards are released to weigh the pros and cons.
 
The Radeon 7870 has been benched, and it seems to land right next to the GTX 580, the 660, and the 7950 in performance. The 7900 series seems idiotically designed: 1280 shaders in the 7870 at a 1GHz reference clock versus 1792 shaders at an 800MHz reference clock in the 7950. Clock rates scale performance much more linearly than shader counts do, so AMD should have known the 7870 would perform similarly to the much more expensive 7950. The 7950 probably overclocks better, but not enough to justify a $100 higher price tag. The same goes for the 7970: it's next to useless if you overclock your video card, because the 7950 overclocks just as far, maybe even a little farther (the 7950/7970 reference cooler is supposed to be the same, so cooling performance is equal). An overclocked 7950 is indistinguishable from an overclocked 7970.
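
A rough back-of-the-envelope under the naive assumption (mine, not AMD's) that throughput scales as shaders × clock:

# Naive model: performance ~ shader count * clock speed
print(1280 * 1.00)  # 7870: 1280 "shader-GHz"
print(1792 * 0.80)  # 7950: ~1434 "shader-GHz", only ~12% more on paper

And since games scale sub-linearly with shader count, the on-screen gap ends up even smaller than that 12%.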

Technically, the same is true of the 6950 vs. the 6970/unlocked 6950: shaders need to be at the same clock rate, and you need a lot more of them to beat another card of the same architecture purely by increasing shader count. Now, where will the rumored 7890 fit in? AMD can't be stupid enough to make it perform as well as the 7970 but price it between the 7870 and the 7950.

AMD really messed up here; check out the benches from Tom's review of the 7870/7850 (the 7850 isn't even stable, so either GCN has some serious problems there and AMD failed to understand the basics of performance scaling in video cards, or the 7900 series is severely bottlenecked somewhere else too):

http://www.tomshardware.com/reviews/radeon-hd-7870-review-benchmark,3148.html
 
**SORRY, I meant the "HD7950 OC", not the HD7850.

The HD7950 OC from Sapphire Technology has a dual BIOS (the second for overclocking). It was overclocked by 45% with no tricks.

CPU bottleneck:
How do I know if my CPU will be a bottleneck with a faster graphics card? Here's how (roughly); there's also a small script version sketched after the notes below:

1. Start the Task Manager (CTRL-ALT-DEL) and open the "Performance" tab to monitor the CPU
2. Run a demanding game for a few minutes, then close the game and look at the CPU usage history under "Performance"
3. Repeat this entire process for at least three games, as every game is different

Analysis:
- If the CPU hits 100% (or close) on ALL threads, you are already bottlenecked
- If NONE of the cores goes above 50%, you can handle a graphics card with at LEAST double the performance (i.e. 2x the framerate in benchmarks over your current graphics card)
- If NONE of the cores goes above 25%, you can handle a card with at LEAST 4x the performance.

Analysis issues (important):
- Test with Hyper-Threading turned OFF. Many (most?) games won't even use the extra threads.
- Don't just use the AVERAGE CPU value. Hyper-Threading makes that number useless, and consider that if you had TWO CORES with one at 100% and the other at 0%, the game could be bottlenecked on a single core, yet the AVERAGE for a dual-core CPU would read just 50%.
- Overclocking the CPU doesn't always result in higher frame rates.
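
And here's the script version I mentioned above, a rough sketch using Python and the third-party psutil library (the sampling window is arbitrary, and you still have to run the demanding game yourself while it samples):

import psutil  # pip install psutil

# Sample per-core usage once a second while the game runs, and track
# the peak of each core. Judge by the hottest core, not the average.
peaks = None
for _ in range(60):  # roughly one minute of samples
    per_core = psutil.cpu_percent(interval=1, percpu=True)
    peaks = per_core if peaks is None else [max(a, b) for a, b in zip(peaks, per_core)]

print("Peak usage per core:", peaks)
print("Hottest core:", max(peaks), "%")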

My current system analysis:
I have an Intel i7-860 CPU overclocked to 3.34GHz and an HD5870 graphics card. I have MANY games, and Hyper-Threading currently adds NO benefit in any game at all (it might with a much faster card).

I estimate about 25% CPU usage on average across the FOUR PHYSICAL CORES. Again, though, that doesn't mean the game can actually use 100% of all those cores.

I have EVERY major game from the last ten years (many Steam sales), and there is ONLY one game that Hyper-Threading makes unplayable: "Spider-Man: Web of Shadows" (severe stuttering and very low frame rates).
 

phenom90

Distinguished
Jul 27, 2010
623
0
19,010
Since GK104 will be released as the GTX 680... it's most likely GK110/112 won't reach the market before AMD's next 8000 series. GK104 is meant to compete with the HD 7900 series at lower power consumption, and a dual-GK104 card is even planned to counter AMD's HD 7990, so Nvidia certainly has no reason to rush out its strongest single chip, even if it isn't facing any problems at the design or production level. "Save the best for last" is the best way I can describe it... so anyone waiting for GK110/112 will likely be disappointed...
 