Nvidia GeForce GTX 1000 Series (Pascal) MegaThread: FAQ and Resources



This VRAM stacking stuff has, as far as I know, only ever been claimed by AMD, and so far there is no actual demo showing it working. I don't think it was specifically about DX12 either: what AMD claimed back then was that VRAM stacking might be possible with a low-level API, not DX12 specifically. But as I said, there isn't a single demo showing how it works in reality. Some people then speculated it might be possible with SFR.

Also, some people see DX12 as the nail in the coffin for multi-GPU, and to be honest, looking at the trend, I'd say that might end up true. Even without DX12, game developers are already moving away from multi-GPU, since most of their new rendering techniques don't work well with AFR. Multi-GPU is also in the GPU makers' interest, because it sells more GPUs; for game developers, supporting it won't increase their sales, it only adds more complexity to their games.

And with the "day one patch" already part of game development, developers are too busy fixing their own games to take on the extra issues of supporting multi-GPU. Simply enabling multi-GPU support is probably not that hard, but optimizing it is a different story altogether. An ex-Nvidia engineer mentioned that almost 50% of the driver team effort at both GPU makers goes into making multi-GPU work properly in existing games, and that's coming from the people who know the ins and outs of GPU hardware best.
 
Nvidia stated back before the DX12 release that VRAM stacking is possible, but that it would be up to developers to program it. Also, multi-GPU support isn't going down, it's going up. DX12, as seen in the games that already support it, is pretty good: Tomb Raider, Hitman, and I believe the new Gears of War all show great SLI scaling. It's gotten better over the years, not worse. Not sure where you're getting the SLI info from.
 
@azzazel_99:
VRAM stacking would only work if the different GPUs render parts of the screen that use different textures. It might be possible in scenarios where parts of the screen statically display different parts of the game (e.g. rendering the HUD on one GPU, or a split screen where each split is a different "level" or part of the game world), and even then it would be difficult to make it run efficiently. Another way would be to calculate the raw image on one GPU and the post-processing effects on another (although only a few effects need multiple versions of the same pixels, so there would be little to save).

The problem is that accessing the other GPU's memory is certainly slower than accessing local VRAM, at least if PCIe lanes are used for the communication (since that traffic goes through the CPU). For scale: PCIe 3.0 x16 moves roughly 16 GB/s per direction, while a GTX 1080's local GDDR5X delivers around 320 GB/s, so a remote access is an order of magnitude slower. If you cannot reach the other GPU's memory at a speed comparable to your own, you get stuttering the moment you try. And if you want to stack memory without remote access (through SFR), you need to predict which part of the screen will use which textures before actually rendering it; while that might be possible in principle, I'd guess it comes with either a gigantic CPU overhead or constantly shuffling gigantic amounts of data between the GPUs.
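For the curious, DX12's linked-node mode makes that placement completely explicit: the application decides which GPU's VRAM each resource physically lives in, and which GPUs are allowed to address it. Here's a minimal C++ sketch of my own (assuming a Windows/DX12 toolchain; the texture size and node masks are placeholder values, and error handling is stripped), nothing from an actual game:

[code]
// Build on Windows with: cl /EHsc nodes.cpp d3d12.lib
#include <windows.h>
#include <d3d12.h>
#include <wrl/client.h>
#include <cstdio>

using Microsoft::WRL::ComPtr;

int main() {
    // One ID3D12Device can span an SLI/CFX-bridged group ("linked" mode);
    // each physical GPU is a "node" addressed by a bit in a mask.
    ComPtr<ID3D12Device> device;
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0,
                                 IID_PPV_ARGS(&device))))
        return 1;
    UINT nodes = device->GetNodeCount();
    printf("GPUs in linked group: %u\n", nodes);

    // "Stacking" VRAM means deciding, per resource, which card it lives on.
    D3D12_HEAP_PROPERTIES heap = {};
    heap.Type             = D3D12_HEAP_TYPE_DEFAULT;
    heap.CreationNodeMask = 0x2; // physically allocate on the second GPU...
    heap.VisibleNodeMask  = 0x3; // ...but let both GPUs address it (slow remotely)

    D3D12_RESOURCE_DESC tex = {};
    tex.Dimension        = D3D12_RESOURCE_DIMENSION_TEXTURE2D;
    tex.Width            = 1024; // placeholder texture dimensions
    tex.Height           = 1024;
    tex.DepthOrArraySize = 1;
    tex.MipLevels        = 1;
    tex.Format           = DXGI_FORMAT_R8G8B8A8_UNORM;
    tex.SampleDesc.Count = 1;

    ComPtr<ID3D12Resource> texture;
    if (nodes > 1) // node mask 0x2 only exists with 2+ linked GPUs
        device->CreateCommittedResource(&heap, D3D12_HEAP_FLAG_NONE, &tex,
            D3D12_RESOURCE_STATE_COMMON, nullptr, IID_PPV_ARGS(&texture));
    return 0;
}
[/code]

Nothing "stacks" on its own: every byte has a home GPU, and remote reads go over the bridge or PCIe at the speeds above.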

It won't happen soon is what I am trying to say :)
 


I would have to agree as well. The only way SLI and CFX would be widely adopted by game devs is if both Nvidia and AMD figured out some way to let devs implement the capability easily, without major drawbacks.
 


For DX11, sure. But not with DX12. With DX12, the responsibility for implementing multi-GPU rests entirely in the developers' hands; it isn't even called SLI anymore. That's why the 1060 is able to work in tandem in Ashes under DX12 despite the card having no official SLI support from Nvidia. I'm not sure about Hitman's DX12 mode, but even in Rise of the Tomb Raider, multi-GPU support in DX12 was only added recently, after "it has been requested by many," according to the developer. Other devs simply ignore it, like Remedy did with Quantum Break: when asked about multi-GPU support for their game, they simply said "DX12 does not support multi-GPU," when we know multi-GPU support is baked into DX12 natively, which removes the need for the custom driver support that SLI and CF require to work.
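For what it's worth, that native support looks roughly like this from the code side. Again just a sketch of my own (assuming a Windows/DX12 toolchain, error handling trimmed), not code from any shipping game, but it shows why no SLI profile or bridge is involved:

[code]
// "Unlinked" explicit multi-adapter: every DX12-capable GPU in the box is
// enumerated separately, and the app drives each one with its own device.
#include <windows.h>
#include <d3d12.h>
#include <dxgi1_4.h>
#include <wrl/client.h>
#include <cstdio>

using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<IDXGIFactory4> factory;
    if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory))))
        return 1;

    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0;
         factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i) {
        DXGI_ADAPTER_DESC1 desc;
        adapter->GetDesc1(&desc);
        if (desc.Flags & DXGI_ADAPTER_FLAG_SOFTWARE)
            continue; // skip the WARP software rasterizer

        // Any DX12-capable card can be driven this way -- two 1060s, or
        // even an AMD+Nvidia mix -- no SLI profile or bridge required.
        ComPtr<ID3D12Device> device;
        if (SUCCEEDED(D3D12CreateDevice(adapter.Get(), D3D_FEATURE_LEVEL_11_0,
                                        IID_PPV_ARGS(&device))))
            wprintf(L"Adapter %u: %s (%zu MB VRAM)\n", i, desc.Description,
                    desc.DedicatedVideoMemory / (1024 * 1024));
    }
    return 0;
}
[/code]

Each card gets its own independent device; splitting the frame and moving intermediate results between them is entirely the engine's problem, which is exactly why most devs don't bother.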
 
I just don't see multi-GPU setups going anywhere. The demands of some of these 4K or 100+ Hz 1440p monitors are just too great for any one card. If SLI support were going away, there would be plenty of new games unable to hit max settings at 4K 60fps, or in my case 100 Hz at 3440x1440. Anything more than two cards is a waste, but people have been saying this "SLI support is going away" stuff for years, and it's only gotten better overall. Single-card setups just can't max things out on current monitors and hardware, so multi-GPU configs are needed.
 
I just know that right now SLI has worked perfectly for me, with no issues I can complain about in any game. The GTX 1080 won't give me the fps I want/need to enjoy my monitor at its native resolution and the 100 Hz it runs at, so I either need two of them or a single Titan X to get closer to that goal. I'm just not sure yet about dropping that god-awful amount on a single GPU.
 
Multi-GPU will probably still be here for years; GPU makers will keep inventing "reasons" to keep it relevant. It's just that DX12 is not going to increase multi-GPU adoption in games the way some people like to believe.
 
I don't think it's just the GPU manufacturers; it's the monitors too. To fully enjoy some of these monitors, like the one I have, you need a dual-GPU setup. That's just the way it is, because no single card is strong enough.
 
Just to be clear, SLI and Crossfire use alternate frame rendering, and a couple of games are already using alternative methods. The new Unreal Engine uses a type of software frame compression to improve performance on lower-end hardware (probably targeting consoles with that idea); they don't support SLI or Crossfire at all and have stated they don't plan to. DX12 also offers two modes, linked and unlinked. Linked allows for memory stacking, and unlinked allows for a type of software Crossfire/SLI. Ashes of the Singularity has demonstrated unlinked mode, even mixing AMD and Nvidia cards.
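If anyone wants to check which of the two modes their own system exposes, a small C++ sketch along these lines (same Windows/DX12 assumptions as before; error handling omitted) will tell you:

[code]
// GetNodeCount() > 1 => the driver presents an SLI/CFX group as one
// "linked" device; CrossNodeSharingTier hints how cheaply those nodes
// can touch each other's VRAM (tier 0 = not supported at all).
#include <windows.h>
#include <d3d12.h>
#include <wrl/client.h>
#include <cstdio>

using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<ID3D12Device> device;
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0,
                                 IID_PPV_ARGS(&device))))
        return 1;

    D3D12_FEATURE_DATA_D3D12_OPTIONS opts = {};
    device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS,
                                &opts, sizeof(opts));

    printf("GPUs in linked group:    %u\n", device->GetNodeCount());
    printf("Cross-node sharing tier: %d\n", (int)opts.CrossNodeSharingTier);
    return 0;
}
[/code]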

http://wccftech.com/nvidia-launching-dual-gpu-flagship-graphics/

(No idea why that's the link for the article; it doesn't match the title at all.)

Until they pick a common option and make the choice easy (i.e. whatever is cheapest to implement), I'd expect things to go downhill for the multi-GPU setups available now.

Give it a few years and you'll probably be able to toss as many GPUs into the mix as you want and just get more performance. But I think this transition period is not going to be very SLI-friendly, so I, at least, am going to stick with a single-GPU config for a while.
 


I think that's the general idea with DX12 multi-GPU, but it's not easy to implement. Why is Nvidia dropping 3-way and 4-way? One of the reasons is CPU bottlenecking. Another problem is that with DX12, things like multi-GPU have to be done by the game developers themselves, and most of them have no interest in multi-GPU.
 
I just know one GPU can't do what I want. One 1080 can't hold a solid 100fps by itself at 3440x1440, so it's either two of those, or I go big and get two Titan XPs. The triple-A titles I play have so far always supported SLI, except Doom and The Division. The Battlefields always have, as does Overwatch. Not sure about Watch Dogs 2, Titanfall 2, or Mafia 3, but with those graphics I would think they would. As it stands, there isn't a single card that can hold 60fps maxed out at 4K in all games. Not even the Titan X.
 


I don't see a lot of info on the warranty, but considering they have a Turkey-specific site, you should be OK: https://www.zotac.com/tr/ They also have a live chat feature on the support page; I'd start a chat with them and verify the warranty info to be sure you have it right. At https://www.zotac.com/tr/support there's a live chat button on the left side of the screen.
 
So normally I would buy the Classified model of a card, but this go-around I'm really not seeing any difference between the FTW 1080 and the Classified model. More power phases, yes, but when the cards are voltage-locked and you can't do voltage increases, what good is that?
 
Funny you should make that comment. I just finished watching this video from Gamers Nexus about the FTW PCB. It's a breakdown of the VRM setup and how it relates to the card's abilities; lots of stuff I didn't know about how VRMs work and so on. Based on this video, I'd say the extra phases of the Classified card would be pretty much worthless for performance. It cleared up a few thoughts in my mind, verifying a couple and showing me where my thinking was wrong in a couple of other ways. Very much worth the watch for many reasons.

[video="https://www.youtube.com/watch?v=9RUrR94f64U"][/video]
 
That's what the video goes into; the narrator uses the words "extreme overkill" a lot when describing the card's VRM layout :)

I still like the card and have the FTW on my short list for when I do my new build, but that's because I'm going to play with a custom BIOS and see if I can unlock some more speed. With the over-engineered PCB, it seems it would be marginally safer to really let the voltage flow on a custom BIOS. Throw in some serious water cooling, and I'm curious whether the Pascal chip has any more speed to give beyond what we've already seen. I think the answer is no, but I still want to play with it some.
 


Actually, it wouldn't be the first time a lower-tier card had more VRAM than one above it. The GTX 680 came with either 2GB or 4GB, but the 660 had a 3GB variant. In my opinion, though, the 128-bit bus the 1050 will supposedly have is just not wide enough to fully utilize 4GB of VRAM. We'll find out for sure when review sites get their hands on one.
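Some rough back-of-the-envelope math on that, assuming the typical 7 Gbps GDDR5 that's been rumored (the final specs aren't confirmed, so treat these as placeholder numbers):

[code]
bandwidth = (bus width / 8) x effective data rate
GTX 1050 (rumored): (128 bit / 8) x 7 Gbps = 16 bytes/transfer x 7 GT/s = 112 GB/s
GTX 1060 6GB:       (192 bit / 8) x 8 Gbps = 24 bytes/transfer x 8 GT/s = 192 GB/s
[/code]

112 GB/s is fine for a budget card, but it's the kind of figure that runs out of steam long before you can stream 4GB worth of textures effectively.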

 
Alright, got my GTX 1060 G1 Gaming 6GB card in the mail Friday. It's been a blast playing BF1 at ultra, 2K, 60fps over the past two days.

Just a quick tip on this card: if you hear a weird buzzing noise when the fans are at 85-95% RPM, it's NOT the fans! I figured out that the front of the plastic shroud is bumping against the aluminium heatsink, which is what makes the noise, but it only happens at very high RPMs. I thought I'd let everybody know in case they have the same issue.

Besides that, the card is awesome! I'm absolutely stunned at how fast this card is, coming from a GTX 750 Ti.
https://drive.google.com/file/d/0B26PgkCHG5UZdzJkZXA4ZHd1NVE/view?usp=sharing
https://drive.google.com/file/d/0B26PgkCHG5UZTGVDZzhJUXNwVlE/view?usp=sharing

Oh, and BTW: in BF1 I was hitting 3750MB of VRAM usage at 2K, so if anybody plans on using the 3GB model to run BF1, don't. You could most likely get away with it at 1080p, but you'd be very close to the 3GB mark.
 


Actually, you're wrong. 😛

The GTX 780 was made in both 3GB and 6GB models, while the 780 Ti only had 3GB of VRAM.

But for a more practical card, yeah, I'd say it's a first indeed.
 


85-95% RPM seems awfully high. What are your temps? Did you resolve the plastic shroud problem?
 
The high RPMs were only for testing. I was purely testing the fans to see how they act at higher speeds. Under normal gaming usage, they barely hit 60%.

Yes, I was able to fix it, sort of. After putting pressure on the offending part of the shroud, I got the sound down to a whisper. I also straightened the card by re-screwing the PCIe bracket while simultaneously lifting the card up.