Hands-On With Nvidia's Titan X (Pascal) In SLI

Status
Not open for further replies.

clonazepam

Distinguished
Jul 10, 2010
I vaguely remember reading a different article where the two types of SLI bridges were X-rayed and examined much more closely. IIRC, there's very little difference. The newer one has the traces all set to equal lengths (the click of a button in design software), and I think one pin is changed to ground, which lets the driver detect its presence. Otherwise I believe they're physically identical, aside from the trace lengths, the LED, and the grounded pin.

Anyway, I've done the multiple card setups for many generations, but game development is going in a direction where that's no longer a worthwhile endeavor as support dwindles more and more.

Thanks for the read.
 

Realist9

Reputable
May 31, 2014
This is an interesting experiment "for science." But that's all. SLI support in games is so spotty, unreliable, and sometimes non-beneficial that it is now, IMO, irrelevant.
 

Compuser10165

Honorable
Sep 12, 2015
I think going X99 with such a system is a requirement in order to reduce the CPU bottleneck. Even with the 10-core 6950X available, an overclocked 5960X (at about 4.4-4.5GHz) should be enough for such a setup, because Haswell-E is easier to push to higher frequencies than Broadwell-E.
 

ledhead11

Reputable
Oct 10, 2014
Thanks for the review. It's nice seeing these SLI reviews especially since there are so few for either the Titan X or 1080's.

When I got my 1080s I tried the EVGA bridge and had problems getting full contact on my cards. Some boots would show the cards, some didn't, so for a time I used dual ribbon bridges until I got the Nvidia HB bridge a week or two later. The Nvidia one worked with no problems. The main difference I noticed was a few more FPS here and there, but mostly a more stable, consistent frame rate. I read the same article about the X-ray comparisons before purchasing, and I have to say all this info is getting pretty consistent.

I can tell you that 1080 SLI behaves very similarly to the Titan X SLI reviews I've seen. Both SLI setups really shine at 4K/60Hz or 1440p/144Hz. When I tried DSR 5K on my 4K display, the frame rate quickly dropped to around 40 FPS.

I'm not really seeing the CPU bottleneck you mention except in the Firestrike tests. Whether at 4K/60Hz or 1440p/144Hz, my 4930K @ 4.10GHz rarely goes above 40% usage.

I completely agree with you about what to use the Titans for: 4K/5K all the way. 1080 SLI starts to hit a ceiling around 60-80 FPS at 4K and averages 100-150 FPS at 1440p, depending on the game.
 

niz

Distinguished
Feb 5, 2003
I'm very pleased with my (Pascal) Titan X. I upgraded from a GTX 980, mostly for VR (Vive) and to be able to play Elite Dangerous at VR Ultra with supersampling set to 2.0, which it now handles so rock solid that I just leave SS at 2.0 for everything in VR. With E:D in VR there's occasional chugging, but it seems to happen only during loading screens (i.e., as a result of file I/O), so it looks like a real chipset issue where I/O can steal damaging amounts of bandwidth from something (the PCIe bus?) rather than the CPU or GPU being maxed out.
I have an Acer X34 monitor (3440x1440), and my PC runs an i7-6700K at stock speed. I keep thinking about blowing another $1,200 to go SLI just because "moar is moar," and the thought of Titans in SLI gives me a nerd boner, but honestly it seems like I'd see no noticeable benefit.
 

RedJaron

Splendid
Moderator
Just because a CPU can be clocked higher doesn't mean it performs better. Improvements in IPC and efficiency more often than not make up for the lower clock speeds.
 

somebodyspecial

Honorable
Sep 20, 2012
Skipped the article after looking at the contents. Still waiting on PRO benchmarks for this card to see how it does vs. other top cards in things like Adobe apps (CUDA vs. AMD's OpenCL, or whatever works best for them), 3ds Max, Maya, Blender, etc. These cards aren't even aimed at gamers, so wake me when you test what they were designed to do (pro work on the cheap), so content creators have some idea of their true value as cheap alternatives to $5,000 Nvidia cards (P6000, etc.).
 
+ Filippo L. Scognamiglio Pasini
Since you noticed two Titan XPs achieving only 50% utilization when paired with the i7-6700K, which overclocked CPU would diminish or eliminate that condition? The i7-5820K (the CPU I own) or an i7-5960X? Any other recommendations would be appreciated. Thank you for the great article.
 

ledhead11

Reputable
Oct 10, 2014
@rcald2000

I didn't want to reply directly to your question in case Filippo does answer you.

Our CPUs are very similar. The most notable differences: the 5820K (15MB L3 cache, 28 PCIe lanes) vs. the 4930K (12MB L3 cache, 40 PCIe lanes). I know 1080s aren't Titans, but they're not that far off. My CPU has yet to go over 60% with both GPUs holding over 95% usage at 4K (60-80 FPS, ultra, V-sync on, AA minimal or off). It mostly averages 40%. This is with ROTTR, The Witcher 3, GTA V, and DOOM. The most obscene thing I've seen is how ROTTR loves to eat all the RAM it can get, wherever it can get it.

The Firestrike physics test pegs it at 100%, along with HD video conversions (including software that uses hardware acceleration).

That being said, if you're really going all in for two of these, I'd recommend first seeing how your 5820K handles it. My personal experience going from quads to hexes is that the lower hex clocks are more than compensated for by the larger L3 caches and extra PCIe lanes, versus a quad trying to hit 5GHz. I'm actually surprised how many reviewers still insist on using quads for these kinds of tests. Just look at the monsters in the 3DMark hall of fame: most use 6-, 8-, or 10-core chips with their Titan SLI setups.

When I moved my old 970s from a 2600K (OC'd to 4.20GHz) to this 4930K (then at 3.8GHz), I saw an average 10-20 FPS increase at 4K, using the same memory and hard drives.

If people are going to try to use a quad for 4K/5K 60+Hz or 1440p 200Hz Titan SLI, or the mythical 1080 Ti SLI (January 2017?), then the quad will likely need to run at ~5GHz or higher (I'm thinking closer to 6). I'm pretty certain a hex or bigger will sit happy around 4-5GHz for this. Mine's at 4.10GHz because I still use air cooling, I simply don't need more, and it's 24/7 stable.

The other thing you may need if you take the leap: a bigger PSU. 850W can do it, but a general recommendation I've heard for optimal PSU efficiency is to double the wattage you need. Two Titans and that CPU are going to push real close to 500-600W under full load. I've watched mine, and it averages 450-550W as is (but I've also got two RAID arrays and three or four other drives, and that's including the display as well).
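To make the "double your load" rule of thumb concrete, here's a minimal sketch in Python. The component draws are illustrative guesses (not measured figures), and the helper name is made up for this example:

```python
# Rough PSU sizing sketch using the "double the wattage you need" rule of
# thumb, so the PSU runs near the middle of its efficiency curve.

def recommended_psu_watts(component_draws, headroom_factor=2.0):
    """Sum estimated component draw (watts) and apply a headroom factor."""
    return sum(component_draws.values()) * headroom_factor

# Illustrative guesses for a Titan X SLI build, not measured numbers:
load = {
    "titan_x_sli": 2 * 250,   # two ~250 W cards
    "cpu": 140,               # overclocked HEDT chip, rough estimate
    "drives_fans_board": 80,  # everything else, rough estimate
}
print(recommended_psu_watts(load))  # 1440.0 -> shop in the 1200-1500 W range
```

By that math, an 850W unit would run well above its sweet spot under full load, which matches the recommendation above.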
 
+ledhead11 Thanks for the detailed response.

ROTTR: I definitely have my eye on this game. One thing I really respect is how the developer/publisher listed accurate recommended system specs, and I appreciate that they listed two sets: one for 1080p and one for 1440p. I like titles that are highly optimized yet push hardware to its limits. How much RAM have you seen it consume?

Your recommendation: It honestly didn't occur to me to wait and see how my i7-5820K handles the two GPUs before deciding whether an upgrade is warranted. I'll definitely wait and see. I was also under the impression that a four-core, high-clock CPU was a better match for dual high-end GPUs than a six-core, lower-clock CPU. I hope your conclusion is correct, because I really like the extreme processors and I'd like to keep using them.

Bigger PSU: Yes, 50% of total available power is normally the most efficient part of the curve. Ideally a 1,000W Titanium (115V variant) would be a great option for two 250W GPUs in SLI with a top-tier CPU. I've definitely considered upgrading to either an EVGA 1000 T2 or a Seasonic Prime, although the latter doesn't yet come in a 1,000W version. Honestly, you'll never recoup the cost of a Titanium-efficiency PSU over a Gold one; it's more of a pride thing.
 

mechan

Distinguished
Jan 25, 2009
Well, first of all, it really is a 1440p issue. At 5K (and likely 4K, for that matter) you'll see the bottleneck shift to the GPU.

Now, there literally is no reason to push over 150 FPS at 1440p (the most capable displays handle 144Hz but no more), so I think the question you're asking is somewhat academic in nature ...

... but entertaining the academic question: if GPU utilization is 50% and the bottleneck is the CPU, you're looking at a 100% overclock (per queuing network theory) to shift the bottleneck to the GPUs. Needless to say, a 100% CPU overclock (let alone a stable one) is essentially impossible in any conventional environment.
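The arithmetic behind that claim can be sketched in a few lines of Python (the function name is made up for illustration; the only assumption is that deliverable frame throughput scales linearly with CPU frequency):

```python
# If the CPU is the bottleneck and the GPUs sit at some utilization u,
# the CPU must feed work 1/u times faster to push the GPUs to 100%.

def required_cpu_overclock(gpu_utilization: float) -> float:
    """Fractional CPU overclock needed to lift GPU utilization to 100%,
    assuming throughput scales linearly with CPU frequency."""
    if not 0.0 < gpu_utilization <= 1.0:
        raise ValueError("utilization must be in (0, 1]")
    return 1.0 / gpu_utilization - 1.0

print(required_cpu_overclock(0.50))  # 1.0 -> a +100% overclock
print(required_cpu_overclock(0.80))  # 0.25 -> a +25% overclock
```

At 50% GPU utilization the required overclock is 100%, which is why the bottleneck can't realistically be shifted by overclocking alone.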

- Filippo
 

mechan

Distinguished
Jan 25, 2009
Increased PCIe lanes make essentially zero difference (OK, maybe 1-2% at most). Proof:
http://www.tomshardware.com/reviews/graphics-performance-myths-debunked,3739-3.html

The real difference is the clock speed and, to an extent, the cache size.

The monsters in the OVERALL 3DMark hall of fame do indeed use 6-, 8-, and 10-core CPUs, but if you take out the CPU tests and look specifically at graphics scores, the picture changes a lot.



The answer is actually simpler than that, as CPUs and GPUs follow Little's Law and queuing network theory. If GPU utilization is at 50% with the CPU as the bottleneck, then the overclock needed to shift the bottleneck to the GPUs is precisely 100% (50% -> 100% utilization implies a doubling of the effective frequency, that is, a 100% overclock). As mentioned in another response, that is simply not achievable under normal circumstances.
 

heliomphalodon

Distinguished
Jan 20, 2007
I was keen to buy one of these, but I couldn't face the noise. I don't have the skills to mod a card for water cooling (not to mention voiding the warranty), so I went with a pair of Gigabyte 1080 water-cooled cards instead.
 

ledhead11

Reputable
Oct 10, 2014
In my case I'm pretty sure it was the cache sizes that made the most difference, but I really did gain 10-20 FPS going from my 2600K to the 4930K. I'll also emphasize that the 4930K has yet to go over 60% in gaming with 1080 SLI, while Filippo stated that the 6700K (4.2GHz) pegged 100%, leaving the Titans bottlenecked at 50%.

I completely agree with you on 144Hz, though. I honestly can't see much of a difference from 110 to 144. I only mentioned it because of the new craze the display manufacturers are obsessing over, and you know that no matter how ridiculous, there will be people trying to push it.
 

ledhead11

Reputable
Oct 10, 2014
585
0
5,160
71
Regarding the PCIe detail: I checked the link and, as usual on Tom's, it's a good read. That's why you're at the top of my bookmarks and one of the first things I read daily.

Beyond that, I'd like to see the same tests done with Titan X or 1080 SLI at 4K/5K/8K before really drawing a conclusion. Those 690s were awesome for their time, but they don't really compare to today's needs. 1080p and 1440p don't seem to demand much, but 4K/5K is a different story.
 

ledhead11

Reputable
Oct 10, 2014
On both my setups I've seen it eat all the VRAM. The 970s at 1080p will hold 3.5-4GB. The 1080s will hold 8GB at 4K and sometimes drop a little at 1440p, both running Ultra, AA minimal, V-sync (1080p/4K) or G-sync (1440p). System RAM use varies from 9GB up to ~12GB. The rig with the 970s only has 16GB, and I find it shocking to see so much used at 1080p.

Until this game I thought 16/32GB of system RAM was overkill for gaming. It's obvious now that some games at 4K will need more than 16GB on hand. I already knew 8GB of VRAM was going to be an issue for present and future 4K/5K, but I couldn't really afford two Titans and I got tired of waiting for a Ti. I'd been waiting since my 970s.

BTW, this is the only game I have that eats that much. GTA V is close, but still not as much. The Witcher 3 and Doom are much more reserved at 4K, with 3.5-6GB on average. At 1080p both hang around 3-4GB or sometimes less, but not normally maxed out like ROTTR.

I just remembered that I hadn't checked the numbers on my laptop (MSI GT80 2QE), which has 980M SLI (think a pair of 970s with lower-than-stock clocks and 8GB of VRAM). Mine's modified with two 850s in RAID 0 for games and 32GB of 2133MHz DDR3 G.Skill like my desktop.

For kicks, I maxed out every setting the game offers, AA included, at 1080p. FPS dropped to 20-30. The test scene was the Soviet Union campsite at night with full snow (in my experience one of the more demanding scenes to render, even more than some of the built-in benchmarks). It used 11,141MB of system RAM and 7,282MB of VRAM. No exaggeration, and this is 1080p/60Hz. Tried Doom: VRAM was low like I said before, but system RAM was almost the same as ROTTR.



 
You would need to isolate which cores are bottlenecking in The Witcher 3 @ 1440p to see whether you need more cores altogether or more clock speed per core to reduce the bottleneck. Many are mentioning 6- or 8-core CPUs, but I suspect the bottleneck lies with two CPU cores, not the need for more cores. I'm not saying that more cores clocked at an equal 4.2GHz wouldn't help, but I'd imagine a 7700K @ 5.2GHz would present much less of a CPU bottleneck than a 6950X @ 4.4GHz.
 

krr711

Distinguished
Jan 23, 2011
All the high-end graphics cards should have HBM2 instead of refreshed GDDR5X. At this price, the best should have been delivered, but instead we got something that lets the designer piece together a new product later by simply swapping in HBM2. These are excellent cards, but the technology exists to make them better.
 