Question: Display Limits - CPU or GPU issue?

Hello,

I have a setup with 11 displays in total; all are 1920×1080 except for a couple at 1920×1200 and one 4K TV. The displays are connected to four GTX 1080 cards, and the CPU is an Intel Core i7-7700; I believe both the cards and the CPU are slightly overclocked. The issue is that we run into major framerate problems when we use all of the screens (flight simulation), which seems understandable to me, since I don't think our CPU is capable of supporting so many displays. The CPU's spec sheet says it supports a maximum of 3 displays, yet it definitely runs 4-5 just fine. My thinking is that the number of displays on that CPU is bottlenecking the system's performance. So my questions are: 1. Do you think I'm right about the CPU? 2. If so, would an Intel Core i9-9920X or something similar and newer be a better fit? 3. Or is this actually a GPU issue?

Thanks!
 
OF COURSE it doesn't; the Intel spec is THREE maximum. You need one DMA engine per display that needs to be refreshed, since each display gets its own frame buffer that has to be streamed out at a rate matching that individual display. If you have two displays on DP/MST, then DP consumes two of the IGP's three DMA engines, leaving only one more for either the laptop's internal display, HDMI/DVI, or a third DP stream.
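
A toy model of that accounting, purely illustrative (the three-engine figure is from this post; the display list and the one-engine-per-stream cost are assumptions):

```python
# ASSUMPTION: three display DMA engines on the IGP, one consumed per
# active stream, including each DP/MST stream. Names are illustrative.
IGP_DMA_ENGINES = 3

active_streams = ["DP/MST monitor 1", "DP/MST monitor 2", "internal eDP"]

used = len(active_streams)
print(f"DMA engines used: {used}/{IGP_DMA_ENGINES}")
if used >= IGP_DMA_ENGINES:
    print("No engine left for another HDMI/DVI output or DP stream.")
```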

If you want more displays, then you need something like Matrox's Dual/TripleHead2Go adapter, which takes two or three displays and makes them look like one larger monitor to the host PC.
Key aspect is the docking station. We're driving 2 displays directly from the docking station via USB-C, so it is not using any hardware from the laptop itself, only bandwidth from USB-C + iGPU. Now, there is an interesting footnote in the Gen9 programmers guide:

Programming Note (Context: Gen9)
Display Resolution Support: "Do not use more than 60% of raw system memory bandwidth for display."

So each display it can theoretically support is still hard-constrained by the iGPU's memory bandwidth and by the clock generators for each protocol.

All this to say: the Intel iGPU can't really drive fancy display configurations, and it's restricted not only by the GPU itself but by a lot of other factors as well.

Cheers!

PS: https://01.org/sites/default/files/documentation/intel-gfx-prm-osrc-skl-vol12-display.pdf
 
Unless the docking station has a USB-based frame buffer adapter with local memory, it is using DP alt-mode to drive the external displays and DOES consume the IGP's DMA engines for however many additional monitors are attached to the dock.

As I wrote earlier, 4k60 is only ~1.5GB/s. Three 4k60 displays would consume less than 20% of dual-channel DDR3-1600's bandwidth, or about 40% of single-channel; still well within Intel's 60% maximum guideline. Even dual-channel DDR2-800 would meet Intel's guideline for triple 4k60. The memory bandwidth requirement is borderline trivial for any platform new enough to actually support 4k60 output; you would have to sabotage your memory bandwidth to a ridiculous extent (single-channel at less than half the rated speed of the slowest memory you can get, assuming your motherboard even lets you clock memory that low) to make it an issue.
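
For anyone who wants to check the arithmetic, a quick sketch (assuming 24-bit/3-byte scanout per pixel and the standard theoretical peak figures for each memory configuration):

```python
# Display scanout bandwidth vs. memory bandwidth, assuming 3 bytes/pixel.
def scanout_gb_s(width, height, hz, bytes_per_px=3):
    return width * height * hz * bytes_per_px / 1e9

three_4k60 = 3 * scanout_gb_s(3840, 2160, 60)  # ~4.5 GB/s (one is ~1.5 GB/s)

# Theoretical peak bandwidth in GB/s: 12.8 per DDR3-1600 channel,
# 6.4 per DDR2-800 channel.
for name, bw in [("dual-channel DDR3-1600", 25.6),
                 ("single-channel DDR3-1600", 12.8),
                 ("dual-channel DDR2-800", 12.8)]:
    print(f"{name}: {three_4k60 / bw:.0%} of peak (Intel guideline: <60%)")
```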

The number of clock generators is a non-issue: eDP, HDMI/DVI and DP each have their own and all displays on DP/MST share the same clock.
 
The problem here is he is driving 11 displays, one of which is 4K, across 4 graphics cards, and expecting fluid framerates from professional flight-simulation software with dynamic content.

He's not understanding that more displays means more pixels. More pixels mean more math, and any system is constrained by the amount of math it can do or the amount of memory it can push across a bus.

It's a big old duh that it runs faster with fewer displays: at a third of the displays you are doing a third of the math. But even one 4K display is enough to bring a 2080 Ti to its knees if the dynamics and shaders are complicated enough. Dynamic cloud generation is extremely shader-heavy: not only is generating the shape a complex math problem, but the transparency of intersecting polygons depends on how much cloud must be passed through.
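
To put rough numbers on that, a back-of-envelope sketch; the exact panel split is an assumption based on the first post ("a couple" of 1920×1200 panels is read as two):

```python
# Total pixels on the OP's wall, ASSUMING 8x 1920x1080 + 2x 1920x1200
# + 1x 3840x2160 (the exact split of "a couple" 1920x1200 is a guess).
panels = [(1920, 1080)] * 8 + [(1920, 1200)] * 2 + [(3840, 2160)]
total = sum(w * h for w, h in panels)
print(f"{total / 1e6:.1f} million pixels in total")       # ~29.5
print(f"{total / (1920 * 1080):.1f}x one 1080p display")  # ~14.2x the shading work
```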

If Lockheed Martin has setups like this then he needs to contact Lockheed Martin for proper configuration information. There is zip we can do for him.
 
You can do more than "zip", because I do have a question. When I looked at Task Manager while running the Prepar3D software, to the right of the GPU section for Prepar3D it said GPU-1, and for some other random process it said GPU-3. Is that specifying which GPU is being used by that program? If all 4 were actually being used, would it list all of them (e.g. GPU-1,2,3,4)? I am trying to get Lockheed to work with me again... but I need to know this first.

Thanks!
 
That would be a reasonable assumption. GPU-Z will also show you the utilization and memory use for each card.
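
If you'd rather script it than watch GPU-Z, a small sketch using NVIDIA's NVML Python bindings (pip install nvidia-ml-py; assumes the NVIDIA driver is installed) can print the load on each card while the sim runs:

```python
# Print per-GPU core utilization and VRAM use across all NVIDIA cards.
import pynvml

pynvml.nvmlInit()
try:
    for i in range(pynvml.nvmlDeviceGetCount()):
        handle = pynvml.nvmlDeviceGetHandleByIndex(i)
        util = pynvml.nvmlDeviceGetUtilizationRates(handle)
        mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
        print(f"GPU {i}: {util.gpu}% core, "
              f"{mem.used / 2**30:.1f}/{mem.total / 2**30:.1f} GiB VRAM")
finally:
    pynvml.nvmlShutdown()
```

Run it while Prepar3D is loaded; if three of the four cards sit near 0% core load, the software really is leaning on one GPU.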

 
Thanks, would you by chance know why it would mainly run through the 1st GPU even though the displays are physically plugged into their own separate GPUs? Our setup is not SLI'd, and now I'm just curious how this works. It doesn't make sense to me.
 
This is a long and complicated story that delves deep into the tech. Not to be insulting, but for someone as new to performance graphics as you, a lot of it would fly over your head. It takes time to understand the complexities of SLI, CrossFire, mGPU, and nView spanning: how they work and how they differ.

But that would do you little good in solving your problem. The software has to support it, and the system has to be set up with the proper settings. The software may well support it, but if you don't have it configured correctly, or don't have the 4-way bridge, you won't see the performance you need. And to be honest, I would not try to drive more than one monitor with one graphics card if that monitor is 4K. A single 4K display takes roughly 4x the horsepower of a standard 1080p display when run at native resolution.
 
Okay, to be fair though, disconnecting the 4K display has about the same impact on my FPS as disconnecting a regular 1080p display. Not that that really means much for my setup, given that it is already laughing at me for being configured so poorly.
 
How is this used? Is it a wall of screens using a single source, stretched across all the screens, with only 1 user input at any given time? Or is it 11 different scenarios with up to 11 different users all playing and inputting? That would put a large hit on the FPS, as each individual screen would basically be its own game; the frames rendered would be different per monitor.
 
Welp, I've already said everything that's actually helpful, but I'll try to summarize:
  • The multitude of GPUs you have inside the PC can be used, AS LONG AS two things happen: (1) the driver actually exposes them, individually or as a single entity, and (2) the game/software understands how to deal with them, either as a bundle or individually.
  • The software needs to support the resolutions and refresh rates of the screens as exposed via the driver, while the driver sorts out everything else. This is to say: you may have all monitors working at native resolution in desktop mode, but in the game they might be getting a lower resolution and refresh rate for compatibility (a quick way to check what the driver currently exposes is sketched below).
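
As a minimal sketch of that second point, this dumps the display mode the driver currently exposes for each adapter on Windows, using the third-party wmi package (pip install wmi); compare the values against what the sim reports in-game:

```python
# List each video controller and its current desktop mode via WMI
# (Windows only; fields may be None for inactive adapters).
import wmi

for gpu in wmi.WMI().Win32_VideoController():
    print(gpu.Name,
          gpu.CurrentHorizontalResolution,
          gpu.CurrentVerticalResolution,
          gpu.CurrentRefreshRate)
```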

Let us know if the fellas making the software answer.

Cheers!
 
Yeah, I appreciate all your help. Agreed, there's not really much else to say about this issue, though. I'm pretty sure my GPUs are not being used to their potential and that the load is primarily being put on one of them, but \-_-/... Hopefully they provide me with some assistance. If I get any news I will give an update.
 