Play in 4K: Building a Liquid-Cooled, Dual-GPU Gaming Rig

Is this an April Fools' joke? Dual GPUs are dead. I can't believe I'm saying that, but you're better off just getting a 2080 Ti at this point. Be ready to wait a year for decent driver support for CrossFire or SLI... if you get any at all.

Anyway, a really pointless article.
 
No. Ryzen 3000 series has demonstrably higher IPC than this chip, but the Intel chip can have higher performance due to higher clocks.
It depends. IPC varies a lot with what an application demands.

AMD 3rd gen does have superior IPC to Intel 9th gen in programs like Cinebench. Put a 3700X against a 9900K, both at 4 GHz, and the 3700X will win Cinebench.

However, a 9900K @ 4 GHz will still outperform a 3700X @ 4 GHz in games, since Intel's gaming IPC is still superior.
 

rugupiruvu

Honorable
Apr 4, 2019
Why did the 2080 Ti inside "Silent But Deadly" get such bad results in 3DMark? Worse than the "Console Killer" with an RTX 2080.

"Silent But Deadly": Time Spy 9775, Port Royal 5967, Fire Strike Ultra 6801, Fire Strike 22503
My 8600K + 2080: Time Spy 10368, Port Royal 6690, Fire Strike Ultra 6548, Fire Strike 22392
 

Blitz Hacker

Distinguished
Jul 17, 2015
It depends. IPC varies a lot with what an application demands.

AMD 3rd gen does have superior IPC to Intel 9th gen in programs like Cinebench. Put a 3700X against a 9900K, both at 4 GHz, and the 3700X will win Cinebench.

However, a 9900K @ 4 GHz will still outperform a 3700X @ 4 GHz in games, since Intel's gaming IPC is still superior.
What you're saying makes no sense at all. Instructions per cycle (IPC) is an averaged measurement; it's a property of a CPU architecture. It measures the amount of work done per clock cycle, and it doesn't change with core count or clock speed. If anything, it's the base multiplier in the performance equation: useful work per cycle times cycles per second. Speeding up the clock doesn't make the CPU do more work per cycle; it just makes it complete more cycles in the same time. So saying Intel's "gaming IPC" is superior is simply wrong. Ryzen 3rd gen processors have been measured to have marginally higher IPC than Intel's current chips.

As you stated, with both running at the same core speed the AMD will win, which is the result of higher/better IPC. There is no separate "gaming" IPC. 9900K(s) will run games better currently, with a lower IPC and a much higher clock speed making up for the IPC deficit.
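The point about clock making up for an IPC deficit comes down to simple arithmetic: per-core throughput is roughly IPC times clock. A minimal sketch with purely hypothetical numbers (none of these are measured figures for either chip):

```python
# Per-core throughput is roughly IPC (work per cycle) times clock (cycles/s).
# All numbers below are hypothetical, for illustration only.

def relative_perf(ipc: float, clock_ghz: float) -> float:
    """Per-core performance in arbitrary units: IPC * clock."""
    return ipc * clock_ghz

# Suppose (hypothetically) the Ryzen part has ~5% higher IPC in a workload,
# while the Intel part sustains a ~9% higher clock.
ryzen = relative_perf(ipc=1.05, clock_ghz=4.4)
intel = relative_perf(ipc=1.00, clock_ghz=5.0)

# A sufficiently higher clock can outweigh an IPC deficit:
print(intel > ryzen)  # True
```

This is why "higher IPC" and "faster in games" aren't contradictory claims: they multiply against different clock speeds.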
 

Blitz Hacker

Distinguished
Jul 17, 2015
Cinebench R15: 3700X @ 4 GHz beats 9900K @ 4 GHz
View: https://youtu.be/RmxkpTtwx1k?t=191

Most games: 3700X @ 4 GHz loses to i9-9900K @ 4 GHz
View: https://youtu.be/RmxkpTtwx1k?t=288


Then how do you explain this?

IPC is far from static.
Could simply be game optimizations, microcode, different instruction sets, or the different motherboard architectures. The linked video is oddly "measuring IPC" but then showing FPS, presumably on the assumption that higher FPS directly reflects the processor's IPC. It would have been a much better test to run a CPU benchmark with all but a single core disabled (presumably the best core left active) on both chips, at a locked core speed.

This may also have something to do with the added inefficiency of the Infinity Fabric interconnect between the chiplets. But from my understanding the latest AMD chips do have higher IPC than the current-generation Intel. I'm by no means an AMD fanboy; that was just my understanding from the benchmarks on the chips. The Intel stack has always outperformed the AMD stack in FPS in current-gen PC games (albeit by a small margin, usually 2-10 fps), but I always assumed that was a result of core clock.
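The controlled test described above (a single active core, locked clocks, a CPU-bound benchmark instead of FPS) could be sketched roughly like this. `fixed_workload` and the core number are illustrative assumptions, the affinity call is Linux-only, and Python interpreter overhead means this only approximates relative IPC:

```python
import os
import time

def fixed_workload(n: int = 2_000_000) -> int:
    """A deterministic integer workload, identical on both CPUs under test."""
    total = 0
    for i in range(n):
        total += i * i % 7
    return total

def time_on_one_core(core: int = 0) -> float:
    """Pin this process to one core (Linux-only) and time the workload.

    With both CPUs locked to the same frequency, the runtime ratio
    approximates the inverse of their IPC ratio for this workload.
    """
    if hasattr(os, "sched_setaffinity"):          # not available on all OSes
        os.sched_setaffinity(0, {core})           # restrict scheduling to one core
    start = time.perf_counter()
    fixed_workload()
    return time.perf_counter() - start

print(f"elapsed: {time_on_one_core():.3f} s")
```

Run the same script on both machines at the same locked clock; the chip that finishes faster is doing more work per cycle on this particular workload.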
 

Blitz Hacker

Distinguished
Jul 17, 2015
Dual GPU is dead, and a total waste of money.
I almost wish they would sell "fake" GPU cards, because I really miss the aesthetic of an SLI system.

Nvidia promised to bring back SLI, then started using an NVLink bridge for its SLI implementations and locked SLI out of all but the top-tier cards. People started using SLI less (because most of those who usually would have, couldn't), and now game developers don't support a time-intensive feature that almost no one is using.
AMD said they didn't need NVLink's bridge bandwidth because they have PCIe 4, then completely dropped CrossFire support across the board.

Nvidia might take another swing at SLI over NVLink in 2020 or so, or this might be the end of it. At the moment the gains from SLI are situational at best, and dual or quad GPUs make zero sense outside of a scientific or rendering workload that can leverage the GPU cores directly or through CUDA.
 

kinggremlin

Distinguished
Jul 14, 2009
What you're saying makes no sense at all. Instructions per cycle (IPC) is an averaged measurement; it's a property of a CPU architecture. It measures the amount of work done per clock cycle, and it doesn't change with core count or clock speed.

IPC is not a singular number for any CPU. It varies greatly depending on the code being run. So, it is certainly possible for a CPU to have a higher IPC in some applications vs the competition while having a lower one in other applications. That said, current gen Ryzen on average has a higher IPC than Coffee Lake. What the original quote that started all of this should have said was that the 9900KS has superior per core performance due to its vastly higher clock speed. It also has superior single threaded performance for the same reason.
 

trashpandacoder

Commendable
Jan 13, 2018
The article is well crafted but a little late: dual GPUs are out of favor, given that progressively fewer game and hardware vendors support them. Better to save the cost in power and GPU cards and buy just one high-performing card. I figure this "sync" issue is due to the long production time of an article this detailed, so, unlike some armchair builders and critics, I give the authors a pass on the dual-GPU choice. The effort here is notable and I hope to see more like it.
 

msroadkill612

Distinguished
Jan 31, 2009
Meh, there is a world other than gaming, guys.

I could see multi-GPU being very big in AI training/inference and in rendering.

It wouldn't be such a gimmick article if they had given two PCIe 4 cards (RX 5700 XT) a trot in an AMD system, even an AM4 one.

Having two dGPUs each with a 16 GB/s PCIe interface hasn't been possible before. A Threadripper could have two 32 GB/s dGPUs.

It's a new paradigm to have such a powerful dGPU linked to other system resources, like 55 GB/s+ RAM and 5.5 GB/s PCIe 4 NVMe.
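Those link-bandwidth figures can be sanity-checked from the published PCIe per-lane signalling rates; this is a back-of-envelope sketch from the spec numbers, not vendor data:

```python
# Back-of-envelope PCIe bandwidth from the published per-lane signalling
# rates and the 128b/130b line encoding used since PCIe 3.0.

def pcie_bandwidth_gbs(gen: int, lanes: int = 16) -> float:
    """Approximate one-direction bandwidth in GB/s for a PCIe link."""
    gt_per_s = {3: 8.0, 4: 16.0, 5: 32.0}[gen]    # giga-transfers/s per lane
    payload_fraction = 128 / 130                   # 128b/130b encoding overhead
    return gt_per_s * payload_fraction / 8 * lanes # bits -> bytes, all lanes

print(f"PCIe 3.0 x16: {pcie_bandwidth_gbs(3):.1f} GB/s")  # ~15.8
print(f"PCIe 4.0 x16: {pcie_bandwidth_gbs(4):.1f} GB/s")  # ~31.5
```

So a PCIe 4.0 x16 slot roughly doubles the ~16 GB/s of a PCIe 3.0 x16 slot in each direction, which is where the "2x 16 GB/s" vs "2x 32 GB/s" comparison above comes from.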
 

Lord Tyrion

Prominent
Mar 27, 2020
Far better to run a single GPU and, with the money saved, watercool the NVMe drives. The MP600 in particular does run hot, but with a water block I'm seeing max temps of 30 °C under extreme load.