News: Crazed modder straps CPU cooler to Nvidia GTX 960 with a 3D-printed bracket, breaks 3DMark benchmark record

The article said:
And TrashBench isn't done. "I think I'll have to try it on a 4080 next," they quipped at the end of the video. Engaging further on Reddit, TrashBench also agreed that a GPU with a more sophisticated cooling system (the 960 is from 2015, after all) might not yield such a drastic improvement, promising to try a more potent cooler on a 2070 Super. In fact, TrashBench is even considering more thorough benchmarking to weigh stock GPU coolers, comparing them directly with similar-sized CPU coolers for fairer testing
The only reason this worked is that the GTX 960 wasn't a very high-end card. As you go up the range, they start incorporating vapor chambers and coolers that are generally much better than CPU air coolers, or else how do you think air-cooled GPUs manage to burn more power and hold lower temps than CPUs???
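To put rough numbers on that (a quick back-of-envelope sketch; the operating points below are assumptions for illustration, not measurements), the figure that matters is effective thermal resistance, i.e. degrees of temperature rise per watt dissipated:

# Back-of-envelope thermal resistance comparison (all numbers are illustrative assumptions).
# R_thermal = (component temperature - case air temperature) / power dissipated

def thermal_resistance_c_per_w(temp_c, case_air_c, power_w):
    """Effective thermal resistance of the whole cooling solution, in C per watt."""
    return (temp_c - case_air_c) / power_w

case_air = 30  # assumed case air temperature in C

gpu = thermal_resistance_c_per_w(temp_c=70, case_air_c=case_air, power_w=300)  # assumed high-end GPU cooler
cpu = thermal_resistance_c_per_w(temp_c=85, case_air_c=case_air, power_w=150)  # assumed midrange tower cooler

print(f"GPU cooler: ~{gpu:.2f} C/W")  # ~0.13 C/W
print(f"CPU cooler: ~{cpu:.2f} C/W")  # ~0.37 C/W

A cooler that holds a higher-power part at a lower temperature has to have the lower C/W figure; that's all the claim above amounts to.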
 
I have a Scythe mini-Ninja mounted on my RX 6500XT, not to run cooler but to run more quietly. It works great, though that card isn't thermally demanding. I personally think there should be a new form factor that lets GPUs be mounted the way motherboards are, so they can take large tower coolers that work with normal case airflow, instead of forcing these high-power components into an add-in card where airflow is all over the place.

My attempt at a new case layout...
 
The only reason this worked is that the GTX 960 wasn't a very high-end card. As you go up the range, they start incorporating vapor chambers and coolers that are generally much better than CPU air coolers, or else how do you think air-cooled GPUs manage to burn more power and hold lower temps than CPUs???
By being that much louder. I used to run a similar setup on a GTX 980Ti, much cooler and quieter than the OEM one.
 
That is totally dumb and here is why: water cooling. Also, weight, and a bulky cooler versus a thin water block. Maybe I should become a journalist and write about how I stubbed my toe and made makeshift use of duct tape as a Band-Aid.
 
I have a Scythe mini-Ninja mounted on my RX 6500XT, not to run cooler but to run more quietly. It works great, though that card isn't thermally demanding. I personally think there should be a new form factor that lets GPUs be mounted the way motherboards are, so they can take large tower coolers that work with normal case airflow, instead of forcing these high-power components into an add-in card where airflow is all over the place.

My attempt at a new case layout...
A few things I've wanted to see.

When MXM was still a thing, it would have been nice to see it become common on desktop motherboards. That would fit your solution nicely and get the low-power, highly binned GPUs into the hands of Micro ATX builders.

The other major one I wanted to see was a backside PCIe slot for the GPU, plus a right-angle connector: the GPU effectively sits on the back of the motherboard, leaving the front side's remaining PCIe slots fully available. They already make plenty of dual-chamber chassis that put the PSU behind the tray. A version with only half-height slots on the front would be neat as well, to keep a normal-ish case width, and it solves that 3 3/4-slot GPU width problem. Why even have slots on a gaming motherboard if they are just covered up by the GPU?

And to add to that idea, CAMM2 memory would allow for a full coverage block for CPU/VRM/Memory with relative ease.
 
By being that much louder. I used to run a similar setup on a GTX 980Ti, much cooler and quieter than the OEM one.
I find that claim dubious. We don't see CPU air coolers that can cope with that amount of heat, except on huge workstation and server CPUs that have a large contact area. A GPU die is even smaller than a CPU IHS, and GPUs don't tolerate heat as well as CPUs do. Sure, you get some benefit by going direct-die, but I think any CPU air cooler would quickly heat-soak and leave the GPU thermal-throttling.
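Part of the concern is heat flux density rather than total watts; here's a rough sketch with assumed round numbers (the die and IHS areas are illustrative, not specific to any one chip):

# Rough heat-flux comparison at the cooler's contact patch (assumed round numbers).
def heat_flux_w_per_mm2(power_w, contact_area_mm2):
    """Average heat flux across the contact area, in W per square millimetre."""
    return power_w / contact_area_mm2

gpu_die = heat_flux_w_per_mm2(power_w=275, contact_area_mm2=400)    # bare GPU die, assumed ~400 mm^2
cpu_ihs = heat_flux_w_per_mm2(power_w=150, contact_area_mm2=1100)   # CPU IHS, assumed ~32 x 34 mm

print(f"GPU die: ~{gpu_die:.2f} W/mm^2")  # ~0.69 W/mm^2
print(f"CPU IHS: ~{cpu_ihs:.2f} W/mm^2")  # ~0.14 W/mm^2

The cold plate spreads that heat out quickly, so it's only a first-order comparison, but it's why I'd expect an ordinary CPU cooler to struggle.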

TL;DR: pics or it didn't happen. Let's see some screen shots with temps and power figures, too.

I had an EVGA GTX 980 Ti FTW, which was a 275W card with 2 fans. I found it remarkably quieter than my previous 2-fan card that used 100W less. I doubt any CPU air cooler is going to generate much less noise than that, unless it maxes out way earlier.

So, in my experience, at least up to that power level, an air-cooled GPU doesn't have to be loud. It's probably just a question of whether the manufacturer is willing to spend the big bucks on a high-end cooling solution.
 
The other major one I wanted to see was a backside PCIe slot for the GPU, plus a right-angle connector: the GPU effectively sits on the back of the motherboard
They make PCIe cables, so you can place & orient your GPU however and wherever you want. IMO, that makes a lot more sense than manufacturers having to offer even more motherboard variants than they already do!

And to add to that idea, CAMM2 memory would allow for a full coverage block for CPU/VRM/Memory with relative ease.
I hate CAMM2 for desktop. It makes about as much sense as the M.2 form factor for SSDs did. The slots waste more board space and you can't effectively cool the chips on the back side.
 
They make PCIe cables, so you can place & orient your GPU however and wherever you want. IMO, that makes a lot more sense than manufacturers having to offer even more motherboard variants than they already do!


I hate CAMM2 for desktop. It makes about as much sense as the M.2 form factor for SSDs did. The slots waste more board space and you can't effectively cool the chips on the back side.
Sort of. PCIe cables do effectively that for the scenario I am talking about, but only with Mini-ITX wrapping around to the backside, and with no other PCIe slots. Now, if they designed a more routing-friendly PCIe extension, maybe a round optical cable, and left a pass-through hole in the board or something, that might be easier.

Not sure I see the argument against M.2. On a lot of boards they snuck it in behind or between where the normal x1 slots were sitting. For the boards that have four or five M.2 slots, I agree, but they came up with solutions there: expansion cards, DIMM.2, stacking. On smaller builds, having the M.2 on the board at least simplifies the wiring. What I don't like is having M.2 drives sitting under GPUs, which is one of the reasons keeping the GPU separate from the front side is an interesting idea. Then all the drives would be accessible. Heck, they could even be slotted vertically.

You've mentioned that CAMM2 flaw before; it isn't that big a deal. Memory isn't that warm anymore, CAMM2 claims to be more power-efficient as well, and doing what I am proposing would give more than enough cooling through the PCB. Or none at all would be fine for most use cases.

All of this would be for enthusiasts, which is getting to the point of being the entire industry. Besides, more options mean more niche products, which can allow more companies to stay in business.
 
Not sure I see the argument against M.2.
Then how come there are virtually no double-sided M.2 drives? It's a bad design, that's why! You will find no shortage of people pining for higher-capacity M.2 drives. The market demand is there, but apparently not if it means putting chips on both sides, and the form factor is a terrible waste of space when you can't even fill the top side with NAND, due to the controller and other components!

Not only that, but developers at my job who do builds on their i9 HX laptops are burning out their SSDs. NAND doesn't like high temperatures, and apparently the laptops we have don't cool the drives well enough for them to last to the end of their warranty period, even though they're single-sided. It just goes to show that thermals need to be a first-order concern when it comes to SSDs.

You've mentioned that CAMM2 flaw before; it isn't that big a deal. Memory isn't that warm anymore,
Incorrect. Heat is very much a live issue, for DDR5.

 
Double-sided M.2 drives exist; the 4TB SN850X is a quick-to-find example. Flash memory is a little different from DRAM in terms of temperature tolerance. It is cheaper to make drives single-sided; I think that is the main reason most are that way.

Thermal issues in laptops are not new. It would be great if they ran the heat pipes over to the SSDs, but that would be rather inconvenient for the consumer. It sounds more like they should be using external storage for their heavy-duty needs, or a full workstation. I get that not all companies can do that, but the option exists for such use cases.

DDR5-8000 running at 1.5 volts in their testing? Yes, that is a pretty extreme edge case. CAMM2 lets you reach higher speeds without extreme voltages, since it is intended as a laptop standard, or even more extreme speeds with higher voltages. The recent CES showed off 10000+ MT/s, on a naked board too, at the G.Skill booth.
 
Double-sided M.2 drives exist; the 4TB SN850X is a quick-to-find example. Flash memory is a little different from DRAM in terms of temperature tolerance. It is cheaper to make drives single-sided; I think that is the main reason most are that way.
It's not just for cost reasons. If you look at the power consumption of the PCIe 4.0 SN850X, it's fairly modest. That's why they could make it double-sided, while the higher-speed PCIe 5.0 drives cannot be.

[Attached charts: SSD power consumption comparison]


Thermal issues in laptops are not new. It would be great if they ran the heat pipes over to the SSDs, but that would be rather inconvenient for the consumer. It sounds more like they should be using external storage for their heavy-duty needs, or a full workstation. I get that not all companies can do that, but the option exists for such use cases.
The comment was to illustrate that heat is a serious issue for SSDs and not merely a matter of some performance throttling.

The recent CES showed off 10000+ MT/s, on a naked board too, at the G.Skill booth.
That module is listed here at 1.45V, and the demo you cite was on an open bench in an air-conditioned convention hall.

Like M.2, it's a waste of board space and yet another cooling liability. Maybe not too much of a problem at DDR5-10000, but what about when we want DDR5-12000, in a couple years? It's the same kind of short-sighted thinking that gave us the M.2 standard. Horrible idea.

I will acknowledge that the CAMM approach at least addresses a technical problem, namely overcoming the limitations of DIMM sockets. M.2 was purely about making SSDs more suitable for thin & light laptops, and that's where it should've stayed. It's a shame the U.2 standard never caught on for desktop PCs.
 
That module is listed here at 1.45V
The test benchmarks were labeled as 1.5 volts; you can take that up with Tech Power Up. I saw the 1.45-volt specs for the kit, but according to their charts they also tested at 1.5 V at 5600, so I'm not really sure. Regardless, you can certainly get away with naked RAM at the lower voltages. And given a few years, what usually happens is a process node shrink that allows for the higher speeds at even lower voltages. I remember needing 1.65V for DDR3-1600 when it launched; that put me off high-voltage memory after that point.
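For a rough sense of why the voltage matters, dynamic power scales roughly with the square of the voltage at a fixed frequency (a simplification that ignores the frequency term; the voltages below are just the ones mentioned in this thread):

# First-order DRAM power scaling: switching power roughly proportional to V^2 at a fixed clock.
baseline_v = 1.1  # standard DDR5 VDD

for volts in (1.1, 1.45, 1.5, 1.65):
    relative = (volts / baseline_v) ** 2
    print(f"{volts:.2f} V -> ~{relative:.2f}x the switching power of {baseline_v} V")
# 1.45 V -> ~1.74x, 1.50 V -> ~1.86x, 1.65 V -> ~2.25x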
and the demo you cite was on an open bench in an air-conditioned convention hall.

Like M.2, it's a waste of board space and yet another cooling liability. Maybe not too much of a problem at DDR5-10000, but what about when we want DDR5-12000, in a couple years? It's the same kind of short-sighted thinking that gave us the M.2 standard. Horrible idea.

I will acknowledge that the CAMM approach at least addresses a technical problem, namely overcoming the limitations of DIMM sockets. M.2 was purely about making SSDs more suitable for thin & light laptops, and that's where it should've stayed. It's a shame the U.2 standard never caught on for desktop PCs.
Granted, it was a demo on an open-air bench. But that was also 10000, which is not what I expect the typical system to use. 1.1-volt 6400 looks just fine to me, and that was last year's achievement.

I think my Z490 board has some U.2 ports, but yeah, I never got around to buying the drives. That would fill your needs well. But nothing is really stopping you from picking up some used enterprise drives and doing it directly; if I recall, there are M.2-to-U.2 adapters, and add-in cards of course.

I don't see why they couldn't achieve higher speeds if they wanted to, and DDR6 will show up eventually; it's not like we have cross-generational support anyway. I'm already surprised at seeing 10k. If I recall, the original DDR5 spec was supposed to top out around 8400, but they are always conservative on that.
 
The test benchmarks were labeled as 1.5 volts; you can take that up with Tech Power Up.
I was talking about the G.Skill CAMM2 that you cited. It's spec'd at 1.45V, which is a lot higher than what regular DDR5 DIMMs run at.

And given a few years, what usually happens is a process node shrink that allows for the higher speeds at even lower voltages.
I think DRAM is starting to hit a scaling wall. We'll see what happens with that. Might not be until DDR6 that voltages get any lower, and that will probably use PAM3 or PAM4.

I think my Z490 board has some U.2 ports, but yeah, I never got around to buying the drives. That would fill your needs well. But nothing is really stopping you from picking up some used enterprise drives and doing it directly; if I recall, there are M.2-to-U.2 adapters, and add-in cards of course.
I have two U.2 drives and they're extra expensive, because they have lots of datacenter features that normal people don't need. They also idle hot. Lastly, the cables are very expensive, since they're not a commodity item, like SATA cables. Luckily, I only needed PCIe 4.0.

The availability of enterprise U.2 drives is not a proper substitute for having a viable selection of consumer models. And we don't even know how long that form factor will stick around, since servers have now moved on to the E1 and E3 form factors.


If I recall, the original DDR5 spec was supposed to top out around 8400, but they are always conservative on that.
I wonder if that CAMM2 module uses a CKD, like CUDIMMs.

Imagine if having all of these cards using up so much space on our motherboards ushers in a revival of the EATX form factor!
 
I wonder if the first GPU water block wasn't actually a CPU waterblock that someone adapted for the purpose.
My first attempts at water cooling were just that. I took a copper heatsink and a sheet of copper bent into a square U shape with in/out ports on the legs of the U, then soldered it to the heatsink to make a copper box with fins inside. It also became the first component I destroyed: I cracked the corner of the die on my P 550e when reassembling it.
 
Great thread. I never knew EVGA made a 2-fan 275-watt 980. And the Vega with 16 GB of VRAM: is that a server part? I've never heard of that. Neat stuff.
And I followed the DDR5 RAM thing bit user posted, but all the charts have the temps in the 60°C range. Obviously we can't compare those temps to CPU or GPU temps, but what am I missing? Is DDR5 RAM less resistant to heat even at these lower temps?
 
Great thread. I never knew EVGA made a 2-fan 275-watt 980.
GTX 980 Ti. The factory spec on that GPU was 250 W, but I think they even made a Kingpin edition that was 300 W.

Here's one that's similar to mine, not sure if it's the exact one. When I bought mine, there were literally about a dozen different SKUs that were branded "EVGA GTX 980 Ti FTW" - it took me a while to figure out exactly which one I wanted!

Someone in the comments of that article helpfully posted this model comparison, which confirms the card mentioned was 275W:

[Attached image: EVGA GTX 980 Ti model comparison chart]


And the Vega with 16 GB of VRAM: is that a server part? I've never heard of that. Neat stuff.
There were two, actually. There was the Frontier Edition, which was sort of a Pro-level card, but had the same GPU as Vega 64.

Then, there was Radeon VII, which was basically a 7 nm die-shrink of Vega and had a full quad-stack HBM config w/ 1 TB/s of bandwidth.
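The 1 TB/s figure falls straight out of the bus arithmetic; a quick sketch:

# Radeon VII memory bandwidth from the bus width and per-pin rate.
stacks = 4                       # four HBM2 stacks
bits_per_stack = 1024            # HBM2 interface width per stack
pin_rate_gbps = 2.0              # Gbit/s per pin

bus_width = stacks * bits_per_stack              # 4096-bit bus
bandwidth_gb_s = bus_width * pin_rate_gbps / 8   # bits to bytes

print(f"{bus_width}-bit bus at {pin_rate_gbps} Gbps/pin = {bandwidth_gb_s:.0f} GB/s")  # 1024 GB/s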

And I followed the DDR5 RAM thing bit user posted, but all the charts have the temps in the 60°C range. Obviously we can't compare those temps to CPU or GPU temps, but what am I missing?
The author found that reduced CAS latency made the DRAM very sensitive to temperature. The only way it ran stably was with active cooling.

[Attached chart: stability vs. temperature at reduced CAS latency]


The author also found that increasing the time between refreshes (tREFI) was temperature-sensitive.

[Attached chart: stability vs. temperature at increased tREFI]
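To put those two knobs in wall-clock terms (a quick sketch assuming a DDR5-6400 kit; the exact settings in the charts above differ):

# Convert DDR5 timing settings into real time (illustrative DDR5-6400 example).
transfer_rate = 6400             # MT/s
clock_mhz = transfer_rate / 2    # DDR: the I/O clock is half the transfer rate
clock_ns = 1000 / clock_mhz      # one clock period in nanoseconds

cas_latency = 32                 # example CL setting
trefi_clocks = 65535             # example maxed-out refresh interval setting

print(f"CL{cas_latency} = {cas_latency * clock_ns:.1f} ns")                        # 10.0 ns
print(f"tREFI = {trefi_clocks * clock_ns / 1000:.1f} us between refresh commands")  # ~20.5 us

Tightening CL shaves nanoseconds off every access, and stretching tREFI leaves each cell longer between refreshes, which is exactly the kind of thing that gets worse as the DRAM warms up.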


Is DDR5 RAM less resistant to heat even at these lower temps?
That seems to be the gist of it, especially when you try to squeeze more performance out of it.
 