Exclusive Threadripper Socket TR4 Schematics, Cooler Compatibility

Status
Not open for further replies.

FPS-fan

Commendable
Oct 30, 2016
16
0
1,520
1
I'm just a humble home computer builder and will never need one of these monsters. However, I saw the three letters LGA and shuddered. My experience with LGA hasn't been great: I bent socket pins in a build (thankfully only the once) and of course thought to myself: "How stupid am I!"

Hopefully the following link is viewable. Have a look at the bent pins in the image of a TR4 socket motherboard: http://www.pcworld.com/article/3197184/components-processors/amd-ryzen-threadripper-prices-specs-release-date-and-more.html

I'm glad I didn't do it! And perhaps I'm not as stupid as I originally thought.

For me personally, any future builds will use the AM4 socket, simply because I can't guarantee the robot-like precision that seems to be needed when installing the CPU.
 

PaulAlcorn

Senior Editor
Editor
Feb 24, 2015
680
27
5,010
0



Good eye! Those look familiar. I've killed boards that way before, one even recently. I can swap eight procs in for a single test cycle, so it's just the odds, I guess.
 

P1nky

Distinguished
May 26, 2009
27
0
18,530
0
"Threadripper CPUs have four die, and thus there are large disparate hotspots."

Only two dies are active, as confirmed by der8auer and AnandTech's Ian Cutress after speaking with AMD representatives.
 

FormatC

Distinguished
Apr 4, 2011
981
0
18,990
1
In the original I wrote not "four" but "few" (a small but important difference), because nothing has been officially confirmed yet. BTW: this delidded CPU was a very, very early sample, older than the first working ES.

But it is not clear which dies will work and which will not, and it might be totally different in each CPU. So all cooler vendors must plan for the worst case and size the cooling area for all four dies. :)
 

rwinches

Distinguished
Jun 29, 2006
888
0
19,060
30
Full Quote from Ian Cutress.

"Stories about TR having four dies. Confirmed two dies are disabled, and it's an ES - not final retail packaging. Could still end up 2"

So we have to see whether a production TR has the same setup. Disabling dies doesn't seem to make sense on a multi-die package.
 

PaulAlcorn

Senior Editor
Editor
Feb 24, 2015
680
27
5,010
0


AMD will have to keep four die emplacements in the package to ensure that pressure from the heatsink does not crush the IHS. If any die are missing, the IHS could bow in those areas.

These other two Zeppelin die are likely 'dummy' die to avoid this issue, so it's possible those die on his sample are not 'real' die at all. AMD will also likely keep solder across the entire package to ensure consistent thermal dissipation.
 

bit_user

Splendid
Herald
"It's striking that, in addition to the rectangular shape of the CPU recess in the socket frame, there is an asymmetrical arrangement of the mounting screw emplacements. On the left, we see a distance of 65.2mm and to the right only 46mm."
The two distances you quote are measuring different things. I think the holes themselves are symmetrical, and the diagram alternates which distance it labels on each side to keep from becoming too cluttered.
 

bit_user

Splendid
Herald

Most rack-mount servers do not use water cooling, yet > $1k CPUs are the norm.

I've never used water cooling, and I'm still reluctant to take the "plunge". Of course, I'm similarly reluctant to buy a $1k CPU, as the most I've so far spent on a CPU is about $315.

If Intel fixes the IHS TIM and optimizes the mesh in the i7-8800X or i7-8820X, I'm likely to go for it. I'm currently using a bit over 24 lanes of PCIe 3.0 in my workstation, so I think 28 lanes would suit me just fine. Threadripper is definitely more than I want to spend, plus I'm still limited by single-thread performance a lot more often than I could use > 12 threads.

Zen+ might win me back to AMD's side, but I just wish it exposed a few more PCIe lanes.
 

PaulAlcorn

Senior Editor
Editor
Feb 24, 2015
680
27
5,010
0


If you check out this post we just put up today, the video shows them mounting the cooler, and you can see the holes. The distances quoted in the article are from that same asymmetrical hole arrangement.

http://www.tomshardware.com/news/amd-threadripper-tr4-socket-installation,35110.html

 

g-unit1111

Titan
Moderator


There's a huge difference between a rackmount server and a gaming / enthusiast PC. The rackmount server will most likely use some kind of external cooling solution. But for a gaming / enthusiast PC I wouldn't use anything less than full custom water cooling. And if you've got the kind of cash needed to build a Threadripper PC I'm sure you've already got something like that in mind.
 

bit_user

Splendid
Herald

If you mean like an air-conditioned server room, then yeah. But that's really not relevant to the discussion, since the reason for the air conditioning is simply to deal with the density of machines. It has nothing to do with enabling the servers to use air cooling. Servers' air-cooling solutions are designed to work even at above-normal room temperatures.

If you're talking about something else, then I'm pretty sure most standard rack-mount servers (and all the ones I've seen) are air-cooled.

 

FormatC

Distinguished
Apr 4, 2011
981
0
18,990
1
The mounting is asymmetrical, I have the brackets and tried it ;)

 

rwinches

Distinguished
Jun 29, 2006
888
0
19,060
30
I was thinking the TR 16C/32T would use 4+4+4+4 instead of 8+8+0+0, and the 12C/24T would use 3+3+3+3 instead of 6+6+0+0, as that would spread the heat.
I suppose latency has to be considered, though.
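The layouts above can be sketched as a quick comparison. Everything here comes from the numbers in this thread, nothing AMD-confirmed; the "hotspot" proxy (cores on the busiest die) is just my own rough way of showing why spreading cores across four die would ease cooling:

```python
# Rumored core-per-die layouts discussed in this thread (not AMD-confirmed).
layouts = {
    "16C/32T": {"2-die": [8, 8, 0, 0], "4-die": [4, 4, 4, 4]},
    "12C/24T": {"2-die": [6, 6, 0, 0], "4-die": [3, 3, 3, 3]},
}

for part, options in layouts.items():
    for name, dies in options.items():
        total = sum(dies)
        active = sum(1 for cores in dies if cores > 0)
        # Crude hotspot proxy: the more cores packed onto one die,
        # the more concentrated the heat under the IHS.
        peak = max(dies)
        print(f"{part} {name}: {total} cores over {active} die, "
              f"busiest die has {peak} cores")
```

Same total core count either way; the 4-die layouts just halve the worst-case per-die load, which is the heat-spreading argument, at the cost of more cross-die traffic.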
 

bit_user

Splendid
Herald

In Intel's FUD presentation, Intel claims EPYC's access to "far" L3 cache weighs in at roughly the same latency as going to DDR4, which our sister site AnandTech confirmed in its testing. Intel also pointed out that EPYC's "near" L3 access is actually faster than Skylake's.
http://www.tomshardware.com/reviews/intel-amd-die-fabric-slides,5125-3.html

So, I'm sure latency is a big factor in this decision. As Intel was keen to point out, this has significant implications for virtualization.
 

PaulAlcorn

Senior Editor
Editor
Feb 24, 2015
680
27
5,010
0


Good point. Also, there are dual memory channels and PCIe connectivity tied to each Zeppelin die, so if they were to use all four die we would be looking at eight memory channels and 128 PCIe lanes. Of course, there is the option to use die that have bad memory channels or PCIe to construct the TR, but I think latency would override the benefits.
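A quick back-of-the-envelope check of that math. The two-channel and 32-lane per-die figures are my assumptions based on the desktop Ryzen (Zeppelin) die, not anything AMD has stated for TR:

```python
# Assumed per-Zeppelin-die resources (matching the desktop Ryzen die).
MEM_CHANNELS_PER_DIE = 2
PCIE_LANES_PER_DIE = 32

for active_die in (2, 4):
    channels = active_die * MEM_CHANNELS_PER_DIE
    lanes = active_die * PCIE_LANES_PER_DIE
    print(f"{active_die} active die -> {channels} memory channels, "
          f"{lanes} PCIe lanes")
```

With these assumptions, two active die give the quad-channel/64-lane figures floating around for TR, and all four active would indeed mean eight channels and 128 lanes, EPYC territory.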
 

Bill_133

Prominent
Aug 6, 2017
1
0
510
0
The monster package is intimidating, but pins-in-the-socket makes complete sense. The moving/fragile parts should be on the less expensive side! In the worst case, I'd always rather toss a $400-$550 motherboard than an $800 or $1,000 processor.
I took some quality time with a loose socket and a non-functional device. Knowing the forward and reverse motions never hurts. Bring a Torx screwdriver.

The elastomer gizmo that supports the package is new to me, but I've assembled two systems so far and both worked the first time. Air cooled. Take your time. There's no "snap", nothing irreversible. It all fits nicely. You should be able to back up and start over at any point.
 