Question: ASRock B660 Bare Bones: Adding 10GbE and 2x USB4: Progress & Questions

cmccane

The ASRock DeskMeet, an 8-liter ITX mini server, is my current project. The build so far is:

ASRock B660 Deskmeet case & motherboard
Intel Core i7-13700K CPU
128GB of DDR4-3600 RAM
Nvidia RTX 4060 Ti 16GB
2TB WD SN860 M.2 SSD
Generic AQC113C 10GbE NIC
ASMedia ASM4242 2x USB4 card (coming soon)

The 10GbE card was needed because the previous Mellanox 25GbE NIC was overheating. The 10GbE card sits in the same x4 slot, which is mounted on an M.2 adapter. It's running fine, but the temperature reported in Ubuntu 22.04 hovers around 85 to 92 degrees Celsius (a rough sketch of how I read the sensor follows my questions). I already upgraded the heatsink, but it appears I might need to add a fan. My questions for the forum are:
1. Is 92 degrees Celsius too hot for a 10GbE controller? The CPU and GPU never get that hot, even under load testing.
2. Is there any way to make the 4060 Ti run its fan to cool the case? The case positions the card next to a vent screen, and there's no other fan except the CPU fan.
3. Does anyone have a source for the new ASM4242 PCIe 4.0-based card?
4. Is anyone aware of any upcoming ITX motherboards with built-in 10GbE and the ASM4242?
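
For reference, here's roughly how I read the NIC temperature under Ubuntu: a minimal Python sketch that walks the hwmon sysfs tree. It assumes the atlantic driver exposes a sensor for the AQC113C; names and labels vary by kernel and driver, so treat it as a starting point.

    #!/usr/bin/env python3
    # Dump every hwmon temperature sensor; the AQC113C's entry shows up
    # here on my system (atlantic driver), but names vary by kernel.
    from pathlib import Path

    for hwmon in sorted(Path("/sys/class/hwmon").iterdir()):
        name = (hwmon / "name").read_text().strip()
        for temp in sorted(hwmon.glob("temp*_input")):
            label_path = temp.with_name(temp.name.replace("_input", "_label"))
            label = label_path.read_text().strip() if label_path.exists() else temp.name
            try:
                millideg = int(temp.read_text().strip())
            except OSError:
                continue  # some sensors refuse reads; skip them
            print(f"{name}/{label}: {millideg / 1000:.1f} C")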
Thanks in advance.
 
If your NIC has a heatsink but no method of active cooling, you could add a fan to the heatsink. I would get a PWM fan hub and a PWM fan... but you're at the point where you're going beyond the spec of a small form factor system.

To me it looks like you need a mATX build.
 
Mellanox and other NICs designed for server installation rely on the chassis's high-pressure (noisy) forced-air cooling to maintain normal operating temperatures. When transferred to a quieter desktop system, the small passive heatsinks do not receive enough airflow to cool the chip.

I had an LSI SAS HBA die of heatstroke after several years in a desktop, so I now fit 30mm or 40mm cooling fans to the heatsinks and run them at full speed. This reduces the heatsink temperature from too hot to touch to just warm. Fans can be difficult to install if the adjacent PCIe slot is filled with another card.

I don't have any heat problems with my collection of Asus XG-C100C 10GbE NICs, but they do have fairly large passive heatsinks. My Mellanox X2 SFP+ 10GbE cards are perfectly happy in my HP servers. All they need is lots of air.

It might be worth checking the ServeTheHome forums to see if people have found alternative cooling solutions.
 
No doubt it's a stretch to get an SFF PC with 10GbE and two 40Gb TB3/USB4 ports. It's my first SFF build; I happened to see this ASRock ITX mini PC with 4 DIMM slots, and it's very small. All OEMs except Apple seem to hate doing much with I/O. My Mac Mini M2 Pro has built-in 10GbE (AQC113) plus four TB4 ports.

Now running the B660 box on 10GbE, with the AQC113C connected to the M.2 Wi-Fi slot via a riser cable. The AQC113C is showing incredible flexibility by running on PCIe 2.0 x2. The card can run full 10GbE speed on any combination of generation and lane count that adds up to 16 GT/s raw, e.g. PCIe 4.0 x1 or 3.0 x2. I was hoping the Wi-Fi M.2 slot would be PCIe 4.0, but it turned out to be 2.0, so x2 tops out at 10 GT/s raw, roughly 8 Gb/s after 8b/10b encoding. Sending files over the link using Samba shows 6 to 7 gigabits per second. This configuration would still run a 5GbE PCIe card, e.g. the AQN108, at full speed.
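
The back-of-the-envelope math, in case it helps anyone; my own rough numbers, counting line encoding only and ignoring packet framing overhead:

    # Rough PCIe bandwidth math: Gen1/2 use 8b/10b encoding,
    # Gen3/4 use 128b/130b.
    RATES = {1: 2.5, 2: 5.0, 3: 8.0, 4: 16.0}  # GT/s per lane

    def effective_gbps(gen: int, lanes: int) -> float:
        encoding = 8 / 10 if gen <= 2 else 128 / 130
        return RATES[gen] * lanes * encoding   # usable Gb/s

    for gen, lanes in [(2, 2), (3, 2), (4, 1)]:
        print(f"PCIe {gen}.0 x{lanes}: ~{effective_gbps(gen, lanes):.1f} Gb/s")
    # PCIe 2.0 x2 -> ~8.0 Gb/s, short of line-rate 10GbE, which fits
    # the 6-7 Gb/s I see over Samba; 3.0 x2 and 4.0 x1 -> ~15.8 Gb/s.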

It's disappointing that there are 8 PCIe 4.0 lanes going to the chipset but nothing really using them. I'm also wasting another 8 PCIe 4.0 lanes on the x16 GPU slot, since the 4060 Ti only uses 8 lanes.
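
To see what a slot actually negotiated, this is roughly the check I run against sysfs. The device address below is a placeholder; substitute your card's address from lspci.

    # Read the negotiated vs. maximum PCIe link speed/width from sysfs.
    # 0000:01:00.0 is a placeholder address; find yours with `lspci`.
    from pathlib import Path

    dev = Path("/sys/bus/pci/devices/0000:01:00.0")
    for attr in ("current_link_speed", "current_link_width",
                 "max_link_speed", "max_link_width"):
        f = dev / attr
        if f.exists():
            print(f"{attr}: {f.read_text().strip()}")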
 
I watch STH on YouTube and visit their website. Excellent work from those guys. I will get back to 25GbE at some point, but you are right: the tiny heatsinks on the smallest ConnectX-5/6 cards are meant for fast-airflow servers only. Most server OEMs already offer dual 25GbE LOMs (LAN on Motherboard). I'm hoping that 10GbE Macs will drive a creator market for 10GbE everywhere. Copying files to a server at a minimum of 1.2GB/s should be a requirement for any media company. More and more home users are realizing how slow 1G Ethernet is for copying files, and ISPs now offer Internet at over 1Gb/s as well. Copying large media files at 3GB/s is a good reason to bump up to 25GbE; that's close to the speed of a local SSD.
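
The wire-speed math behind those figures, for anyone checking; line rate divided by 8, before TCP/SMB overhead:

    # Best-case file-copy throughput from Ethernet line rate.
    for gbps in (1, 10, 25):
        print(f"{gbps}GbE: ~{gbps / 8:.2f} GB/s")
    # 10GbE -> ~1.25 GB/s and 25GbE -> ~3.12 GB/s, i.e. the 1.2 GB/s
    # and 3 GB/s figures above minus a little protocol overhead.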
 
The VRM on this motherboard is too weak for anything above non-K i5 processors.
I'm using the ASRock B660 ITX motherboard with an Intel i7-13700K. I have maxed it out using the Linux stress utility, which loads all cores to 100%. The problem is heat, which I watch closely. In order to get stress to run for 24 hours without overheating, I had to cut a round hole in the bottom of the case and mount a 60mm fan. Now it runs all day without ever exceeding 75 degrees Celsius. Before adding the fan, the temperature kept climbing and reached 90 degrees after an hour, so I shut it down to avoid damage.
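
For what it's worth, I babysat the long runs with a small watchdog along these lines. A rough sketch: the coretemp lookup and the 90-degree cutoff are just what worked on my box, and the thread count matches the 13700K.

    #!/usr/bin/env python3
    # Run `stress` and kill it if the CPU package temperature crosses
    # a cutoff. Paths and limits reflect my machine; adjust for yours.
    import subprocess, time
    from pathlib import Path

    LIMIT_C = 90.0

    def package_temp_c() -> float:
        # Assumes the coretemp driver exposes the package sensor as
        # temp1_input; adjust if your hwmon layout differs.
        for hwmon in Path("/sys/class/hwmon").iterdir():
            if (hwmon / "name").read_text().strip() == "coretemp":
                return int((hwmon / "temp1_input").read_text()) / 1000
        raise RuntimeError("coretemp hwmon node not found")

    load = subprocess.Popen(["stress", "--cpu", "24"])  # 13700K = 24 threads
    try:
        while load.poll() is None:
            t = package_temp_c()
            print(f"package: {t:.1f} C")
            if t >= LIMIT_C:
                print("too hot, stopping stress")
                break
            time.sleep(5)
    finally:
        if load.poll() is None:
            load.terminate()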