
Build Advice: Advice needed for a Pop!_OS server with VMs?

DREDKNOT_2077
Is this a decent server to start a build on?

[screenshot of the listing, showing the two Xeons]

They're each a little better than the Ryzen 5 PRO 2400GE, but the upside is there's two for the price.

I'd like to get this and add some multi-TB SSDs and a slim Blu-ray drive, and slowly buy and add RAM to it; I was thinking about 512GB total, and, if possible, build and add a custom water loop for the two Xeons.

And, if this doesn't sound too crazy, to put a Radeon Pro WX 3200 in and run all this on TrueNAS Core with some VMs: a Pop!_OS 22.04 LTS VM and two Android cloud-based OSes for two laptops, using something like anbox-cloud.io, which I've got no idea how to do, but I'd like to try.

Do you think this can be done, and if so, any pointers?

I'd like to run the OS with the VMs on a 2TB NVMe drive on a PCIe adapter.
 
The most important thing about a host for VMs is RAM, and the price you show includes no RAM.
 
This is to be a near-barebones buy: just the main unit with the CPUs, then I'll slowly buy and add RAM and the other gear and software I mentioned. My max budget for PC gear is $320 a month, which also has to cover shipping and tax; the goal is 512GB of RAM.
 
Well, what I stated about Pop!_OS, TrueNAS Core, and the VMs, plus the addition of the slim Blu-ray drive, is meant as a direct render-and-storage mass media server to copy and compress all of my Blu-ray and 4K disc collection.
Yeah, I get the OSs.

What will these VMs be doing, and how many?
512GB RAM is quite significant.

And a custom water loop, inside a 2U server box?
 
Have you ever been in the same room with a 2U server chassis? I would not want one in my house. They are too loud to be anywhere but a seldom-visited basement or garage.
 
Well, the max is 1.5TB, so half seems like the sweet spot between plenty of breathing room and going crazy, and it's something I can manage.

As for water cooling, I'd drill holes in the case and mount a radiator externally for each CPU, and maybe for the RAM too:

https://modmymods.com/aquacomputer-...-mm-radiator-wtih-d5-pump-aluminum-33015.html

With one combined reservoir, I figure that would keep it all cool and quiet.
 
Are you intending to populate the server with ECC RDIMMs or ECC LRDIMMs? I don't think you can mix the two memory types. 512GB RAM will be quite expensive, even if you buy second-hand. How big are your VMs?

You might find this DIY cooling mod for the DL380 Gen9 of interest:
https://www.reddit.com/r/homelab/comments/i0fn9y/making_quiet_hpe_dl380_gen9_for_home_renderfarm/

I haven't modded the cooling in either of my ML350p Gen8 chassis running TrueNAS Core. At startup the fans are very loud but settle down after about four minutes. I don't run them very often, so the noise is of little concern.

If you're going to run TrueNAS Core, you might need to buy a third-party HBA controller flashed to IT mode. The HP controller probably defaults to hardware RAID and may complain if it detects non-HP SAS (or SATA) drives. Ditto if you don't use genuine HP drive carriers.

What boot drive will you be using for TrueNAS Core? These days they recommend SSDs and not USB, but you may have "fun" finding a compatible interface if you want to retain all the front panel SAS bays for storage.
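If you do end up with an LSI-based card, you can confirm it's actually running IT-mode firmware with LSI's sas2flash utility. A minimal sketch, assuming a SAS2008-generation card that shows up as controller 0 (adjust the controller number to match your system):

    # list every LSI SAS2 controller the utility can see
    sas2flash -listall

    # show details for controller 0; the firmware should identify
    # itself as IT (initiator-target) rather than IR (integrated RAID)
    sas2flash -list -c 0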
 
I suppose ECC RDIMM, as I don't know about the other kind, and like I said, I'd buy the RAM in chunks. As for how big my VMs are: for each one I was planning to dedicate 50GB of storage and 64GB of RAM.
As for the boot drive, the main OS and VMs are going on a 2TB NVMe drive on a PCIe adapter, so between the RAM and the OSes I'd have room for 7 VMs aside from the host OS, with 1.6TB of space left on the NVMe for cache or scratch.
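(Checking the arithmetic: 7 VMs × 64GB = 448GB of RAM, which leaves about 64GB of the 512GB for TrueNAS itself and its ZFS cache, and 7 × 50GB = 350GB of VM storage, which leaves roughly 1.65TB of the 2TB NVMe free.)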

And as for a third-party HBA controller flashed to IT mode, where would I get one? I want my main mass storage on an SSD/HDD array doing RAID 10,

using Seagate BarraCuda 4TB 2.5" 5400RPM drives for mass storage: https://pcpartpicker.com/product/ky98TW/

and Crucial MX500 4TB drives: https://pcpartpicker.com/product/p7nypg/crucial-mx500-4-tb-25-solid-state-drive-ct4000mx500ssd1

as redundant backup.

I was also going to use two of these, one for each power supply:

the APC 1500VA UPS Battery Backup and Surge Protector (BX1500M), with its expansion cell, the APC External Battery Pack (BR24BPG),

which I figure would handle a blackout and a proper shutdown during a loaded run.
 
LRDIMMs allow you to fit up to 3TB of RAM in the DL380 Gen9 (24 x 128GB). RDIMMs allow "only" 768GB (24 x 32GB) due to the lower maximum capacity per RDIMM.

https://www.kingstonmemoryshop.co.uk/server/hp/proliant-dl-series/hp-proliant-dl380-gen9-g9-server

I see you're planning to run two operating systems, Pop!_OS (which I've never used) and TrueNAS Core (which I have used).

I'd be inclined to start off with a very basic (minimal) configuration, e.g. 1 CPU, 32GB RAM, until you've ironed out all the problems you're likely to encounter getting non-HP branded hardware to work in the server.

The HP BIOS in my ML350p Gen8 servers has a tendency to object when it discovers a third-party device at startup. This usually manifests itself by running the fans at full speed all the time, which can be almost deafening.

To prevent this, I sourced controller cards, network cards and USB cards branded with the HP logo. In the case of HBA controllers, the underlying chipset is often made by LSI, but the firmware on the card is HP. If the server BIOS detects a card with non-HP firmware, you may encounter problems.

I buy my HBA controllers on eBay. I normally use SAS (Serial Attached SCSI) hard disks in my TrueNAS Core arrays, running them in RAID-Z2 (equivalent to RAID 6). Since I'm running hard disks, I'm happy with SAS-1 (3Gb/s), because even with eight drives, I don't saturate the bus.
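(A quick sanity check on that, assuming each drive gets its own lane on an 8-port controller: SAS-1 runs at 3Gb/s per lane, roughly 300MB/s after 8b/10b encoding, while a typical SAS hard disk tops out around 150-200MB/s sequential, so no single drive comes close to saturating its link.)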

If you're thinking of installing SATA SSDs in RAID, you'll probably want to consider a SAS-2 (6Gb/s) or, better still, a SAS-3 (12Gb/s) HBA controller. SAS-3 controllers are expensive. You can connect SATA drives to a SAS controller, but you cannot connect SAS drives to a SATA controller. SAS controllers typically come with 4, 8 or 16 ports.

https://en.wikipedia.org/wiki/Serial_Attached_SCSI
https://www.servethehome.com/current-lsi-hba-controller-features-compared/
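Incidentally, the RAID 10 you're describing maps onto ZFS as a pool of striped mirror vdevs. A minimal sketch of the equivalent layout from the command line (da0 through da3 are placeholder FreeBSD device names; on TrueNAS Core you'd normally build this in the pool wizard in the web UI, which produces the same structure):

    # RAID 10 equivalent in ZFS: two mirrored pairs, striped together
    # (da0..da3 are example device names - substitute your own)
    zpool create tank mirror da0 da1 mirror da2 da3

    # verify the vdev layout and pool health
    zpool status tank

Usable capacity is half the raw total, as with any RAID 10, and you can grow the pool later by adding more mirror pairs.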

I have no experience of fitting a quad or octal M.2 NVMe drive PCIe card in an HP server. The same warning applies. Check compatibility before buying.

You may find your boot drive options limited in an HP server if you want to reserve all the ports on the HBA controller (and front panel drive bays) for RAID. I'm still booting TrueNAS Core from a fast USB memory stick plugged into an internal USB2 port on the ML350p motherboard. The preferred option is now SSD, but I'm not giving up the only available SATA port which is currently used by the DVD drive. As I said, interfaces on a stock HP server are limited.

Getting an HP server up and running with non-HP hardware and non-Microsoft server operating systems can be difficult, but other people have succeeded. If you don't have a current HP License for your server, you may find yourself locked out of certain functions, including iLO.
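On the UPS question: TrueNAS Core handles that scenario through its built-in UPS service, which is Network UPS Tools (NUT) under the hood, and APC units like the BX1500M are normally connected over USB. A minimal sketch of the kind of ups.conf entry the service generates (an assumption on my part; in practice you configure it from the Services > UPS page rather than editing files):

    # NUT ups.conf entry for a USB-connected APC Back-UPS
    # (the section name "apc1500" is arbitrary)
    [apc1500]
        driver = usbhid-ups
        port = auto
        desc = "APC BX1500M on PSU 1"

The service can then shut the server down cleanly once the UPS reports it's on battery or its charge falls below a threshold.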
 
Thanks for all that, it was very informative. A question in regards to "getting non-HP branded hardware to work": do any similar issues exist with Dell servers of similar spec?
 
I've no experience of modding Dell servers at home although I've seen dozens of them at work. It wouldn't surprise me if Dell place similar constraints on using third-party components in their servers. If they can charge 3x more for a "certified" drive with "special" firmware, that's what they'll do.

Professional rack-mount servers are noisy beasts, and if you've never visited a server room with hundreds of blade servers, you can't imagine how loud they get. Second-hand servers might be cheap, but they can be awkward to repurpose.

I suggest you try the Serve The Home Forum for more information.

https://forums.servethehome.com/index.php
 
The server vendors ARE very particular about allowed parts, because they have to "qualify" them to ensure there are no unintended long-term problems. When NVIDIA released the virtual desktop GPUs, where I worked had to wait for Dell to qualify them before we could purchase them. Dell was one of the first vendors to certify the NVIDIA GPUs for the R720 series servers.
Servers are very different from desktops. Have you thought about workstation-class hosts? They have similar CPU and memory capabilities to 2U servers, but in a more normal-sized case and with support for normal (riser-less) cards.
 
Thanks.
 
Yeah, I did. I was considering the custom Dell Precision T5810 workstation, also from https://pcserverandparts.com/build-your-own-custom-dell-precision-t5810-workstation/

and transferring the core guts to a new case, then slowly buying and adding RAM (I'd go for the full 512GB), plus a new PSU, maybe 750 to 850W, and

an ARCTIC Liquid Freezer II 280, if it will fit and can cool the Intel Xeon E5-2650 v4?

And if I'm reading the PDF on this unit's motherboard right, it takes either 2 or 4 SK Hynix Gold P31 1TB M.2-2280 PCIe 3.0 x4 drives, but I can't tell if the sockets are PCIe 3.0 or 4.0 (see the note below),

plus two Western Digital Red Plus 10TB drives https://pcpartpicker.com/product/C9...0-tb-35-7200rpm-internal-hard-drive-wd101efbx in RAID 10,

and a Gigabyte GAMING OC Radeon RX 6650 XT 8GB video card. How does that sound?
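On the 3.0-vs-4.0 question: the E5-2650 v4 is a Broadwell-EP part, and that platform tops out at PCIe 3.0, so the slots can't be 4.0. You can also verify a link's speed from Linux with lspci once a drive is installed. A minimal sketch, assuming the NVMe controller shows up at a bus address like 01:00.0 (substitute the address lspci reports for your drive):

    # find the NVMe controller's PCI address
    lspci | grep -i nvme

    # show link capability (LnkCap) and negotiated status (LnkSta);
    # Speed 8GT/s means PCIe 3.0, 16GT/s means PCIe 4.0
    # (01:00.0 is an example address)
    sudo lspci -vv -s 01:00.0 | grep -E 'LnkCap:|LnkSta:'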