Question Server Build Advice/Recommendations

Apr 23, 2024
I'm fairly new to all of this. We run a business that pays monthly for 60 VMs; the general specs, as far as I can see in the System Information tab, are listed below. I'd like recommendations on hardware/equipment to build this at the office instead of paying monthly for it, since I know it will be cheaper. From what I've looked into, each CPU can run 3 VMs. My biggest issue in pricing this out is figuring out how to put it all into server(s), whether there are pre-built boxes I can just drop the mobos, CPUs, RAM, etc. into, and whether the GPUs need a separate box. I'm also unsure whether the GPU is even dedicated; I assume 2-3 VMs can run off one, but I haven't gotten a straight answer for this specific GPU. I appreciate any advice.


I'll post the general specs of my current VM (hosted in the datacenter, running Windows 10) below:

CPU: Intel(R) Xeon(R) CPU E5-2667 v3 @ 3.20GHz, 3200 MHz, 4 cores, 8 logical processors
BIOS: Blade 1.1.3
12GB physical memory
16.4GB virtual memory
GPU: Quadro P5000

I know those CPUs are extremely cheap, but the GPUs are not. I could use those, or GTX 1080s instead, which are a lot cheaper; it just depends on how many each board or server can fit.

I am just looking for the best advice on the most cost-effective way to replicate what I am currently paying for:

44-60 Machines
8-16GB Physical Memory per Machine
8GB Video Memory per Machine (has to be GTX 1080 equivalent)

I can buy most of the parts used and assemble them myself.

If someone can advise, we'd all greatly appreciate it, especially on buildouts, parts, etc. We're not too familiar with servers and only started researching them recently.

Thank you all, talk soon!
 
Also, I will note this is the CPU

Intel Xeon E5-2678 V3 2.5GHz 12-Core 24T PROCESSOR Socket 2011-3 CPU 120W

It seems like each VM is running off of 4 cores and threads, which is why I assumed 3 VMs per CPU. So we'd need about 15 CPUs in total for 45 VMs, or 20 for 60.
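The core-count arithmetic above can be sketched out as a quick check. This assumes 4 dedicated physical cores per VM on a 12-core E5-2678 v3, as described in the post; real hypervisors usually allow oversubscription, so treat these as conservative figures.

```python
import math

# Assumed sizing from the post: 12 physical cores per CPU,
# 4 dedicated cores per VM -> 3 VMs per CPU.
CORES_PER_CPU = 12
CORES_PER_VM = 4
vms_per_cpu = CORES_PER_CPU // CORES_PER_VM  # 3

for total_vms in (45, 60):
    cpus_needed = math.ceil(total_vms / vms_per_cpu)
    print(f"{total_vms} VMs -> {cpus_needed} CPUs")
# 45 VMs -> 15 CPUs
# 60 VMs -> 20 CPUs
```

This matches the 15-for-45 / 20-for-60 estimate in the post.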
 
In addition to whatever capital expense you incur to acquire the hardware, you also have to account for the increased electricity and the cooling those servers require. Capital expenses have different tax implications than "renting" servers, which is an operational expense. I only mention this because the TOTAL cost of ownership is often underestimated.
 
You also forgot to mention the cost of a person to maintain these VMs. If you bring these VMs in house, you need an in-house server person to maintain them, which increases the cost significantly. 60 VMs is not a lot compared to the 1-2 people you'd have to hire to maintain these servers.
 
You also forgot to mention the cost of a person to maintain these VMs. If you bring these VMs in house, you need an in-house server person to maintain them, which increases the cost significantly. 60 VMs is not a lot compared to the 1-2 people you'd have to hire to maintain these servers.
And the UPS infrastructure, the network, the firewall(s), etc. There is a LOT of "behind the scenes" you get with a hosted service.
 
So you would need about 10 of those, for around $8k.

You could also go for a more modern, high-core-count server and take advantage of faster IPC to give each VM only, say, 2 cores.

There's some decent pricing on 56-core (2x 28-core) EPYC chips from the last generation and then some, or even newer 48-core parts.
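To make the "2 cores per VM" suggestion concrete, here's a rough packing sketch. It assumes a dual-socket box with 2x 28 physical cores and no oversubscription, both of which are illustrative assumptions rather than a quoted configuration.

```python
import math

# Assumed: dual-socket server with 2 x 28-core EPYC CPUs,
# 2 dedicated cores per VM, no oversubscription.
CORES_PER_SERVER = 2 * 28   # 56 physical cores
CORES_PER_VM = 2
vms_per_server = CORES_PER_SERVER // CORES_PER_VM  # 28

servers_needed = math.ceil(60 / vms_per_server)
print(servers_needed)  # 3
```

In other words, three such servers could host all 60 VMs on the CPU side; GPU capacity would still be the limiting factor.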

https://www.cpubenchmark.net/compar...E5-2678-v3-vs-AMD-EPYC-9454P-vs-AMD-EPYC-7443

Perfect, thank you, I appreciate that. I need at least 4 cores minimum and a GTX 1080 (or equivalent) per machine. Any advice on that?

In addition to whatever capital expense you incur to acquire the hardware, you also have to account for the increased electricity and the cooling those servers require. Capital expenses have different tax implications than "renting" servers, which is an operational expense. I only mention this because the TOTAL cost of ownership is often underestimated.

Yeah, that's fine, we've looked at that already. We'd just rather own than rent instead of paying $1,600 a month to rent the machines. Maybe not easier, but we'd still own them.

You also forgot to mention the cost of a person to maintain these VMs. If you bring these VMs in house, you need an in-house server person to maintain them, which increases the cost significantly. 60 VMs is not a lot compared to the 1-2 people you'd have to hire to maintain these servers.
Yeah, that part will be taken care of as well.
 
Each VM will need about 300GB of space and a 960 or 1080 GPU, or its equivalent.


So we'll need about 14.4TB of HDD space, assuming 48 machines.
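That storage figure checks out directly, assuming 300 GB per VM in decimal units and no overhead for the hypervisor, snapshots, or redundancy (which would all add to the real total):

```python
# Assumed: 300 GB per VM, 48 VMs, decimal GB -> TB conversion.
GB_PER_VM = 300
VM_COUNT = 48

total_tb = GB_PER_VM * VM_COUNT / 1000
print(total_tb)  # 14.4
```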


This may also help: we have a division that does gaming directly, so those 48 machines will be running games. That's what we need the 960/1080-equivalent GPUs for.
 
Perfect, thank you, I appreciate that. I need at least 4 cores minimum and a GTX 1080 (or equivalent) per machine. Any advice on that?
I was talking about off-the-shelf new rack servers. Dell, HP, etc. carry these chips, and they're configurable for varying amounts of memory and so on. They include warranties, which is quite nice to have versus used older hardware.

Getting GTX 1080-like performance per VM is a bit tricky, though you did mention maybe running 3 VMs off one card. You have to use workarounds to get GPU sharing working on GeForce cards, so sharing GPU resources would mean something like a high-end Quadro. If they really need that much GPU power per workstation, I would go down that route. Otherwise, it's far easier to get a pile of RTX 3060s and some cheap workstations than to centralize on huge GPUs in servers.

60 of these... you could always put them in a server rack.
https://www.newegg.com/msi-pro-dp180-13tc-054us-business-desktops-workstations/p/N82E16883151316?Item=N82E16883151316

We used to do that for our offshore team so they had quicker access to our local file servers. Those were much cheaper office PCs, though, with the exception of a few ludicrously overpriced workstations for doing big jobs that needed more memory.

What I can see from Dell as an option would be their dual Xeon Gold or Platinum 28- or 32-core chips in an R750xa rack server with dual A16 cards (8 GPUs total). Each GPU has only 1,280 CUDA cores (GTX 1650-like), and they're Ampere rather than Pascal like the GTX 1080. It's basically built for what you have in mind. Barebones, that would be $30k+, so for about $60k+ you could have a pair of them, though you'd still have to configure for your network and storage needs and such. It's probably best to get aftermarket memory, since Dell overcharges for it. That might be suitable for about 16 workstations in terms of GPU, with CPU to spare for running the hypervisor and maybe even a few other things. You don't really save much by dropping down to the smaller CPUs.