Server Upgrade

It was time to upgrade my old Dell T20 (Xeon E3-1225v3) home server with a budget-friendly newer model. I started out looking at some Supermicro EPYC Rome models but quickly found out they were too costly, so I scaled down and settled on a Ryzen 3700X.

Since I have used VMware ESXi on all of my previous home servers, I needed a motherboard that was hardware compatible with ESXi 6.7. Some googling turned up the ASRock X470D4U, which is supposedly supported by VMware ESXi 6.7.

The X470D4U is an mATX-sized motherboard that supports 3rd generation Ryzen CPUs (like the 3700X) and boasts a lot of server-grade features:

  • AMD Ryzen 3rd generation support
  • Dual Channel DDR4 UDIMM (up to 128GB)
  • 6 SATA3 6.0Gb/s
  • RAID 0, 1, 10
  • M.2 slots
  • Dual 1Gb/s NICs
  • IPMI

Putting it all together, this is what I came up with:

  • ASRock Rack X470D4U
  • AMD Ryzen 3700x
  • Corsair 32GB DDR4-3200 Kit (x2)
  • Corsair CX550X PSU
  • Asguard 1TB NVMe
  • Be Quiet! Silent Base 601 case

Total cost of the components: approximately 1,200 euros.

Other components brought over from my previous server:

  • Western Digital RED 2TB (x3)
  • EVGA 1050Ti

Logging into IPMI for the first time was a treat. I had never owned an IPMI-based server before and always had to drag a spare monitor and keyboard/mouse around to do troubleshooting. With IPMI you can log in to the server remotely at the BIOS level: power off or reboot the server, get a remote console, mount media, all the good stuff.
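Most of this goes through the BMC's web interface, but the same controls are reachable from the command line with ipmitool. A minimal sketch, assuming a placeholder BMC address and credentials for this board:

    # Out-of-band control from another machine on the LAN (address and credentials are placeholders)
    BMC=192.168.1.20
    ipmitool -I lanplus -H "$BMC" -U admin -P 'secret' chassis power status   # is the box on?
    ipmitool -I lanplus -H "$BMC" -U admin -P 'secret' sdr                    # fan/temperature sensors
    ipmitool -I lanplus -H "$BMC" -U admin -P 'secret' chassis power cycle    # hard reboot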

Next up: installing VMware ESXi 6.7, which was straightforward except for two things.

  • The onboard RAID controller was not supported in ESXi 6.7
  • The NVMe drive was not supported either

The NVMe problem was easily fixed by replacing the nvme.v00 in /bootbank with the one from an older ESXi 6.5 Update 2 ISO. The RAID controller unfortunately had no fix; to use RAID in this setup you would need one of VMware's supported RAID controller cards (most likely an LSI variant).
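A rough sketch of that driver swap, done from the ESXi shell over SSH; the datastore path below is a placeholder for wherever the 6.5 U2 nvme.v00 was uploaded:

    # Back up the 6.7 NVMe module, drop in the 6.5 U2 one, then reboot the host
    cp /bootbank/nvme.v00 /bootbank/nvme.v00.bak
    cp /vmfs/volumes/datastore1/nvme.v00 /bootbank/nvme.v00   # placeholder upload path
    reboot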

I restored my backed-up VMs and got the whole thing running within half a day. Unfortunately no RAID 10, so I ended up mounting the HDDs individually (no RAID at all for the moment).

A list of the VMs the server currently runs:

  • Ubuntu 18.04 (main coding and source compiling VM)
  • FreeNAS (ZFS Media/file vault and Plex Server VM)
  • Kali and Parrot (pentesting VMs)
  • Debian 9 (DirectAdmin Webserver VM)
  • UNMS (Ubiquiti Network Management System VM)
  • Windows 10 (Nvidia hardware accelerated VM)
  • Windows Server 2016 (Domain Controller / Milestone Camera VM)
  • Several other VMs for testing purposes

All 16 threads running

Since I don't have a 10Gb switch and most of the communication with the server is done wirelessly, I didn't see the need to get the 10Gb ASRock model. Instead I opted for 'bonding' the 1Gb links together through a Link Aggregation Group (LAG). Both VMware and my Ubiquiti EdgeSwitch 10XP support this.

So in the vSwitch settings, under NIC Teaming, set Load balancing to "Route based on IP hash".

Do the same thing for the Management Network port group.
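The same policy can also be set from the ESXi shell; a sketch assuming the default vSwitch0 and the standard "Management Network" port group names:

    # IP-hash load balancing on the vSwitch (required for a static LAG)
    esxcli network vswitch standard policy failover set -v vSwitch0 -l iphash
    # Apply the same policy on the Management Network port group
    esxcli network vswitch standard portgroup policy failover set -p "Management Network" -l iphash

On the EdgeSwitch side the two ports go into a static LAG; a standard vSwitch doesn't speak LACP, so don't enable it on the switch.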

Getting the Nvidia card up and running is next. We need to set it to passthrough.

Go to Manage, Hardware, PCI Devices and toggle passthrough for the video card.
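If you want to double-check which device you're about to toggle, the card's PCI address can be looked up from the ESXi shell; a quick sketch (the grep pattern is just an assumption about how the 1050 Ti is named):

    # List PCI devices and pick out the Nvidia card to confirm its address
    lspci | grep -i nvidia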

Next, edit the VM that the video card will be added to. Click "Add another device" at the top and select "PCI device". Check that the correct PCI device was added. Also, under Memory, tick "Reserve all guest memory (All locked)".

Go to "VM Options" – "Advanced" – "Edit Configuration" and add the following key/value pair: hypervisor.cpuid.v0 = FALSE

Click OK to exit and save.
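That parameter ends up in the VM's .vmx file, which makes it easy to verify it stuck; a sketch from the host shell, with a placeholder datastore/VM path:

    # Confirm the setting was written to the VM's configuration file (path is a placeholder)
    grep -i "hypervisor.cpuid.v0" "/vmfs/volumes/datastore1/Windows 10/Windows 10.vmx"
    # Expected: hypervisor.cpuid.v0 = "FALSE"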

I proceeded to install Remmina (a remote desktop client for Linux) and connected over RDP with RemoteFX to the Windows 10 VM for 3D-accelerated graphics.
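On the Ubuntu 18.04 VM that's just a package install; a quick sketch (the RDP profile pointing at the Windows 10 VM is set up in the GUI afterwards):

    # Install Remmina plus its RDP plugin on Ubuntu 18.04
    sudo apt update
    sudo apt install -y remmina remmina-plugin-rdp
    remmina   # then create an RDP profile pointing at the Windows 10 VM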

Right, all done on the VMware side. Now to do some actual work on it.