
Mar 23

Home Lab Rebuild – Synology and vSphere 6

Last week I ordered a Synology DS1815+ for the home lab. All the pieces arrived this past Friday. A good bit of my weekend was spent rebuilding the home lab to include the new Synology and vSphere 6.

[Image: Synology DS1815+]

My lab has been running for a couple of years, and I have tested a bunch of parts and pieces of different technologies – VMware Horizon, Citrix XenDesktop, etc. Last week I upgraded my VCSA to vSphere 6 and had originally planned to just upgrade my hosts and drive on; instead I decided to do a complete rebuild.

Here is a list of the parts I ordered:

The Synology did not come with any documentation beyond the quick start guide, but it was very easy to set up. I installed the 4 x 2 TB Seagate NAS drives in slots 1-4. I had a couple of 1 TB Western Digital Red drives, which I installed in slots 5 and 6. I also had a couple of Crucial m4 64 GB SSDs (CT064M4SSD2) which were not being used, so I added these to the Synology in slots 7 and 8 to use for SSD read-write cache. I will probably upgrade these to a couple of larger SSDs once the budget allows, but for now they will work just fine. (Didn't take long to fill up all 8 slots, did it?)

After the drives were installed I connected the Synology to my network and powered it on. After 60 seconds or so the Synology pulled a DHCP address; I connected to http://DHCP_Address:5000/ and followed the setup wizard. I configured the 4 x 2 TB Seagate NAS drives in a volume using Synology Hybrid RAID (SHR) with 1-disk fault tolerance; this will be where I host the NFS datastores for my vSphere lab. The 2 x 1 TB Western Digital Reds are configured in a separate RAID 1 volume, which I am going to use for general storage – things like my pictures, music, and such. I created the SSD cache using the 2 Crucial 64 GB SSDs and attached the cache to the SHR volume. The Synology was already running DSM 5.1 but it was a couple of updates behind, so I updated it to the most recent update – DSM 5.1-5022 Update 4.

I created two NFS Datastores on the SHR volume. This post on Configuring Synology for NFS for ESXi host access helped with this.

Once the Synology was up and running I installed ESXi 6.0 on my two lab hosts. One of these, a Shuttle build, has been my primary lab host for some time. The other is an HP Microserver which I had been using as my NAS box. Both of these have a 4-port Intel 82571EB Gigabit Ethernet Controller. The ESXi 6.0 installation is straightforward; the only issue I had was that the on-board NIC in the Shuttle is not recognized by ESXi. I seem to remember having to install a VIB for it back when I deployed ESXi 5.5 on it. I don't really need the on-board NIC since I have the 4 ports on the Intel card, so I did not worry about getting it working. I may tackle it later.
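If you hit a similar unrecognized-NIC situation, a quick sketch of how I would check it from the ESXi Shell (standard ESXi commands, nothing lab-specific):

```shell
# List the NICs ESXi actually recognized; an on-board NIC with no
# driver loaded simply will not appear in this list.
esxcli network nic list

# Check the PCI device list to confirm the hardware is at least
# visible to the host, even if no driver has claimed it.
lspci | grep -i ethernet
```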

On the ESXi hosts I configured the VMkernels to use for NFS and mounted the NFS shares on the Synology. I created two separate VLANs and assigned IP addresses to the Synology LAN 3 and LAN 4 connections, one on each VLAN. Each ESXi host has a VMkernel on each VLAN. I mounted each of the two datastores on a separate VLAN to balance the NFS connectivity across two 1 GbE interfaces on both the Synology and the ESXi hosts (NFS 4.1 is not currently supported by the Synology).
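For reference, the VMkernel and datastore setup above can be sketched with esxcli. The VLAN IDs, IP addresses, and datastore names below are examples, not the actual values from my lab:

```shell
# Create a port group for the first storage VLAN on vSwitch0 and tag it.
esxcli network vswitch standard portgroup add --portgroup-name=NFS-VLAN10 --vswitch-name=vSwitch0
esxcli network vswitch standard portgroup set --portgroup-name=NFS-VLAN10 --vlan-id=10

# Add a VMkernel interface on that port group with a static IP.
esxcli network ip interface add --interface-name=vmk1 --portgroup-name=NFS-VLAN10
esxcli network ip interface ipv4 set --interface-name=vmk1 --ipv4=192.168.10.11 --netmask=255.255.255.0 --type=static

# Repeat the same steps for the second VLAN (vmk2 on VLAN 20), then
# mount one NFS datastore over each path to split the traffic.
esxcli storage nfs add --host=192.168.10.5 --share=/volume1/nfs-ds1 --volume-name=nfs-ds1
esxcli storage nfs add --host=192.168.20.5 --share=/volume1/nfs-ds2 --volume-name=nfs-ds2
```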
[Image: ESXi NFS network layout]

I downloaded and installed the NFS VAAI plugin from Synology. The instructions for installing the VIB can be found here.
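The VIB install itself only takes a couple of commands from the ESXi Shell. The file name below is an example; use whatever bundle name Synology's download gives you:

```shell
# Install the Synology NFS VAAI plugin VIB (copied to /tmp first)
# and reboot the host so the plugin loads.
esxcli software vib install -v /tmp/esx-nfsplugin.vib
reboot

# After the reboot, confirm the plugin is present and check that the
# NFS datastores report hardware acceleration.
esxcli software vib list | grep nfs
esxcli storage nfs list
```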
[Image: Synology NFS VAAI plugin]

I deployed a new Windows 2012 R2 VM to be used for my lab Domain Controller and setup my lab.local domain.

Then I deployed the VCSA using the web installer. For some reason the first deployment failed: the appliance was imported, but the customization (hostname, IP, etc.) was not applied.

[Image: VCSA installer]

I deleted the failed VCSA appliance and ran the deployment again. This time it worked without issue.

Once the VCSA was deployed I created a datacenter and added my ESXi hosts. The only thing I had saved from the original lab was my Windows Server 2012 R2 template. I deployed a couple of other infrastructure VMs – a SQL Server and a file server – and deployed the vMA 6. I also deployed a View Connection Server, a Security Server, and a Windows 7 desktop for remote access – TGFVK (Thank God for vExpert Keys).

So far the Synology is performing well and everything is running great. I have not really done any benchmarking, but I am regularly pushing over 1,000 write IOPS with average latency under 1 ms.

As part of the new lab build I updated my JumpSquares with my new lab jumps.
[Image: JumpSquares dashboard]

Now to start messing with vSphere 6. First stop, SSL certificate replacement…

About the author

vHersey

Hersey Cartwright is an IT professional with extensive experience designing, implementing, managing, and supporting technologies that improve business processes. Hersey is Solutions Architect for SimpliVity covering Virginia, Washington DC, and Maryland. He holds the VMware Certified Design Expert (VCDX-DV #128) certification. Hersey actively participates in the VMware community and was awarded the VMware vExpert title in 2016, 2015, 2014, 2013, and 2012. He enjoys working with, teaching, and writing about virtualization and other data center technologies. Follow Hersey on Twitter @herseyc

5 comments

  1. Patrik W.

    Hi!

    Great article! I’ve been using my Synology DS1813+ for NFS datastores on my ESXi host for some time now. Just as you describe in the article, I am also separating NFS traffic via two VLANs and two NICs on my Synology box and ESXi host. After reading your article I learned about the NFS VAAI plugin and decided to install the VIB. By the way, I am also using vSphere 6.0. For some strange reason it does not seem to work. The VIB is loaded:

    [root@esxi1:~] esxcli software vib list | grep nfs
    esx-nfsplugin 1.0-1 Synology VMwareAccepted 2015-06-23

    My NFS datastores still have “Hardware Acceleration” listed as “Unknown”.

    If I initiate a copy of a VM the following is logged:

    2015-06-23T10:44:31.208Z info vpxa[40081B70] [Originator@6876 sub=Libs opID=50c54393-70e6-42f6-845e-147341e44fca-360-ngc-c3-b1-1b] SynologyNasPlugin: could not create RPC mount client for the remote host 10.0.14.5 error:: RPC: Program not registered

    I have Googled this error and Synology forum without any good answers. Any idea what might be causing this? Any thoughts or suggestions would be greatly appreciated!

    1. David

      This article explains the problem and that you might need to upgrade your switch to version 6.0
      https://forum.synology.com/enu/viewtopic.php?t=102194

  2. Jerrell Roberson

    Hello Mr. Cartwright, thanks for all of the great information on your site. I am writing to ask a question about storage and virtualization. I read your article about your lab with the Synology appliance. Do you still feel strongly about your purchase? If so, I plan to purchase one as well. I plan to use it for home storage (NAS) and for my penetration testing lab. My budget is 3k. Will you please give me suggestions on the HDs, NICs (Ethernet or Gigabit), RAM (amount I should purchase), where to purchase, and possible software like ESXi? My knowledge is limited; however, I have read up on it. I would like to be able to take snapshots of images and revert them for my kids to use, have vulnerable applications running, etc.

    Truly appreciate your recommendations in advance.

    Thanks Jerrell

    1. vHersey

      Jerrell,

      Thanks for stopping by. Yes the Synology is a great storage array and I am using it for both my vSphere lab and media storage. I don’t use the snapshot capability but it is available. I have it populated with a 2 x 1 TB WD RED NAS drives where I store my media (videos, music, etc) and 4 x Seagate 1 TB NAS Drives which I use for the vSphere lab. I have 2 x 512 GB SSD which I use as a write cache for the vSphere lab.

      Hope this helps.

      Hersey

  3. Vanbrugh

    Hi vHersey,

    Thanks for your information. It is useful.

    I would like to build my home lab using a Synology DS916+ for VMware study, network emulation, and personal storage. The idea is 2 x 3 TB hard disks running RAID 1 and 2 x 128/256 GB SSDs running RAID 1 for read/write cache.

    A question: is it possible to divide the 3 TB hard disk space into 1 TB for a VMware datastore and 2 TB for personal storage (media files, photos, etc.), with both using the SSDs for read/write cache while operating as independent environments?

    The vSphere host idea is 3 x ASRock DeskMini 110 with an i7 6700T CPU (4C8T, TDP 35W), 32 GB RAM, and a 128/256 GB SSD, using a Cisco SG300-10 to interconnect the host PCs and the storage.

    Thanks & Regards,

    Vanbrugh.
