PKS NSX-T Home Lab – Part 6: Config Nested ESXi Env

Intro

In the previous post in this series we deployed our nested ESXi hosts. In this post we are going to configure them and prepare the nested environment for the NSX-T install, which includes deploying a vCenter and configuring networking and storage. As I have said previously, this is just one way to do it; there are many permutations. The goal here is to get up and running as quickly as possible, and nothing stops you from implementing it differently now or later.

Complete list of blog posts in this series…

vCenter

We have deployed our nested ESXi hosts; now we need a vCenter to manage them. As always, there are options: you can deploy the vCenter on the nested ESXi hosts, deploy it on the physical ESXi hosts, or use your existing vCenter (if one already exists). I personally deploy a new vCenter on the physical ESXi hosts beside the nested ESXi host VMs, so it is on the same release as the nested ESXi hosts for that particular environment.

Download the VCSA ISO and extract or mount it on your laptop. Use either the CLI or UI tool in the ISO to deploy the VCSA. For step-by-step instructions see here. When prompted, deploy a VCSA for a “tiny environment”. This sizes the VCSA with 2 vCPUs and 10GB of memory. As mentioned previously, we are tight on memory, so this can be manually reduced if you wish; I currently have mine at 5GB without issues. Also use the nested-mgmt port group, which is on the VLAN with a CIDR of 10.0.10.0/24.
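If you'd rather script the memory trim than click through the UI, the minimal Python/pyVmomi sketch below shows the idea. It connects to whatever manages the VCSA VM (the physical ESXi host in my case); the host address, credentials and VM name are placeholders, so adjust them to your own lab.

```python
import ssl
from pyVim.connect import SmartConnect
from pyVim.task import WaitForTask
from pyVmomi import vim

# Connect to the physical ESXi host (or vCenter) that runs the VCSA VM.
# Host, user, password and VM name below are placeholders for your own lab values.
si = SmartConnect(host="physical-esxi.lab.local", user="root", pwd="VMware1!",
                  sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

def find_obj(vimtype, name):
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    return next(o for o in view.view if o.name == name)

vcsa = find_obj(vim.VirtualMachine, "nested-vcsa")   # hypothetical VM name

# Memory cannot be reduced while the VM is running, so power it off first.
# (This is a hard power-off; a guest shutdown via VMware Tools is cleaner.)
if vcsa.runtime.powerState == vim.VirtualMachine.PowerState.poweredOn:
    WaitForTask(vcsa.PowerOffVM_Task())

# Drop the "tiny" deployment from 10GB to 5GB of memory, then power it back on.
WaitForTask(vcsa.ReconfigVM_Task(spec=vim.vm.ConfigSpec(memoryMB=5 * 1024)))
WaitForTask(vcsa.PowerOnVM_Task())
```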

After the VCSA is deployed, log into vCenter with the credentials created during the install process. Add the necessary vSphere and vCenter licenses, or continue with the 60-day evaluation licenses. Create two clusters, compute and management, and add the nested ESXi hosts to vCenter as seen below.
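For those who like to automate, here is a rough pyVmomi sketch of the same steps against the new nested vCenter; the datacenter/cluster names, host IPs and credentials are only examples.

```python
import ssl
from pyVim.connect import SmartConnect
from pyVim.task import WaitForTask
from pyVmomi import vim

# Connection details are placeholders for your own nested vCenter.
si = SmartConnect(host="vcsa.nested.lab", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

# Create a datacenter and the two clusters.
dc = content.rootFolder.CreateDatacenter(name="nested-dc")
mgmt = dc.hostFolder.CreateClusterEx(name="management", spec=vim.cluster.ConfigSpecEx())
comp = dc.hostFolder.CreateClusterEx(name="compute", spec=vim.cluster.ConfigSpecEx())

# Add the nested ESXi hosts (hypothetical addresses) to their clusters.
hosts = {mgmt: ["10.0.10.21"], comp: ["10.0.10.22", "10.0.10.23"]}
for cluster, addresses in hosts.items():
    for address in addresses:
        spec = vim.host.ConnectSpec(hostName=address, userName="root",
                                    password="VMware1!", force=True)
        # Without sslThumbprint the task may fail with an SSL verify fault that
        # contains the host's thumbprint; re-submit with that thumbprint if so.
        WaitForTask(cluster.AddHost_Task(spec=spec, asConnected=True))
```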

Networking

To recap, our nested ESXi host VMs have four NICs, each connected to the nested-trunk port group on the physical ESXi hosts.

Distributed Switch

Now that a vCenter is managing our nested ESXi hosts, we can create a distributed switch. Create it with two uplinks, an MTU of at least 1600, and a port group for each VLAN network we created on the pfSense router, with the corresponding VLAN ID set on each port group.
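Here is a hedged pyVmomi sketch of that vDS build; the switch name, port group names and VLAN IDs are illustrative, so swap in the VLANs you actually created on pfSense.

```python
import ssl
from pyVim.connect import SmartConnect
from pyVim.task import WaitForTask
from pyVmomi import vim

si = SmartConnect(host="vcsa.nested.lab", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
dc = content.rootFolder.childEntity[0]   # the datacenter created earlier

# Distributed switch with two uplinks and an MTU of 1600 for NSX-T overlay traffic.
dvs_spec = vim.DistributedVirtualSwitch.CreateSpec(
    configSpec=vim.dvs.VmwareDistributedVirtualSwitch.ConfigSpec(
        name="nested-vds", maxMtu=1600,
        uplinkPortPolicy=vim.DistributedVirtualSwitch.NameArrayUplinkPortPolicy(
            uplinkPortName=["uplink1", "uplink2"])))
WaitForTask(dc.networkFolder.CreateDVS_Task(spec=dvs_spec))

view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.DistributedVirtualSwitch], True)
dvs = next(s for s in view.view if s.name == "nested-vds")

# One port group per VLAN defined on the pfSense router (names/IDs are examples).
for name, vlan_id in [("management-10", 10), ("vmotion-20", 20), ("storage-30", 30)]:
    pg_spec = vim.dvs.DistributedVirtualPortgroup.ConfigSpec(
        name=name, type="earlyBinding", numPorts=16,
        defaultPortConfig=vim.dvs.VmwareDistributedVirtualSwitch.VmwarePortConfigPolicy(
            vlan=vim.dvs.VmwareDistributedVirtualSwitch.VlanIdSpec(
                vlanId=vlan_id, inherited=False)))
    WaitForTask(dvs.AddDVPortgroup_Task(spec=[pg_spec]))
```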

By default, a freshly installed ESXi host is on a standard vSwitch. We will now migrate it to the distributed switch (vDS) we just created. Right click the distributed switch > Add and Manage Hosts > Add Hosts > New Hosts. Here we assign uplinks to vmnic0 and vmnic1 on each nested ESXi host. We leave vmnic2 and vmnic3 unassigned for now, as we will use them for NSX-T later.

Note: the screenshot below is incorrect; it shows all vmnics being assigned an uplink, whereas only the first two, vmnic0 and vmnic1, should be.

Assign vmk0, our management VMkernel adapter, from the standard vSwitch to the management port group on our distributed switch. Click next, next, finish.
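If you want to script the uplink assignment part of this step, the sketch below adds a host to the vDS with vmnic0 and vmnic1 as its uplinks (repeat per host). I'd still migrate vmk0 itself through the wizard as above, since getting that wrong programmatically can cut management connectivity. Switch and host names are hypothetical.

```python
import ssl
from pyVim.connect import SmartConnect
from pyVim.task import WaitForTask
from pyVmomi import vim

si = SmartConnect(host="vcsa.nested.lab", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

def find_obj(vimtype, name):
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    return next(o for o in view.view if o.name == name)

dvs = find_obj(vim.DistributedVirtualSwitch, "nested-vds")
host = find_obj(vim.HostSystem, "nested-esxi-01.lab.local")   # repeat per host

# Add the host to the vDS with vmnic0 and vmnic1 as its uplinks;
# vmnic2 and vmnic3 stay free for NSX-T later.
member = vim.dvs.HostMember.ConfigSpec(
    operation="add", host=host,
    backing=vim.dvs.HostMember.PnicBacking(pnicSpec=[
        vim.dvs.HostMember.PnicSpec(pnicDevice="vmnic0"),
        vim.dvs.HostMember.PnicSpec(pnicDevice="vmnic1")]))
spec = vim.dvs.VmwareDistributedVirtualSwitch.ConfigSpec(
    configVersion=dvs.config.configVersion, host=[member])
WaitForTask(dvs.ReconfigureDvs_Task(spec=spec))
```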

vMotion VMkernel Adapter

Next, add a vMotion VMkernel network adapter for each nested ESXi host. Go to the Hosts and Clusters view, click one of the nested ESXi hosts > Configure tab > VMkernel adapters > Add Networking.

Select VMkernel Network Adapter, select our vMotion port group, vmotion-20, select the vMotion service, use static IPv4 settings, and fill out the fields.

Repeat for each nested ESXi host.
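The same can be scripted; below is a minimal pyVmomi sketch that loops over the hosts, creates the vmk on the vmotion-20 port group and tags it for vMotion. The host names and IP addresses are examples only.

```python
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

si = SmartConnect(host="vcsa.nested.lab", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

def find_obj(vimtype, name):
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    return next(o for o in view.view if o.name == name)

dvs = find_obj(vim.DistributedVirtualSwitch, "nested-vds")
pg = find_obj(vim.dvs.DistributedVirtualPortgroup, "vmotion-20")

# Hypothetical host names and vMotion IPs on the 10.0.20.0/24 network.
for host_name, ip in [("nested-esxi-01.lab.local", "10.0.20.21"),
                      ("nested-esxi-02.lab.local", "10.0.20.22")]:
    host = find_obj(vim.HostSystem, host_name)
    nic_spec = vim.host.VirtualNic.Specification(
        ip=vim.host.IpConfig(dhcp=False, ipAddress=ip, subnetMask="255.255.255.0"),
        distributedVirtualPort=vim.dvs.PortConnection(
            switchUuid=dvs.uuid, portgroupKey=pg.key))
    # An empty portgroup name is used because the vmk attaches to a vDS port group.
    vmk = host.configManager.networkSystem.AddVirtualNic(portgroup="", nic=nic_spec)
    # Tag the new VMkernel adapter for vMotion traffic.
    host.configManager.virtualNicManager.SelectVnicForNicType(nicType="vmotion",
                                                              device=vmk)
```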

Storage

As always, there are options. You can configure local storage, vSAN, remote storage, or even a mix.

Local Storage

In our environment, where we have a single host in the management cluster, local storage is a viable option there. In the compute cluster we have two hosts, so local storage is not viable as we need shared storage for DRS, etc.

To use local storage, add a disk to the nested ESXi VM.
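This happens on the physical side, where the nested ESXi VMs live. A hedged pyVmomi sketch of adding a 100GB thin disk to one of those VMs is below; the VM name and disk size are placeholders.

```python
import ssl
from pyVim.connect import SmartConnect
from pyVim.task import WaitForTask
from pyVmomi import vim

# Connect to the physical ESXi host (or vCenter) that runs the nested ESXi VMs.
si = SmartConnect(host="physical-esxi.lab.local", user="root", pwd="VMware1!",
                  sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "nested-esxi-01")   # hypothetical VM name

# Find the VM's SCSI controller and a free unit number (unit 7 is reserved).
controller = next(d for d in vm.config.hardware.device
                  if isinstance(d, vim.vm.device.VirtualSCSIController))
used = {d.unitNumber for d in vm.config.hardware.device
        if getattr(d, "controllerKey", None) == controller.key}
unit = next(u for u in range(16) if u != 7 and u not in used)

disk = vim.vm.device.VirtualDisk(
    controllerKey=controller.key, unitNumber=unit,
    capacityInKB=100 * 1024 * 1024,   # 100GB; size to suit your lab
    backing=vim.vm.device.VirtualDisk.FlatVer2BackingInfo(
        diskMode="persistent", thinProvisioned=True))

change = vim.vm.device.VirtualDeviceSpec(
    operation=vim.vm.device.VirtualDeviceSpec.Operation.add,
    fileOperation=vim.vm.device.VirtualDeviceSpec.FileOperation.create,
    device=disk)
WaitForTask(vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[change])))
```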

Then, in the nested environment, scan for new storage: go to the host > Configure tab > Storage Adapters > Rescan Storage.

After which the newly added disk will now be found.
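Or, scripted against the nested vCenter, something like this rescans every host and prints the disks available for VMFS (a minimal sketch; credentials are placeholders):

```python
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

si = SmartConnect(host="vcsa.nested.lab", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
for host in view.view:
    # Rescan HBAs and VMFS so the newly added virtual disk shows up.
    host.configManager.storageSystem.RescanAllHba()
    host.configManager.storageSystem.RescanVmfs()
    # List the disks that are eligible for a new VMFS datastore.
    for disk in host.configManager.datastoreSystem.QueryAvailableDisksForVmfs():
        size_gb = disk.capacity.block * disk.capacity.blockSize // (1024 ** 3)
        print(host.name, disk.canonicalName, f"{size_gb}GB")
```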

Next we create a VMFS datastore on it. Right click the host > Storage > New Datastore.

Step through the wizard to create a VMFS datastore. Once created, it can be seen in several places, including the Storage view as seen below.
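For reference, the same datastore creation can be scripted; the sketch below asks the host for a valid create spec for the new disk and then creates the VMFS datastore on it. The host and datastore names are hypothetical.

```python
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

si = SmartConnect(host="vcsa.nested.lab", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
host = next(h for h in view.view if h.name == "nested-esxi-mgmt-01.lab.local")  # hypothetical

ds_system = host.configManager.datastoreSystem
# Pick the disk added earlier (index 0 for brevity; match on canonicalName in practice).
disk = ds_system.QueryAvailableDisksForVmfs()[0]

# Ask the host for a valid create spec for that disk, name it, and create the datastore.
option = ds_system.QueryVmfsDatastoreCreateOptions(devicePath=disk.devicePath)[0]
option.spec.vmfs.volumeName = "mgmt-local-01"      # hypothetical datastore name
datastore = ds_system.CreateVmfsDatastore(spec=option.spec)
print("Created datastore:", datastore.name)
```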

vSAN

vSAN is a viable option for the compute cluster, but there are some items to note. vSAN ideally needs four or more hosts, but it can operate with three hosts, or with two hosts and an external witness. The witness can be an ESXi host or a vSAN witness appliance. In the environment we have built out so far, we have two hosts in our compute cluster and a single host in our management cluster, so two hosts with a witness is a viable vSAN option. If another host is added to the compute cluster then the external witness configuration is not required. Note, enabling vSAN does consume additional memory, which is limited in our NUC-based home lab; I have found enabling vSAN typically consumes approximately 5GB per host in my environment.

To configure vSAN we need to add a new VMkernel adapter with the vSAN service. Follow the same steps as previously, except select the vSAN service and our storage network on VLAN 30. Perform this for each nested ESXi host.
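Assuming the storage vmk has been created the same way as the vMotion adapter earlier, tagging it for vSAN traffic can also be scripted; the host and vmk device names below are examples.

```python
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

si = SmartConnect(host="vcsa.nested.lab", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
# Assumes each compute host already has a vmk on the VLAN 30 storage port group;
# "vmk2" is a hypothetical device name.
for host_name, vmk in [("nested-esxi-01.lab.local", "vmk2"),
                       ("nested-esxi-02.lab.local", "vmk2")]:
    host = next(h for h in view.view if h.name == host_name)
    host.configManager.virtualNicManager.SelectVnicForNicType(nicType="vsan", device=vmk)
```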

Next we need to add disks to the nested ESXi hosts that will be used/claimed by vSAN. We need to add two disks to each compute host, one for cache and one for capacity. Typically the cache disk is sized at around 10% of the capacity disk. Add the disks using the same process as above, but don't create a VMFS datastore on them.

To enable vSAN, click the cluster in the Hosts and Clusters view > Configure tab > vSAN > Services > CONFIGURE.

In the wizard, select Two host vSAN cluster if you are going to proceed with two hosts in compute and the management host as a witness. Otherwise, if you have added a third compute host, select Single site cluster. Below is an example of how the disks are claimed: the 10GB disks for the cache tier and the 100GB disks for the capacity tier. Note, this is just an example; you would size the disks larger to support PKS.

Select a witness host, in our case the management host. On selecting a host, a series of checks is performed to see if it meets the requirements.

Next, select which disks on the witness host are to be used for the cache tier and the capacity tier.

Click next then finish and watch the magic happen…but…wait…what are all these warnings…

As this is a home lab with unsupported hardware, we get warnings. We can disable them using RVC or the vSAN API. See the post here by fellow vExpert Florian Grehl on how to silence them.

NFS

Another option is to use NFS storage from an external NAS, or even a NAS appliance such as FreeNAS running on the physical ESXi hosts. In my home lab I'm using my existing QNAP NAS, so I don't have the memory penalty of running vSAN.

To use NFS, create a shared folder and set appropriate permissions on your NAS of choice. In the nested vCenter, right click the datacenter > Storage > New Datastore.

In the New Datastore wizard select NFS, then the NFS version. In the name and configuration section, enter a name for the datastore, the name of the shared folder on the NFS server, and finally the FQDN or IP of the NFS server.

In the host accessibility section, select all the hosts that require access to the NFS datastore.

Click next then finish and voila, we have an NFS datastore!
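Equivalently, the NFS mount can be scripted per host with pyVmomi; the NAS address, export path and datastore name below are placeholders.

```python
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

si = SmartConnect(host="vcsa.nested.lab", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

# NAS address, export path and datastore name below are placeholders.
nfs_spec = vim.host.NasVolume.Specification(
    remoteHost="10.0.10.5", remotePath="/share/nested-nfs",
    localPath="nas-nfs-01", accessMode="readWrite")

view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
for host in view.view:   # every nested host that should see the datastore
    host.configManager.datastoreSystem.CreateNasDatastore(spec=nfs_spec)
```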

And that completes getting the environment ready for the NSX-T and PKS install. Wow, that took longer than I initially planned. Next up, we install NSX-T.


2 thoughts on “PKS NSX-T Home Lab – Part 6: Config Nested ESXi Env”

  1. Hi Kieth,

    When you say..

    “Select VMkernel Network Adapter, select our vMotion port group, vmotion-20, select vMotion service, use static IPv4 settings and fill out the fields.”

    As a non-expert, I was concerned about the “Override default gateway” field. I checked it and set a gateway. For example, without setting it, the vMotion kernel adapters won’t show 10.0.20.1. I have no idea if this works or not. Maybe you can update the screenshot or add detail on what to create.

    -Andrew
