PKS NSX-T Home Lab – Part 7: Install NSX-T

Intro

In the previous post in this series we deployed a vCenter to manage the nested ESXi hosts and configured a distributed switch and storage for the nested environment. In this post we are going to install NSX-T Data Center 2.3*, which includes NSX Manager, Controller and Edge. VMware’s documentation covers the NSX-T install very well, so I won’t be giving step-by-step instructions but rather pointers along the way. Also, for this series we are installing every component manually, i.e. the “hard way”, in an effort to understand the process. After this series we will look at automating the install using Concourse pipelines, i.e. the “easy way”.

Complete list of blog posts in this series…

* UPDATES

29th November 2018 – It’s highly recommended to install NSX-T 2.3.0.2 due to a bug where ESXi hosts may lose connectivity. See this VMware KB.
20th December 2018 – NSX-T 2.3.1 released, which includes the 2.3.0.2 bug fix. See the release notes here.

NSX Manager

First we are going to deploy NSX Manager to our management cluster. Note that this is deployed using the NSX unified appliance OVA, in case you are looking for an NSX Manager OVA. Prior to installing NSX Manager and the other components, create DNS entries for them (a hypothetical example is sketched below). Follow the instructions here to deploy NSX Manager. Take note to use the vSphere Web Client (Flex based) to deploy the OVA, not the vSphere Client (HTML5 based). Options to select during the process: the “small” configuration, as it only consumes 2 vCPU and 8GB memory rather than the default “medium” which consumes 4 vCPU and 16GB memory; thin provisioning; our datastore, be it local, vSAN or NFS; our management port group on VLAN 10; a role of “nsx-manager”; enable SSH; and allow root SSH logins.
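For reference, the DNS records might look something like the sketch below. The lab.local domain and 10.0.10.x addresses are purely placeholders for this example; use whatever hostnames and management-network IPs fit your lab, and it’s worth adding the matching reverse (PTR) records while you’re in there.

```
; Hypothetical forward A records for the NSX-T appliances
; (lab.local and the 10.0.10.x addresses are assumptions - substitute your own)
nsx-manager.lab.local.     IN  A  10.0.10.15
nsx-controller.lab.local.  IN  A  10.0.10.16
nsx-edge.lab.local.        IN  A  10.0.10.17
```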

Once NSX Manager is deployed, power on the VM. After a few minutes, open a browser to the IP or FQDN of NSX Manager and log in using the credentials provided during deployment. You will be presented with a EULA that you need to read line for line 😛 and accept. You will also be asked whether you wish to join the Customer Experience Improvement Program (CEIP), after which you will be presented with the NSX Overview screen.

You may or may not have noticed that NSX Manager has CPU and memory reservations by default. We don’t like reservations in our resource-limited home lab, so we remove them by setting them to zero.
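If you prefer not to click through Edit Settings, a quick PowerCLI sketch along these lines will zero the reservations (the vCenter and VM names are placeholders for whatever you called yours):

```
# Zero the CPU and memory reservations on the NSX Manager VM
# ("vcenter.lab.local" and "nsx-manager" are placeholder names)
Connect-VIServer vcenter.lab.local
Get-VM -Name nsx-manager |
    Get-VMResourceConfiguration |
    Set-VMResourceConfiguration -CpuReservationMhz 0 -MemReservationGB 0
```

The same snippet works later for the Controller and Edge VMs; just change the VM name.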

NSX Controller

There are several ways to deploy an NSX Controller: the NSX Manager UI, the NSX Manager API, the vSphere Client, or the CLI using ovftool. The easiest is the NSX Manager UI, but that only offers medium and large sized controllers. There is a way to “trick” the NSX Manager UI into deploying a small controller, but that’s not for this blog post! Therefore we will deploy the controller using the vSphere Client; follow the steps here. For home lab purposes a single NSX Controller is sufficient; there is no need for three of them unless you have lots of memory or want to exercise some failure scenarios of the NSX control plane.

During the deployment select the following: “small” as the configuration type, as this only consumes 2 vCPU and 8GB memory rather than the default “medium” which consumes 4 vCPU and 16GB memory; thin provisioning; our datastore, be it local, vSAN or NFS; our management port group on VLAN 10; enable SSH; allow root SSH logins; and leave the “Internal Properties” blank.

Once deployed, just like NSX Manager, set the CPU and Memory reservations to zero. Power on the VM.

As we deployed the Controller manually using the vSphere Client rather than the NSX Manager UI, we have to join the Controller to NSX Manager manually. Follow the steps here to perform this.
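If you want a feel for what that doc boils down to, it is roughly the CLI exchange below, run over SSH (or the console) as admin. The FQDN is a placeholder and the thumbprint is whatever the first command returns.

```
# On the NSX Manager CLI - grab the API certificate thumbprint
nsx-manager> get certificate api thumbprint

# On the NSX Controller CLI - join the management plane
# (you will be prompted for the NSX Manager admin password)
nsx-controller> join management-plane nsx-manager.lab.local username admin thumbprint <thumbprint-from-above>

# Back on the NSX Manager CLI - confirm the controller is connected
nsx-manager> get management-cluster status
```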

Once joined, we then have to initialize the Control Cluster, even though there is only one Controller. Follow the steps here to perform this. Once the Controller is added and the cluster initialized, it will show in the dashboard. Note the Manager node is amber as we deployed a small one, and alas memory usage is high for what little was made available to it.
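Again, this is roughly what the doc walks you through on the Controller CLI, with a shared secret of your choosing:

```
# On the (single) NSX Controller CLI
# Set a shared secret for the control cluster - the value is your choice
nsx-controller> set control-cluster security-model shared-secret secret VMware1!

# Initialize the cluster - required even with only one controller
nsx-controller> initialize control-cluster

# Verify the controller reports an active control cluster
nsx-controller> get control-cluster status
```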

NSX Edge

Next up is the NSX Edge. Again, there are several options for deploying an Edge: NSX Manager, the vSphere Client, ovftool or even PXE. We will deploy the Edge using NSX Manager, following the process here.

One pre-req for deploying an Edge using NSX Manager is that a Compute Manager has been added, so that NSX Manager can deploy the Edge to its destined location. In NSX Manager go to Fabric > Compute Managers > +ADD and fill out the prompts as below to add our vCenter.
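For the curious, the same registration can also be done against the NSX Manager REST API rather than the UI. A rough sketch is below; the hostnames, credentials and thumbprint (your vCenter certificate’s SHA-256 thumbprint) are all placeholders.

```
# Register vCenter as a compute manager via the NSX-T API
# (all hostnames, usernames, passwords and the thumbprint are placeholders)
curl -k -u admin:'<nsx-admin-password>' \
  -H 'Content-Type: application/json' \
  -X POST https://nsx-manager.lab.local/api/v1/fabric/compute-managers \
  -d '{
        "server": "vcenter.lab.local",
        "origin_type": "vCenter",
        "credential": {
          "credential_type": "UsernamePasswordLoginCredential",
          "username": "administrator@vsphere.local",
          "password": "<vcenter-password>",
          "thumbprint": "<vcenter-sha256-thumbprint>"
        }
      }'
```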

During the deployment, when prompted for the form factor size of the Edge, you must select Large. Large is a hard requirement for PKS and is validated during the PKS install. A Large Edge requires 8 vCPU and 16GB memory. While the memory footprint is large, it’s manageable; it’s the number of vCPUs that is the deal breaker in home lab environments. Typically only higher-end devices have physical CPUs with enough cores to support 8 vCPUs (or more).

For Configuration Deployment, select the compute manager we previously added, the management cluster for the cluster, and the NFS or vSAN datastore we previously created for the datastore. For Configure Ports, select Static for IP Assignment and provide a management IP and default gateway. Select our management port group on VLAN 10 for the management interface. For the Datapath Interfaces, which will use DPDK, select our overlay port group for the first interface, and the uplink1 and uplink2 port groups for the second and third interfaces as seen below.

Once deployed, the Edge will appear in NSX Manager in several places including under Edges as seen below. As we deployed the Edge using NSX Manager we don’t have to manually register/join the Edge with NSX Manager.
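If you want to double-check from the Edge’s side, SSH to it as admin and confirm it can see the Manager; the command below should list the NSX Manager as connected.

```
# On the NSX Edge CLI - verify registration with NSX Manager
nsx-edge> get managers
```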

Don’t forget, just like NSX Manager and Controller, set the CPU and Memory reservations to zero.

Our Hosts and Clusters view should now look like below.

And that completes this blog post. In the next post we will cover configuring NSX-T, which will include prepping hosts, creating transport nodes, transport zones, an edge cluster, profiles, T0s and a whole lot more.

4 thoughts on “PKS NSX-T Home Lab – Part 7: Install NSX-T”

  1. Hi Keith

    One question, in this post you changed the number of nodes in the mgmt cluster to 2 (was only 1 previously) and deployed the NSX Edge here. Usually I’d expect the Edge/Compute cluster to have the Edge deployed in it. Mgr and Ctrlr would be deployed in the management as you have done. Any reason for this setup? Maybe I should continue reading part 8…

    Thanks
    Matt

    1. I moved one ESXi host from the compute cluster to the mgmt/edge cluster as the mgmt/edge needed more horsepower in my env.

      1. Thanks Matt and Keith – just going through this now and saw the same. But I set up the vSAN part yesterday, so how would moving a host to the management cluster affect me? I can try to do this, but not being a vSAN user, I am afraid to mess things up.

  2. Hi Keith,

    Can you comment on the IPs to use for NSX components? I would assume that we are going to use the 10.0.10.x network, but it would be good to spell this out.

    -Andrew
