PKS NSX-T Home Lab – Part 8: Configure NSX-T


In the previous post in this series we deployed the NSX-T Manager, Controller and Edge VMs. In this post we are going to configure them, which includes creating transport zones, transport nodes, uplink profiles, an edge cluster, a tier-0 logical router, prepping the ESXi hosts, and more.

Complete list of blog posts in this series…

Configure NSX-T


Tunnel Endpoints (TEPs) are the source and destination IP addresses used in the external IP header to uniquely identify the hypervisor hosts originating and terminating the NSX-T encapsulation of overlay frames. These TEP IP addresses are pulled from a pool; the pool will use the overlay network we created previously on the pfSense router.

In NSX Manager go Inventory > Groups > IP Pools > Add. Enter a Name, IP Range and CIDR; the rest are optional. We only enter a small range as we only have a few nodes that will participate in the overlay. See here for detailed info.
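If you prefer automation, the same pool can be created via the NSX-T 2.x REST API with a POST to /api/v1/pools/ip-pools. A minimal sketch of the request body follows; the range and CIDR shown are placeholders for this lab and should be replaced with the overlay subnet you created on the pfSense router:

```python
import json

# Sketch of the body for POST https://<nsx-manager>/api/v1/pools/ip-pools.
# The range and CIDR are placeholders - substitute your own overlay subnet.
tep_pool = {
    "display_name": "TEP-pool",
    "subnets": [
        {
            "allocation_ranges": [
                # Small range, as only a few nodes participate in the overlay.
                {"start": "172.16.50.10", "end": "172.16.50.20"}
            ],
            "cidr": "172.16.50.0/24",
        }
    ],
}

print(json.dumps(tep_pool, indent=2))
```

Send this with Basic auth as the admin user; the response contains the pool's id, which the transport node configuration later references.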

Overlay Transport Zone

Transport zones dictate which hosts and, therefore, which VMs can participate in the use of a particular network. A transport zone does this by limiting the hosts that can “see” a logical switch – and, therefore, which VMs can be attached to the logical switch. A transport zone can span one or more host clusters. The overlay transport zone is used by both host transport nodes and NSX Edges.

In NSX Manager go Fabric > Transport Zones > Add. Enter a Name for the transport zone, a name for the N-VDS, N-VDS Mode as Standard, and Traffic Type as Overlay. See here for detailed info.

VLAN Transport Zone

The VLAN transport zone is used by the NSX Edge for its VLAN uplinks. When an NSX Edge is added to a VLAN transport zone, a VLAN N-VDS is installed on the NSX Edge. The N-VDS allows for virtual-to-physical packet flow by binding logical router uplinks and downlinks to physical NICs.

In NSX Manager go Fabric > Transport Zones > Add. Enter a name for the transport zone, a name for the N-VDS, N-VDS Mode as Standard, and Traffic Type as VLAN. See here for detailed info.
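Both transport zones can also be created through the NSX-T 2.x REST API with a POST to /api/v1/transport-zones. A sketch of the two request bodies is below; "tz-vlan-uplink" and "nvds-vlan-uplink" match the names used later in this post, while the overlay names are assumptions, so use whatever you entered in the UI:

```python
import json

# Sketch of the bodies for POST https://<nsx-manager>/api/v1/transport-zones.
tz_overlay = {
    "display_name": "tz-overlay",        # assumed name
    "host_switch_name": "nvds-overlay",  # assumed N-VDS name
    "transport_type": "OVERLAY",
}
tz_vlan = {
    "display_name": "tz-vlan-uplink",
    "host_switch_name": "nvds-vlan-uplink",
    "transport_type": "VLAN",
}

for tz in (tz_overlay, tz_vlan):
    print(json.dumps(tz, indent=2))
```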

Uplink Profiles

An uplink profile defines policies for the links from hypervisor hosts to NSX-T Data Center logical switches or from NSX Edge nodes to top-of-rack switches. The settings defined by uplink profiles include teaming policies, active/standby links, the transport VLAN ID, and the MTU setting. Uplink profiles allow you to consistently configure identical capabilities for network adapters across multiple hosts or nodes. As the default profiles are not editable we will create new ones.

First we will create an uplink profile for our edges. In NSX Manager go Fabric > Profiles > Uplink Profiles > Add. Enter a Name, Teaming Policy as Failover Order, Active Uplinks as uplink-1, Transport VLAN as 0 (as we are tagging at port group level for our edge), and MTU of 1600.

Next we will create an uplink profile for our ESXi hosts. In NSX Manager go Fabric > Profiles > Uplink Profiles > ADD. Enter a Name, Teaming Policy as Failover Order, Active Uplinks as uplink-1, Standby Uplinks as uplink-2, Transport VLAN as 50 (as the ESXi host vmnic is not attached to any port group we need to tag it here), and MTU of 1600.
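The same two profiles can be expressed as NSX-T 2.x REST API bodies for POST /api/v1/host-switch-profiles. A sketch, assuming "host-uplink-profile" as the name for the ESXi profile ("edge-uplink-profile" matches the name used later in this post):

```python
import json

# Sketch of the bodies for POST https://<nsx-manager>/api/v1/host-switch-profiles.
edge_uplink_profile = {
    "resource_type": "UplinkHostSwitchProfile",
    "display_name": "edge-uplink-profile",
    "teaming": {
        "policy": "FAILOVER_ORDER",
        "active_list": [{"uplink_name": "uplink-1", "uplink_type": "PNIC"}],
    },
    "transport_vlan": 0,  # edge traffic is tagged at the port group level
    "mtu": 1600,
}

host_uplink_profile = {
    "resource_type": "UplinkHostSwitchProfile",
    "display_name": "host-uplink-profile",  # assumed name
    "teaming": {
        "policy": "FAILOVER_ORDER",
        "active_list": [{"uplink_name": "uplink-1", "uplink_type": "PNIC"}],
        "standby_list": [{"uplink_name": "uplink-2", "uplink_type": "PNIC"}],
    },
    "transport_vlan": 50,  # host vmnics are not on a port group, so tag here
    "mtu": 1600,
}

print(json.dumps([edge_uplink_profile, host_uplink_profile], indent=2))
```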

Transport Edge Node

A transport node is a node that is capable of participating in an NSX-T overlay or NSX-T VLAN networking. Any node can serve as a transport node if it contains an N-VDS. See here for more info.

In NSX Manager go Fabric > Nodes > Transport Nodes > ADD. Enter a Name, select our NSX Edge from the drop down, select the two transport zones we created earlier and click the > arrow so they are added to the Selected box. Don’t click ADD yet, we must fill out the N-VDS tab.

On the N-VDS tab, we first need to add the N-VDS for the overlay. Select the overlay N-VDS we created when creating the overlay transport zone, select our edge-uplink-profile, select Use IP Pool for IP Assignment, TEP-pool for IP Pool, and fp-eth0 for uplink1. fp-eth0 is the second NIC on the Edge VM, which is connected to our overlay port group. You can cross-reference the fast path interfaces using the MAC address. Don't hit ADD yet, we still have to add an N-VDS for our VLAN uplink.

Click + ADD N-VDS. Select nvds-vlan-uplink from the drop-down, which we created earlier when creating the VLAN uplink transport zone, select edge-uplink-profile which we also created earlier, and fp-eth1 for uplink1. fp-eth1 is the third NIC on the Edge VM, which is connected to our uplink1 port group. Now you can finally click ADD!
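The two-N-VDS edge transport node built above can also be described as a single NSX-T 2.x REST body for POST /api/v1/transport-nodes. This is only a sketch of the shape; the UUIDs in angle brackets (the Edge fabric node, transport zones, uplink profile, and IP pool) are placeholders you would look up from earlier API responses:

```python
import json

# Sketch of the body for POST https://<nsx-manager>/api/v1/transport-nodes.
# All <...> values are placeholder UUIDs from your own environment.
edge_tn = {
    "display_name": "nsx-edge-01-tn",
    "node_id": "<edge-fabric-node-uuid>",
    "transport_zone_endpoints": [
        {"transport_zone_id": "<tz-overlay-uuid>"},
        {"transport_zone_id": "<tz-vlan-uplink-uuid>"},
    ],
    "host_switch_spec": {
        "resource_type": "StandardHostSwitchSpec",
        "host_switches": [
            {
                # Overlay N-VDS: TEP address comes from the IP pool.
                "host_switch_name": "nvds-overlay",  # assumed name
                "host_switch_profile_ids": [
                    {"key": "UplinkHostSwitchProfile", "value": "<edge-uplink-profile-uuid>"}
                ],
                "ip_assignment_spec": {
                    "resource_type": "StaticIpPoolSpec",
                    "ip_pool_id": "<tep-pool-uuid>",
                },
                "pnics": [{"device_name": "fp-eth0", "uplink_name": "uplink-1"}],
            },
            {
                # VLAN uplink N-VDS: no TEP needed.
                "host_switch_name": "nvds-vlan-uplink",
                "host_switch_profile_ids": [
                    {"key": "UplinkHostSwitchProfile", "value": "<edge-uplink-profile-uuid>"}
                ],
                "pnics": [{"device_name": "fp-eth1", "uplink_name": "uplink-1"}],
            },
        ],
    },
}

print(json.dumps(edge_tn, indent=2))
```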

Verify the Edge transport node is successfully created. It may take a few minutes for the configuration state and status to report Success and Up; you may have to hit refresh a few times.

Edge Cluster

Having a multi-node cluster of NSX Edges helps ensure that at least one NSX Edge is always available. In order to create a tier-0 logical router, or a tier-1 router with stateful services such as NAT, load balancer, and so on, you must associate it with an NSX Edge cluster. Therefore, even if you have only one NSX Edge, it must still belong to an NSX Edge cluster to be useful. In our environment we only have one NSX Edge, as they are monsters consuming 8 vCPU and 16GB of memory each!

In NSX Manager go Fabric > Nodes > Edge Clusters > ADD. Enter a name, select our nsx-edge-01-tn we created earlier in the Available column and click the > arrow to move it to the Selected column. Click ADD.
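The edge cluster has one of the simplest API shapes of the lot. A sketch of the body for POST /api/v1/edge-clusters, assuming "edge-cluster-1" as a name and a placeholder UUID for the transport node:

```python
import json

# Sketch of the body for POST https://<nsx-manager>/api/v1/edge-clusters.
# The member references the nsx-edge-01-tn transport node by its UUID.
edge_cluster = {
    "display_name": "edge-cluster-1",  # assumed name
    "members": [
        {"transport_node_id": "<nsx-edge-01-tn-uuid>"}  # placeholder UUID
    ],
}

print(json.dumps(edge_cluster, indent=2))
```

A second edge would simply be another entry in the members list.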

Tier-0 Router

An NSX-T logical router reproduces routing functionality in a virtual environment completely decoupled from underlying hardware. The tier-0 logical router provides an on and off gateway service between the logical and physical network.

In NSX Manager go Networking > Routers > ADD. Enter a Name, select our Edge Cluster, and select Active-Standby for HA mode. Click ADD.

Next we connect the T0 router to a VLAN uplink logical switch, which we have yet to create. In NSX Manager go Networking > Switching > ADD. Enter a name, select tz-vlan-uplink as the Transport Zone, and VLAN as 0, as we are tagging at the port group level.

With the VLAN uplink logical switch created we can attach our T0 router to it. In NSX Manager go Networking > Routers > T0-LR > Configuration > Router Ports > ADD. Enter a Name, select our Edge transport node, select our VLAN uplink logical switch, enter a name for the Switch Port, and finally give it an IP address in CIDR format: the next available address on the uplink-1 network we created on the pfSense.

Below we can see it was created successfully.

Next we need to add a static route pointing to our pfSense router so our northbound traffic knows where to go! In a later blog post after this series I will show some HA configurations, such as two edges with an HA VIP, and also how to configure BGP with BFD.

In NSX Manager go Networking > Routers > T0-LR > Routing > Static Routes > ADD. Enter the Network, the Next Hop (our gateway for the uplink-1 network on the pfSense router), and select the Logical Router Port we created earlier.
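The same route can be pushed through the API with a POST to /api/v1/logical-routers/&lt;router-id&gt;/routing/static-routes. A sketch, assuming a default route (0.0.0.0/0) and with a placeholder for the pfSense uplink-1 gateway address:

```python
import json

# Sketch of the body for
# POST https://<nsx-manager>/api/v1/logical-routers/<T0-LR-uuid>/routing/static-routes.
static_route = {
    "display_name": "default-route",          # assumed name
    "network": "0.0.0.0/0",                   # assuming a default route northbound
    "next_hops": [
        {
            "ip_address": "<pfsense-uplink-1-gateway>",  # placeholder
            "administrative_distance": 1,
        }
    ],
}

print(json.dumps(static_route, indent=2))
```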

OK, that was a lot of configuration without actually checking if it works! With the above steps we should now be able to ping the router port on the tier-0 router from our jump host.

Wuhoo, success.

Host Preparation

For an ESXi host to be part of the NSX-T overlay, it must first be added to the NSX-T fabric. A fabric node is a node that has been registered with the NSX-T management plane and has NSX-T modules installed. This process is known as host preparation. We want the ESXi hosts in our Compute Cluster prepared. You can prep each host individually or the whole cluster. When prepping the whole cluster, not only are the NSX-T modules/VIBs installed and the hosts added to the management plane, you can also make them transport nodes all in the same action. When making prepared hosts into transport nodes, you have to choose which vmnics of the ESXi host will be used for the overlay. Currently we have the first two, vmnic0 and vmnic1, as uplinks to our distributed switch, leaving vmnic2 and vmnic3 available for NSX-T.

In NSX Manager go Fabric > Nodes > Hosts. Here we can see our clusters and the ESXi hosts in them, where the deployment status is Not Prepared.

Put a check beside the Compute Cluster and then click CONFIGURE CLUSTER. Enable Automatically Install NSX and Automatically Create Transport Node. Fill out the Transport Node config as seen below using all the objects we created earlier and the two unassigned vmnics.

On clicking ADD, we will see the host prep process start.

After a few minutes the NSX-T VIBs will be installed on the ESXi hosts, the hosts added to the NSX-T management plane, and made transport nodes.

In NSX Manager go Fabric > Nodes > Transport Nodes. Here we see the ESXi hosts from the compute cluster are now transport nodes, alongside our Edge which is also a transport node.

Test Overlay Network

Now that we have our compute hosts and Edge as transport nodes participating in the overlay transport zone, we need to verify the overlay network is working. To verify this we will perform some pings from TEP to TEP with a packet size of 1572 bytes. We don't use 1600 as there is an overhead of 28 bytes for the IP and ICMP headers. One of the first tasks we performed in this blog post was to create a TEP IP Pool; TEPs are allocated from this pool. There are several ways to see which TEP IP is allocated to which transport node. Below is just one way, via ESXi.
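The 1572-byte figure comes from simple header arithmetic, which can be sanity-checked:

```python
# An ICMP echo payload must fit inside the MTU together with the
# IPv4 header (20 bytes) and the ICMP header (8 bytes).
IPV4_HEADER = 20
ICMP_HEADER = 8

def max_ping_payload(mtu: int) -> int:
    """Largest -s value that still fits in a single unfragmented packet."""
    return mtu - IPV4_HEADER - ICMP_HEADER

print(max_ping_payload(1600))  # → 1572
```

The same formula gives the familiar 1472 bytes when testing a standard 1500-byte MTU path.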

By default, SSH is disabled on ESXi. To start the SSH service, open the vSphere Client > Hosts and Clusters view > click a host in the compute cluster > Configure tab > System > Services > SSH > Start.

Open an SSH session to the nested ESXi host in the compute cluster on which we just enabled SSH, using the management IP address, a username of root, and the password set during install. Execute esxcfg-vmknic -l to see the IP address assigned to the TEP, denoted as vxlan. It can be seen below.

To get the TEP IP used by the NSX-T Edge we first need to enable SSH on it too. Well, we don't actually have to enable SSH, but I'm not a fan of using the web console and there's no harm in learning how to enable SSH on an NSX Edge! Open the vSphere Client > Hosts and Clusters view > click the NSX Edge VM > Launch Web Console / Launch Remote Console > log in with admin / <password>. Enter the two following commands, start service ssh and set service ssh start-on-boot, to enable SSH and ensure it's always enabled.

With SSH now enabled on our Edge, open an SSH session to the management IP of the Edge and log in. Enter get vteps. Nice easy one, that!

Now that we know a TEP IP address on the Edge and on a nested ESXi host, we will ping one from the other to make sure we have no MTU issues along the path, as it's a common issue in many installs I have performed over the years (#YAMI – Yet Another MTU Issue!).

From the ESXi host, ping the Edge TEP with the following command: vmkping ++netstack=vxlan -d -s 1572 <Edge-TEP-IP>. As explained above, we ping with a payload size of 1572 bytes due to the 28 bytes of IP and ICMP header overhead; the -d flag sets the don't-fragment bit so an undersized MTU fails loudly instead of fragmenting.

Success! We know MTU is configured correctly end to end and don’t have YAMI!

That completes configuring NSX-T. In the next post we will prepare NSX-T for PKS by adding a logical switch and router for the PKS Management network, add various IP Pools and Blocks for our Node and Pod Networks, and create some NAT rules.
