Intro
In the previous post in this series we deployed the NSX-T Manager, Controller and Edge VMs. In this post we are going to configure them, which includes creating transport zones, transport nodes, uplink profiles, an edge cluster and a tier-0 logical router, prepping the ESXi hosts, and more.
Complete list of blog posts in this series…
Configure NSX-T
TEP IP Pool
Tunnel Endpoints (TEPs) are the source and destination IP addresses used in the external IP header to uniquely identify the hypervisor hosts originating and terminating the NSX-T encapsulation of overlay frames. These TEP IP addresses are pulled from a pool. The pool will sit on the overlay network we created previously on the pfSense router, 10.0.50.0/24.
In NSX Manager go to Inventory > Groups > IP Pools > Add. Enter a Name, IP Range and CIDR; the rest are optional. We are only entering a small range as we only have a few nodes that will participate in the overlay. See here for detailed info.
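If you prefer the API over clicking around, the same pool can be created via the NSX-T Management API. Below is a rough sketch using curl; the manager address and credentials are placeholders, so adjust to your own environment.
# Create the TEP IP pool (10.0.50.11 - 10.0.50.19 out of 10.0.50.0/24)
curl -k -u 'admin:<password>' -X POST -H 'Content-Type: application/json' \
  https://<nsx-manager>/api/v1/pools/ip-pools \
  -d '{
        "display_name": "TEP-pool",
        "subnets": [{
          "cidr": "10.0.50.0/24",
          "allocation_ranges": [{ "start": "10.0.50.11", "end": "10.0.50.19" }]
        }]
      }'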
Overlay Transport Zone
Transport zones dictate which hosts and, therefore, which VMs can participate in the use of a particular network. A transport zone does this by limiting the hosts that can “see” a logical switch – and, therefore, which VMs can be attached to the logical switch. A transport zone can span one or more host clusters. The overlay transport zone is used by both host transport nodes and NSX Edges.
In NSX Manager go
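For reference, a transport zone can also be created with a single Management API call. A sketch only — the transport zone and N-VDS names below are examples, so use whatever names you entered in the UI:
# Create the overlay transport zone and its associated N-VDS name
curl -k -u 'admin:<password>' -X POST -H 'Content-Type: application/json' \
  https://<nsx-manager>/api/v1/transport-zones \
  -d '{
        "display_name": "tz-overlay",
        "host_switch_name": "nvds-overlay",
        "transport_type": "OVERLAY"
      }'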
VLAN Transport Zone
The VLAN transport zone is used by the NSX Edge for its VLAN uplinks. When an NSX Edge is added to a VLAN transport zone, a VLAN N-VDS is installed on the NSX Edge. The N-VDS allows for virtual-to-physical packet flow by binding logical router uplinks and downlinks to physical NICs.
In NSX Manager go
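The API sketch is the same as for the overlay zone, only the transport type changes; nvds-vlan-uplink below matches the N-VDS name we will reference later when configuring the Edge, while the zone name itself is just an example:
# Create the VLAN transport zone backed by the nvds-vlan-uplink N-VDS
curl -k -u 'admin:<password>' -X POST -H 'Content-Type: application/json' \
  https://<nsx-manager>/api/v1/transport-zones \
  -d '{
        "display_name": "tz-vlan-uplink",
        "host_switch_name": "nvds-vlan-uplink",
        "transport_type": "VLAN"
      }'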
Uplink Profiles
An uplink profile defines policies for the links from hypervisor hosts to NSX-T Data Center logical switches or from NSX Edge nodes to top-of-rack switches. The settings defined by uplink profiles include teaming policies, active/standby links, the transport VLAN ID, and the MTU setting. Uplink profiles allow you to consistently configure identical capabilities for network adapters across multiple hosts or nodes. As the default profiles are not editable we will create new ones.
First we will create an uplink profile for our edges. In NSX Manager go
Next we will create an uplink profile for our ESXi hosts. In NSX Manager go
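For reference, here is roughly what creating one of these profiles — the edge-uplink-profile — looks like via the API; the host profile follows the same pattern. A sketch only: a single active uplink with no standby, an MTU of 1600 for the overlay, and a transport VLAN of 0, which is an assumption for this lab since the Edge's overlay vNIC already sits on a dedicated port group.
# Create an uplink profile with one active uplink and a 1600-byte MTU
curl -k -u 'admin:<password>' -X POST -H 'Content-Type: application/json' \
  https://<nsx-manager>/api/v1/host-switch-profiles \
  -d '{
        "resource_type": "UplinkHostSwitchProfile",
        "display_name": "edge-uplink-profile",
        "mtu": 1600,
        "transport_vlan": 0,
        "teaming": {
          "policy": "FAILOVER_ORDER",
          "active_list": [{ "uplink_name": "uplink1", "uplink_type": "PNIC" }]
        }
      }'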
Transport Edge Node
A transport node is a node that is capable of participating in an NSX-T overlay or NSX-T VLAN networking. Any node can serve as a transport node if it contains an N-VDS. See here for more info.
In NSX Manager go
On the N-VDS tab, we will first need to add the N-VDS for the overlay. Select the overlay N-VDS we created when creating the overlay transport zone, select our edge-uplink-profile, select Use IP Pool for IP Assignment, TEP-pool for IP Pool, and fp-eth0 for uplink1. fp-eth0 is the second NIC on the Edge VM, which is connected to our overlay port group. You can cross-reference the fast path interfaces using their MAC addresses. Don’t hit ADD yet, we still have to add an N-VDS for our VLAN uplink.
Click + ADD N-VDS. Select nvds-vlan-uplink from the drop-down, which we created earlier when creating the VLAN uplink transport zone, select the edge-uplink-profile we also created earlier, and fp-eth1 for uplink1. fp-eth1 is the third NIC on the Edge VM, which is connected to our uplink1 port group. Now you can finally click ADD!
Verify the Edge Transport Node is successfully created. It may take a few minutes for the configuration state and status to report Success and Up. You may have to hit refresh a few times.
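Rather than hammering the refresh button, you can also poll the Management API for the realized state. A sketch — the UUID is that of the transport node just created:
# Check the realized configuration state of the edge transport node
curl -k -u 'admin:<password>' \
  https://<nsx-manager>/api/v1/transport-nodes/<transport-node-uuid>/state
# a "state" of "success" means the N-VDS configuration has been applied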
Edge Cluster
Having a multi-node cluster of NSX Edges helps ensure that at least one NSX Edge is always available. In order to create a tier-0 logical router or a tier-1 router with stateful services such as NAT, load balancer, and so on, you must associate it with an NSX Edge cluster. Therefore, even if you have only one NSX Edge, it must still belong to an NSX Edge cluster to be useful. In our environment we only have one NSX Edge as they are monsters, consuming 8 vCPUs and 16 GB of memory each!
In NSX Manager go
Enter a name, select the nsx-edge-01-tn we created earlier in the Available column and click the > arrow to move it to the Selected column. Click ADD.
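The equivalent API call, as a sketch — the cluster name is an example and the UUID is that of the nsx-edge-01-tn transport node:
# Create an edge cluster containing our single edge transport node
curl -k -u 'admin:<password>' -X POST -H 'Content-Type: application/json' \
  https://<nsx-manager>/api/v1/edge-clusters \
  -d '{
        "display_name": "edge-cluster-01",
        "members": [{ "transport_node_id": "<nsx-edge-01-tn-uuid>" }]
      }'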
Tier-0 Router
An NSX-T logical router reproduces routing functionality in a virtual environment completely decoupled from underlying hardware. The tier-0 logical router provides an on and off gateway service between the logical and physical network.
In NSX Manager go
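As an API sketch, creating the T0 is a single POST — the router name is an example, and ACTIVE_STANDBY HA mode is an assumption that suits a lab running stateful services on a single edge:
# Create a tier-0 logical router backed by our edge cluster
curl -k -u 'admin:<password>' -X POST -H 'Content-Type: application/json' \
  https://<nsx-manager>/api/v1/logical-routers \
  -d '{
        "display_name": "t0-router",
        "router_type": "TIER0",
        "high_availability_mode": "ACTIVE_STANDBY",
        "edge_cluster_id": "<edge-cluster-uuid>"
      }'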
Next we connect the T0 router to a VLAN uplink logical switch which we have yet to create. In NSX Manager go
With the VLAN uplink logical switch created we can attach our T0 router to it. In NSX Manager go
Below it can be seen successfully created.
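For completeness, creating the uplink logical switch and attaching the T0 to it map to roughly three Management API calls — create the VLAN-backed logical switch, create a port on it, then create the T0 uplink router port linked to that port. This is a sketch only: the names are examples, VLAN 0 and the /24 prefix are assumptions for this lab, and 10.0.60.2 is the router port IP we use throughout.
# 1. Create the VLAN uplink logical switch in the VLAN transport zone
curl -k -u 'admin:<password>' -X POST -H 'Content-Type: application/json' \
  https://<nsx-manager>/api/v1/logical-switches \
  -d '{ "display_name": "ls-vlan-uplink", "transport_zone_id": "<vlan-tz-uuid>",
        "admin_state": "UP", "vlan": 0 }'
# 2. Create a logical port on that switch for the router to attach to
curl -k -u 'admin:<password>' -X POST -H 'Content-Type: application/json' \
  https://<nsx-manager>/api/v1/logical-ports \
  -d '{ "display_name": "t0-uplink-port", "logical_switch_id": "<ls-vlan-uplink-uuid>",
        "admin_state": "UP" }'
# 3. Create the T0 uplink router port with IP 10.0.60.2, linked to that logical port
curl -k -u 'admin:<password>' -X POST -H 'Content-Type: application/json' \
  https://<nsx-manager>/api/v1/logical-router-ports \
  -d '{ "resource_type": "LogicalRouterUpLinkPort", "display_name": "t0-uplink",
        "logical_router_id": "<t0-router-uuid>",
        "linked_logical_switch_port_id": { "target_type": "LogicalPort", "target_id": "<t0-uplink-port-uuid>" },
        "subnets": [{ "ip_addresses": ["10.0.60.2"], "prefix_length": 24 }],
        "edge_cluster_member_index": [0] }'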
Next we need to add a static route pointing to our pfSense router so our northbound traffic knows where to go! In a later blog post after this series I will show some HA configurations, such as two edges with an HA VIP, and also how to configure BGP with BFD.
In NSX Manager go
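Via the API, the default route looks roughly like the sketch below — note that 10.0.60.1 for pfSense's interface on the uplink network is an assumption for this lab, so substitute the address of your own pfSense router:
# Add a default static route on the T0 pointing north to pfSense
curl -k -u 'admin:<password>' -X POST -H 'Content-Type: application/json' \
  https://<nsx-manager>/api/v1/logical-routers/<t0-router-uuid>/routing/static-routes \
  -d '{ "network": "0.0.0.0/0",
        "next_hops": [{ "ip_address": "10.0.60.1" }] }'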
OK, that was a lot of configuration without actually checking if it works! With the above steps we should now be able to ping the router port on the tier-0 router, 10.0.60.2, from our jump hosts.
Wuhoo, success.
Host Preparation
For an ESXi host to be part of the NSX-T overlay, it must first be added to the NSX-T fabric. A fabric node is a node that has been registered with the NSX-T management plane and has the NSX-T modules installed. This process is known as host preparation. We want the ESXi hosts in our Compute Cluster prepared. You can prep each host individually or the whole cluster at once. When prepping the whole cluster, not only are the NSX-T modules/VIBs installed and the hosts added to the management plane, you can also make them transport nodes, all in the same action. When making prepared hosts transport nodes, you have to choose which vmnics of each ESXi host will be used for the overlay. Currently we have the first two, vmnic0 and vmnic1, as uplinks to our distributed switch, leaving vmnic2 and vmnic3 available for NSX-T.
In NSX Manager go
Put a check beside the Compute Cluster and then click CONFIGURE CLUSTER. Enable Automatically Install NSX and Automatically Create Transport Node. Fill out the Transport Node config as seen below using all the objects we created earlier and the two unassigned vmnics.
On clicking ADD, we will see the host prep process start.
After a few minutes the NSX-T VIBs will be installed on the ESXi hosts, the hosts added to the NSX-T management plane, and made transport nodes.
In NSX Manager go
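You can also double-check from an ESXi host itself that the VIBs landed — SSH to a prepared host (SSH on the nested hosts is enabled in the next section) and run:
# List the NSX-T VIBs that host preparation installed
esxcli software vib list | grep -i nsx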
Test Overlay Network
Now that we have our compute hosts and Edge participating in the overlay transport zone as transport nodes, we need to verify the overlay network is working. To verify this we will perform some pings from TEP to TEP with a payload size of 1572 bytes. We don’t use 1600 as the IP and ICMP headers add 28 bytes of overhead (1572 + 28 = 1600). One of the first tasks we performed in this blog post was to create a TEP IP Pool with a range of 10.0.50.11 to 10.0.50.19. TEPs are allocated from this pool. There are several ways to see which TEP IP is allocated to which transport node. Below is just one way, via ESXi.
By default, SSH is disabled on ESXi. To start the SSH service open the vSphere Client > Host and Clusters view > click a host in compute cluster > Configure tab > System > Services > SSH > Start
Open an SSH session to the nested ESXi host in the compute cluster on which we just enabled SSH, using the management IP address, the username root and the password set during install. Execute esxcfg-vmknic -l to see the IP address assigned to the TEP, denoted as vxlan. Below it can be seen that it is 10.0.50.12.
To get the TEP IP used by the NSX-T Edge we first need to enable SSH on it too. Well, we don’t actually have to enable SSH, but I’m not a fan of using the web console and there’s no harm in learning how to enable SSH on an NSX Edge! Open the vSphere Client > Host and Clusters view > click the NSX Edge VM > Launch Web Console / Launch Remote Console > Login with admin / <password>. Enter the following two commands, start service ssh and set service ssh start-on-boot, to enable SSH and keep it enabled across reboots.
With SSH now enabled on our Edge, open an SSH session to the management IP of the Edge and log in. Enter get vteps. Nice easy one, that!
Now that we know a TEP IP address on the Edge and on a nested ESXi host, we will ping one from the other to make sure we have no MTU issues along the path, as it’s a common issue in many installs I have performed over the years (#YAMI – Yet Another MTU Issue!).
From the ESXi host, ping the Edge TEP with the following command: vmkping ++netstack=vxlan 10.0.50.11 -d -s 1572. As explained above, we send a 1572-byte payload due to the 28 bytes of IP and ICMP header overhead; -d sets the don’t-fragment bit so an undersized MTU anywhere along the path will cause the ping to fail.
Success! We know MTU is configured correctly end to end and don’t have YAMI!
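It is worth repeating the test between the compute hosts themselves as well — for example, assuming a second host pulled 10.0.50.13 from the TEP pool (check with esxcfg-vmknic -l as before):
# Host-to-host TEP check, again with don't-fragment set and a 1572-byte payload
vmkping ++netstack=vxlan 10.0.50.13 -d -s 1572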
That completes configuring NSX-T. In the next post we will prepare NSX-T for PKS by adding a logical switch and router for the PKS Management network, add various IP Pools and Blocks for our Node and Pod Networks, and create some NAT rules.