PKS NSX-T Home Lab – Part 9: Prepare NSX-T for PKS

Intro

In the previous post in this series we configured NSX-T, which included creating transport zones, transport nodes, uplink profiles, an edge cluster, and a tier-0 logical router, and prepping the ESXi hosts. In this post we will review the various logical topologies that can be deployed for PKS, then choose one and prepare NSX-T for its implementation. The preparation includes creating a T1 router and logical switch for the PKS Management network, DNAT and SNAT rules, static routes, IP Blocks, IP Pools, the NSX Manager API certificate, and the NSX Manager Superuser Principal Identity. Wow, that’s a lot to cover, so let’s get started.

Complete list of blog posts in this series…

Topology

As always we have options, lots of options, but options are a good thing. I will briefly cover the logical topology options, as I could probably write a whole blog series deep-diving into them.

No-NAT with PKS Management on VSS/VDS

In the diagram below (taken from the VMworld 2018 on-demand video library, kudos to Romain Decker) we have the PKS Management plane, which includes Ops Manager, BOSH, PKS, and Harbor, on a network external to NSX-T, i.e. on a standard or distributed switch, using routable address space (no NAT). The Kubernetes Nodes network(s) for the Kubernetes masters and workers also use routable address space but are on an NSX-T logical switch. The Pod namespace networks are on NSX-T logical switches and use private address space, so they will use NAT when needed. The Management network, where our vCenter, NSX-T Manager, and controllers reside, sits outside of NSX-T. Note that the Management network and the PKS Management plane can reside on the same network if desired. Also note, regarding the diagram from VMworld, that the “PKS Infrastructure” namespace network should be “PKS System”, and the “Kube Public” namespace network is missing. When I have the time I will draw up my own diagrams.

No-NAT with PKS Management on NSX-T Logical Switch

The next option is where the PKS Management plane network is on an NSX-T Logical Switch rather than a standard or distributed switch.

NAT with PKS Management on NSX-T Logical Switch

The next option is the same as above regarding where the PKS Management network resides, but now all networks use private address space and are NAT’d.

There is actually another topology, a hybrid of these, where the PKS Management network is no-NAT and the Node network is NAT’d.

In all of the above the Pod namespace networks are NAT’d, but since PKS 1.2.1 they can use routable address space (no-NAT) by using Network Profiles. The default for Pod networks is NAT.

So what topology will it be? There are pros and cons to all of them, but for this series I’m going to use the third one, where everything sits behind NSX-T and is NAT’d. Now that we have decided, let’s plan out our networks…

Network Plan

Before allocating CIDRs, be aware that the following are already reserved by PKS and Harbor: 172.17.0.0/16 through 172.22.0.0/16, and 10.100.200.0/24.

PKS Management: This small network is used to access PKS management components such as Ops Manager, BOSH, the PKS API, and the Harbor Registry. A /28 would suffice, but as we are using NAT and are not tight for address space, a /24 is more than enough. We will use 172.14.0.0/24.

Node Network: Each Kubernetes cluster deployed by PKS owns a /24 subnet, so to deploy multiple Kubernetes clusters we need something larger than a /24. The recommended size is a /16, which gives 256 /24s. That’s overkill for a home lab, but as we are NAT’ing in our environment there is no pain. We will use 172.15.0.0/16.

Pod Network: Each time a Kubernetes namespace is created, a /24 subnet is allocated. When a Kubernetes cluster is deployed by PKS, four namespaces are created by default, so we need something larger than a /24. The recommended size is a /16, which gives 256 /24s. Like above, that could be called overkill, but as we are NAT’ing there is no pain. We will use 172.16.0.0/16.

VIP / LB Network: This network provides the load balancing address space for each Kubernetes cluster created by PKS. It also provides IP addresses for Kubernetes API access and Kubernetes exposed services. A /24 is more than sufficient for our purposes. We will use a /24 out of our nested lab routed networks, which will be 10.0.80.0/24. The previous network we configured on our pfSense router was 10.0.70.0/24; I go in increments of 10 on the third octet as I find them easy to remember. It’s important to note that NSX-T will own this network, so it doesn’t need to be configured on our pfSense router. What does need to be configured is a static route to it, which we will create later. Rather than creating another network, we will use some IP addresses from this network for our NAT’ing; more on this later.

TEP Network: TEP, meaning Tunnel Endpoint, is for the GENEVE overlay network used by our transport nodes, which are our ESXi hosts and NSX-T Edges. We already created this network in a previous post, 10.0.50.0/24, marked as the overlay network.

With our networks planned, we will now start preparing NSX-T for our PKS install.

Logical Switch and Tier 1 Logical Router for PKS Management

First up is to create a logical switch and a tier-1 router for the PKS Management network and connect it to the tier-0 router.

In NSX Manager go to Networking > Switching > Switches > ADD. Enter a name for the logical switch and select our overlay transport zone.
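
If you prefer driving this from the API rather than the UI, the same object can be created with a single call to the NSX-T 2.x Management API. This is only a minimal sketch; the manager FQDN, credentials, and transport zone ID are placeholders for your own environment.

```python
import requests

NSX = "https://nsx-manager.lab.local"          # placeholder NSX Manager FQDN
AUTH = ("admin", "VMware1!")                   # placeholder credentials
OVERLAY_TZ_ID = "<overlay-transport-zone-id>"  # from Fabric > Transport Zones

# Create the PKS Management logical switch on the overlay transport zone
resp = requests.post(
    f"{NSX}/api/v1/logical-switches",
    auth=AUTH,
    verify=False,  # lab only: self-signed NSX Manager certificate
    json={
        "display_name": "ls-pks-mgmt",
        "transport_zone_id": OVERLAY_TZ_ID,
        "admin_state": "UP",
        "replication_mode": "MTEP",
    },
)
resp.raise_for_status()
print("Logical switch ID:", resp.json()["id"])
```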

Next, create a tier-1 router for PKS Management. In NSX Manager go to Networking > Routers > ADD > Tier-1 Router. Enter a name for the router and select our Edge cluster and Edge cluster members.

Next we create a router port on the T1 router. In NSX Manager go to Networking > Routers > t1-pks-mgmt > Configuration > Router Ports > ADD. Enter a name, select the logical switch we just created, enter a name for the switch port, and finally enter the gateway IP address for the PKS Management network we defined in the network planning section at the start of this post.

Next we need to configure the router to advertise its routes. In NSX Manager go to Networking > Routers > t1-pks-mgmt > Routing > Route Advertisement > Route Advertisement. Change Status to Enabled and Advertise All NSX Connected Routes to Yes, then click Save.
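
For reference, the last three steps (create the Tier-1 router, attach a downlink port with the gateway address, and enable route advertisement) can also be scripted against the NSX-T 2.x API. Treat this as a hedged sketch: the edge cluster ID and the logical switch ID are assumed to be filled in for your environment, and the 172.14.0.1/24 gateway simply follows the network plan above.

```python
import requests

NSX = "https://nsx-manager.lab.local"   # placeholder NSX Manager FQDN
AUTH = ("admin", "VMware1!")            # placeholder credentials
EDGE_CLUSTER_ID = "<edge-cluster-id>"
LS_PKS_MGMT_ID = "<ls-pks-mgmt-id>"     # ID of the logical switch created above

def post(path, body):
    r = requests.post(f"{NSX}{path}", auth=AUTH, json=body, verify=False)
    r.raise_for_status()
    return r.json()

# 1. Create the Tier-1 logical router for PKS Management
t1 = post("/api/v1/logical-routers", {
    "display_name": "t1-pks-mgmt",
    "router_type": "TIER1",
    "edge_cluster_id": EDGE_CLUSTER_ID,
})

# 2. Create a logical port on ls-pks-mgmt and a downlink router port on the T1
#    carrying the PKS Management gateway address (172.14.0.1/24)
lp = post("/api/v1/logical-ports", {
    "display_name": "lp-pks-mgmt",
    "logical_switch_id": LS_PKS_MGMT_ID,
    "admin_state": "UP",
})
post("/api/v1/logical-router-ports", {
    "display_name": "rp-pks-mgmt",
    "resource_type": "LogicalRouterDownLinkPort",
    "logical_router_id": t1["id"],
    "linked_logical_switch_port_id": {"target_type": "LogicalPort", "target_id": lp["id"]},
    "subnets": [{"ip_addresses": ["172.14.0.1"], "prefix_length": 24}],
})

# 3. Enable advertisement of NSX connected routes on the T1
adv_url = f"{NSX}/api/v1/logical-routers/{t1['id']}/routing/advertisement"
adv = requests.get(adv_url, auth=AUTH, verify=False).json()
adv.update({"enabled": True, "advertise_nsx_connected_routes": True})
requests.put(adv_url, auth=AUTH, json=adv, verify=False).raise_for_status()
print("Tier-1 router ID:", t1["id"])
```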

Next we need to connect the T1 router to the T0 router. In NSX Manager go to Networking > Routers > t1-pks-mgmt > Overview > Tier-0 Connect > Connect. Select our T0 router and click Connect.

In the Routers view we now have two routers, our T0 and our T1, with the T1 connected to the T0.

While here, record the ID of the T0 router as we will need it later when configuring the networking in the PKS tile.
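
If you would rather pull that ID from the API than copy it from the UI, a quick query of the logical routers does the job. Sketch only; the manager address and credentials are placeholders.

```python
import requests

NSX = "https://nsx-manager.lab.local"   # placeholder NSX Manager FQDN
AUTH = ("admin", "VMware1!")            # placeholder credentials

# List all logical routers and print the name and ID of each Tier-0 router
routers = requests.get(f"{NSX}/api/v1/logical-routers", auth=AUTH, verify=False).json()
for r in routers["results"]:
    if r["router_type"] == "TIER0":
        print(r["display_name"], r["id"])
```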

NAT Rules

As we are going to be using NAT, we need to create NAT rules, but don’t worry, it’s only a few; PKS will create the majority on the fly when needed. As the PKS Management plane is on a non-routable subnet, we need to create DNAT rules for Ops Manager, BOSH, PKS, and Harbor so we can reach them from “outside”, e.g. our jump host, and then SNAT rules so they can reach resources outside of NSX-T such as DNS, NTP, AD, the ESXi hosts, vCenter, and NSX-T Manager.

First let’s create a DNAT rule for Ops Manager. In NSX Manager go to Networking > Routers > NAT > T0 > +ADD (there are a few ways to get here!). Change Priority from 1024 to 1000. This is optional; I do it so that it sits at the top of the NAT table, above the auto-created NAT rules. Change Action from SNAT to DNAT. Enter an IP address from our routable VIP/LB network for the Destination IP and an IP address from the internal network we defined when creating ls-pks-mgmt earlier for the Translated IP. Again, where possible, I keep the last octet the same so the addresses are easy to remember.

Repeat the same for BOSH (optional), the PKS API, and Harbor (optional), where the translated IP for BOSH will be 172.14.0.3, for the PKS API 172.14.0.4, and for Harbor 172.14.0.5. BOSH is optional as we can access BOSH via Ops Manager later, but if we wish to access it directly from a jump host then we need a NAT address to get to it. Harbor is also optional as it’s not a requirement for PKS, but I will be installing it. See the DNATs created in the screenshot below.
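
The same DNAT rules can be created against the T0 router through the API. Note the external IPs below (10.0.80.2–.5) are an assumption that simply follows the keep-the-last-octet-the-same convention mentioned above; substitute whatever addresses you picked from the VIP/LB network.

```python
import requests

NSX = "https://nsx-manager.lab.local"   # placeholder NSX Manager FQDN
AUTH = ("admin", "VMware1!")            # placeholder credentials
T0_ID = "<t0-router-id>"                # recorded earlier

# external (VIP/LB) IP -> internal PKS Management IP; external IPs are assumed
DNATS = {
    "opsman": ("10.0.80.2", "172.14.0.2"),
    "bosh":   ("10.0.80.3", "172.14.0.3"),
    "pks":    ("10.0.80.4", "172.14.0.4"),
    "harbor": ("10.0.80.5", "172.14.0.5"),
}

for name, (external_ip, internal_ip) in DNATS.items():
    r = requests.post(
        f"{NSX}/api/v1/logical-routers/{T0_ID}/nat/rules",
        auth=AUTH,
        verify=False,
        json={
            "display_name": f"dnat-{name}",
            "action": "DNAT",
            "match_destination_network": external_ip,
            "translated_network": internal_ip,
            "rule_priority": 1000,   # above the auto-created rules
            "enabled": True,
        },
    )
    r.raise_for_status()
```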

That’s us able to get in using DNAT; now we have to be able to get out to reach external services such as DNS, NTP, the ESXi hosts, vCenter, and NSX-T Manager. Typically you would create a SNAT rule for each, but as this is a home lab we are going to create a single SNAT rule that can reach anything. In NSX Manager go to Networking > Routers > NAT > T0 > +ADD. Change Priority so it’s at the top of the table and enter the next available IP from our 10.0.80.0/24 network as the Translated IP. We are not entering any Destination IP, which is OK for home lab purposes; otherwise I would recommend locking these down so you have granular control.
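
And the matching catch-all SNAT rule via the API. Here 10.0.80.6 as the translated address (the "next available" IP) and 172.14.0.0/24 as the source network are assumptions based on the plan above; adjust to your own addressing.

```python
import requests

NSX = "https://nsx-manager.lab.local"   # placeholder NSX Manager FQDN
AUTH = ("admin", "VMware1!")            # placeholder credentials
T0_ID = "<t0-router-id>"

# SNAT anything sourced from the PKS Management network out via 10.0.80.6
# (no destination match, which is fine for a home lab)
resp = requests.post(
    f"{NSX}/api/v1/logical-routers/{T0_ID}/nat/rules",
    auth=AUTH,
    verify=False,
    json={
        "display_name": "snat-pks-mgmt",
        "action": "SNAT",
        "match_source_network": "172.14.0.0/24",
        "translated_network": "10.0.80.6",
        "rule_priority": 1010,
        "enabled": True,
    },
)
resp.raise_for_status()
```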

Below are our DNAT and SNAT rules.

VIP / LB / Floating IP Network Static Route

As mentioned in the Network Plan section above, NSX-T will own our VIP / LB / Floating IP network, not our lab pfSense router. Therefore we need to add a static route for this network to our pfSense router, pointing to the uplink port on our NSX-T T0 logical router. Once created, any IP in the 10.0.80.0/24 network will be routed to the NSX-T T0 to be handled.

The T0 router uplink port, 10.0.60.2, which we created in the previous blog post, can be found in NSX Manager > Networking > Routers > T0 > Configuration > Router Ports, as seen below.

To add a static route to our pfSense router we first need to create a gateway. In pfSense go to System > Routing > Gateways > Add. Select our uplink1 interface, enter a name, enter the IP address of the uplink port, and click Save.

Next we create the actual static route. In pfSense go to System > Routing > Static Routes > Add. Enter the destination network and its mask, select the gateway we just created, and optionally add a description.

With the static route now in place, any traffic destined for 10.0.80.0/24 will be routed to the T0 to be handled.

NSX Manager API Certificate

By default, the NSX Manager includes a self-signed API certificate with its hostname as the subject and issuer. Ops Manager performs strict certificate validation and expects the subject and issuer of the self-signed certificate to be either the IP address or the FQDN of the NSX Manager. As a result, we need to regenerate the self-signed certificate with the FQDN of the NSX Manager in the subject and issuer fields and then register the certificate with the NSX Manager using the NSX API. Follow the steps here to perform this.
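
Once you have generated the new certificate and key with the FQDN in the subject and issuer (the linked procedure uses openssl for that part), the API side of the procedure boils down to importing the certificate and then applying it to the manager. This is a hedged sketch of the NSX-T 2.x calls; the file names and manager FQDN are placeholders.

```python
import requests

NSX = "https://nsx-manager.lab.local"   # placeholder NSX Manager FQDN
AUTH = ("admin", "VMware1!")            # placeholder credentials

# Read the regenerated self-signed certificate and private key
# (placeholder file names from the openssl step of the linked procedure)
with open("nsx.crt") as f:
    cert_pem = f.read()
with open("nsx.key") as f:
    key_pem = f.read()

# Import the certificate and key into the NSX Manager trust store
imp = requests.post(
    f"{NSX}/api/v1/trust-management/certificates?action=import",
    auth=AUTH,
    verify=False,
    json={"pem_encoded": cert_pem, "private_key": key_pem},
)
imp.raise_for_status()
cert_id = imp.json()["results"][0]["id"]

# Apply the imported certificate to the NSX Manager API/UI endpoint
requests.post(
    f"{NSX}/api/v1/node/services/http?action=apply_certificate&certificate_id={cert_id}",
    auth=AUTH,
    verify=False,
).raise_for_status()
print("Applied certificate", cert_id)
```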

On completion of the procedure, refresh your browser session for NSX Manager and view the new certificate. As can be seen below, the subject and issuer are now the FQDN of the NSX Manager.

NSX Manager Superuser Principal Identity

A principal can be an NSX-T component or a third-party application such as PKS. With a principal identity, a principal can use the identity name to create an object and ensure that only an entity with the same identity name can modify or delete the object. A principal identity can only be created or deleted using the NSX-T API; however, you can view principal identities through the NSX Manager UI. The PKS API uses the NSX Manager superuser principal identity to communicate with NSX-T to create, delete, and modify networking resources for Kubernetes cluster nodes. Follow the steps here to create it.
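
The API portion of that procedure, after the certificate and key have been generated, looks roughly like the sketch below: import the certificate, then create the principal identity bound to it. The request fields vary slightly between NSX-T versions, so treat this as a sketch of the 2.3-era API rather than a drop-in script; the manager FQDN and credentials are placeholders.

```python
import requests

NSX = "https://nsx-manager.lab.local"   # placeholder NSX Manager FQDN
AUTH = ("admin", "VMware1!")            # placeholder credentials
PI_NAME = "pks-nsx-t-superuser"

# Import the certificate generated for the principal identity
with open("pks-nsx-t-superuser.crt") as f:
    cert_pem = f.read()

imp = requests.post(
    f"{NSX}/api/v1/trust-management/certificates?action=import",
    auth=AUTH,
    verify=False,
    json={"display_name": f"{PI_NAME}.crt", "pem_encoded": cert_pem},
)
imp.raise_for_status()
cert_id = imp.json()["results"][0]["id"]

# Create the superuser principal identity bound to that certificate
# (field names follow the NSX-T 2.3-era API and may differ in later versions)
pi = requests.post(
    f"{NSX}/api/v1/trust-management/principal-identities",
    auth=AUTH,
    verify=False,
    json={
        "display_name": PI_NAME,
        "name": PI_NAME,
        "node_id": PI_NAME,
        "role": "enterprise_admin",
        "certificate_id": cert_id,
    },
)
pi.raise_for_status()
print("Created principal identity:", pi.json()["id"])
```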

After completion of the steps you will see the new superuser in NSX Manager > System > Users.

The process also creates two files that we will need later when installing PKS: pks-nsx-t-superuser.crt and pks-nsx-t-superuser.key.

Pods IP Block

The Pods IP Block is used by the NSX-T Container Plug-in (NCP) to assign address space to Kubernetes pods through the Container Network Interface (CNI). In our network planning section above we assigned 172.16.0.0/16 for the Pods.

In NSX Manager, go to Networking > IPAM > ADD. Enter a name and the CIDR.

Nodes IP Block

The Nodes IP Block is used by PKS to assign address space to Kubernetes master and worker nodes when new clusters are deployed or a cluster increases its scale. In our Network Planning section above we assigned 172.15.0.0/16 for the Nodes.

In NSX Manager, go to Networking > IPAM > ADD. Enter a name and the CIDR.
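
Both IP Blocks can also be created in one go through the API, which conveniently prints the IDs we need to note down next. Sketch only; the manager address, credentials, and the display names are placeholders.

```python
import requests

NSX = "https://nsx-manager.lab.local"   # placeholder NSX Manager FQDN
AUTH = ("admin", "VMware1!")            # placeholder credentials

# Create the Nodes and Pods IP Blocks from the network plan above
for name, cidr in [("pks-nodes-ip-block", "172.15.0.0/16"),
                   ("pks-pods-ip-block", "172.16.0.0/16")]:
    r = requests.post(
        f"{NSX}/api/v1/pools/ip-blocks",
        auth=AUTH,
        verify=False,
        json={"display_name": name, "cidr": cidr},
    )
    r.raise_for_status()
    print(name, "ID:", r.json()["id"])
```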

With the two IP Blocks created, take note of the IDs of each as we will require them later during the install of PKS.

Floating IP Pool

The Floating IP Pool is the pool from which routable IP addresses are assigned to components. This network provides the load balancing address space for each Kubernetes cluster created by PKS. It also provides IP addresses for Kubernetes API access and Kubernetes exposed services, and is used when virtual IPs are created or when services are deployed. In the Network Planning section above we assigned 10.0.80.0/24 for this pool.

In NSX Manager, go to Inventory > Groups > IP Pools > ADD. Enter a name, then click ADD under Subnets to add a subnet. Enter an IP Range and the CIDR for the network. Below I start the range at 10.0.80.11, as we used some IP addresses from the start of the subnet for NAT.
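
The equivalent API call for the pool, with the range starting at .11 as above, is sketched below; the pool name and the end of the range (.250) are assumptions for illustration, and the call prints the pool ID we need next.

```python
import requests

NSX = "https://nsx-manager.lab.local"   # placeholder NSX Manager FQDN
AUTH = ("admin", "VMware1!")            # placeholder credentials

# Create the Floating IP Pool, skipping the addresses reserved for NAT
resp = requests.post(
    f"{NSX}/api/v1/pools/ip-pools",
    auth=AUTH,
    verify=False,
    json={
        "display_name": "pks-floating-ip-pool",
        "subnets": [{
            "cidr": "10.0.80.0/24",
            "allocation_ranges": [{"start": "10.0.80.11", "end": "10.0.80.250"}],
        }],
    },
)
resp.raise_for_status()
print("Floating IP Pool ID:", resp.json()["id"])
```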

As with the IP Blocks, take note of the ID of the IP Pool.

And that completes our NSX-T preparation for PKS! Next up, we start installing PKS, wuhoo, finally!
