In the previous post in this series we installed Ops Manager and BOSH. In this post we are going to install the PKS tile. Rather than repeating the official docs verbatim, I will provide some complementary commentary, just as I did in the previous post.
Complete list of blog posts in this series…
Import PKS Product File
In the previous post, while downloading Ops Manager from PivNet, we also downloaded the PKS product file (pivotal-container-service-1.2.4-build.6.pivotal). Now we need to import it. From Ops Manager, click Import a Product, navigate to where you downloaded the PKS tile, and click Open; the upload will start. Progress can be monitored in the bottom left corner of the browser window, which shows the upload percentage. After a few minutes the PKS tile will appear in the left column, where we click the plus sign to add it to the Installation Dashboard.
The progress bar on the PKS tile is orange because not all required parameters are configured yet and, as can be seen below, it is missing a stemcell. We will configure the tile first and then add the missing stemcell.
Click the PKS tile on the Installation Dashboard. It will open on the Settings tab at the first settings page, Assign AZs and Networks.
Assign AZs and Networks
We will place the PKS API VM in the Management AZ and on the PKS Management Network.
Place singleton jobs in: Mgmt-AZ
Balance other jobs in: Mgmt-AZ
Service Network: PKS-MGMT
Just like when configuring the Ops Manager tile, clicking Save validates the parameters. If validation passes, a green banner will show; if it fails, a red banner will show.
Generate a certificate using a wildcard domain, e.g. *.lab.keithlee.ie
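The step above uses Ops Manager to generate the certificate for you. If you would rather create your own self-signed wildcard certificate and paste it in, openssl can produce an equivalent one. A minimal sketch using this lab's wildcard domain; the output filenames are illustrative:

```shell
# Self-signed wildcard cert for the lab domain; key is left unencrypted (-nodes)
# so it can be pasted straight into the tile's Private Key PEM field.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -subj "/CN=*.lab.keithlee.ie" \
  -keyout pks-api.key -out pks-api.crt

# Confirm the subject carries the wildcard before pasting it in
openssl x509 -in pks-api.crt -noout -subject
```

Paste the contents of pks-api.crt and pks-api.key into the Certificate PEM and Private Key PEM fields respectively.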
For API Hostname enter pks.lab.keithlee.ie
Don’t forget to create a DNS entry on your DNS server for this FQDN, pointing at the routable destination IP address from the DNAT rule we created, 10.0.80.4.
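For reference, the DNS entry is just an A record. A sketch in BIND zone-file syntax, using this lab's FQDN and DNAT address; on a Windows DNS server, the equivalent is a new host (A) record in the zone:

```
; A record mapping the PKS API FQDN to the DNAT destination address
pks.lab.keithlee.ie.    IN  A   10.0.80.4
```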
A plan defines a set of resource types used for deploying Kubernetes clusters. There are many configurable options, such as the number, size, and location of Master and Worker nodes.
For our environment, Plan 1 will be a small cluster consisting of a single Master and three Workers. Set Master/ETCD Availability Zones to AZ1 and Worker Availability Zones to AZ1, AZ2, and AZ3. Leave the rest at their defaults.
For our environment, Plan 2 will be a medium cluster consisting of three Masters and six Workers. Set both Master/ETCD Availability Zones and Worker Availability Zones to AZ1, AZ2, and AZ3. Leave the rest at their defaults.
We don’t need to activate a third plan for now. We can always activate and configure it at a later date.
Kubernetes Cloud Provider
Naturally select vSphere as your IaaS!
For vCenter Master Credentials, you can create a specific vCenter service account with specific permissions following the steps here. For home lab purposes, using the administrator account is fine if you so wish.
Enter the vCenter FQDN, Datacenter Name, and Datastore Name. For Stored VM Folder, enter the same name used when configuring the BOSH tile in a previous post, e.g. pks_vms.
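As an aside, the VM folder referenced above can also be created from the command line with govc instead of the vSphere client. A sketch, assuming a hypothetical datacenter name Lab-DC and govc already configured via its GOVC_URL/GOVC_USERNAME/GOVC_PASSWORD environment variables:

```shell
# Hypothetical inventory path; substitute your own datacenter name.
FOLDER=/Lab-DC/vm/pks_vms

# Create the folder if govc is available; otherwise fall back to the UI.
if command -v govc >/dev/null 2>&1; then
  govc folder.create "$FOLDER" || echo "govc failed; check GOVC_URL and credentials"
else
  echo "govc not installed; create the folder in the vSphere client instead"
fi
```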
View what parameters can be configured here, e.g. Syslog and vRLI, but we will leave them at their defaults for now.
Container Network Interface: NSX-T
NSX Manager hostname: Enter FQDN of our NSX Manager
NSX Manager Super User Principal Identity Certificate
– Certificate PEM: contents of pks-nsx-t-superuser.crt which we created in part 9 (Prepare NSX-T for PKS) of this blog series
– Private Key PEM: contents of pks-nsx-t-superuser.key which we created in part 9 (Prepare NSX-T for PKS) of this blog series
NSX Manager CA Cert: contents of nsx.crt which we created in part 9 (Prepare NSX-T for PKS) of this blog series
Disable SSL certificate verification: checked, as we created self-signed certs
NAT mode: checked as we are using NAT!
For Pods IP Block ID, Nodes IP Block ID, T0 Router ID, and Floating IP Pool ID, enter the IDs recorded in part 9 (Prepare NSX-T for PKS) of this blog series
Nodes DNS: Our home lab DNS server
vSphere Cluster Names: Name of Compute cluster in vCenter
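If you need to double-check the IDs entered above, they can be read back from the NSX REST API rather than the UI. A sketch, assuming a hypothetical NSX Manager FQDN and the admin password in $NSX_PASSWORD; -k is needed because of the self-signed cert:

```shell
# Hypothetical NSX Manager FQDN; substitute your own.
NSX_MANAGER=nsx.lab.keithlee.ie

# IP blocks (Pods and Nodes IP Block IDs)
curl -sk --max-time 5 -u "admin:$NSX_PASSWORD" \
  "https://$NSX_MANAGER/api/v1/pools/ip-blocks" || echo "NSX Manager not reachable"

# IP pools (Floating IP Pool ID)
curl -sk --max-time 5 -u "admin:$NSX_PASSWORD" \
  "https://$NSX_MANAGER/api/v1/pools/ip-pools" || echo "NSX Manager not reachable"

# Tier-0 logical routers (T0 Router ID)
curl -sk --max-time 5 -u "admin:$NSX_PASSWORD" \
  "https://$NSX_MANAGER/api/v1/logical-routers?router_type=TIER0" || echo "NSX Manager not reachable"
```

Each response includes an "id" field per object, which is what the tile expects.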
View what can be configured, but leave at defaults.
View what can be configured but leave at defaults. We will configure Wavefront in a future post.
Choose whether or not you wish to take part in VMware’s Customer Experience Improvement Program and Pivotal’s Telemetry Program.
As we are using NSX-T, we must turn on the NSX-T Validation errand to verify our config. Prior to PKS 1.2.4 the errand tagged the NSX-T objects, but it now uses the IDs, which allows us to have multiple Tier-0 routers. I will cover a multi-T0 topology in a separate post.
View what can be configured but leave at defaults.
With all the parameters filled out, return to the Installation Dashboard by clicking Installation Dashboard at the top of the window. The PKS tile will now have a solid orange bar rather than a green one, as it is still missing its stemcell. A stemcell is a versioned operating system image wrapped with IaaS-specific packaging. A typical stemcell contains a bare-minimum OS skeleton with a few common utilities pre-installed, a BOSH Agent, and a few configuration files to securely configure the OS by default. PKS uses the Ubuntu Xenial stemcell, which is based on Ubuntu 16.04.
Click Missing stemcell on the PKS tile, or STEMCELL LIBRARY at the top of the Ops Manager window. Here we can see that PKS requires stemcell 97.39 but that none is currently deployed. Click IMPORT STEMCELL, navigate to where you downloaded the stemcell in post 10 (Install Ops Man and BOSH), and click Open.
After import, click APPLY STEMCELL TO PRODUCTS.
The stemcell is now staged.
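As an aside, the stemcell upload can also be scripted with Pivotal's om CLI instead of the UI. A sketch, assuming a hypothetical Ops Manager endpoint, the admin password in $OM_PASSWORD, and the Xenial 97.39 stemcell filename from post 10:

```shell
# Hypothetical Ops Manager URL and stemcell path; substitute your own.
OM_TARGET=https://opsman.lab.keithlee.ie
STEMCELL=bosh-stemcell-97.39-vsphere-esxi-ubuntu-xenial-go_agent.tgz

# -k skips SSL validation for the self-signed Ops Manager cert.
if command -v om >/dev/null 2>&1; then
  om -t "$OM_TARGET" -u admin -p "$OM_PASSWORD" -k \
    upload-stemcell -s "$STEMCELL" || echo "upload failed; check connectivity and credentials"
else
  echo "om CLI not installed; use the Ops Manager UI instead"
fi
```

Once uploaded, Ops Manager applies the stemcell to the products that require it, just as APPLY STEMCELL TO PRODUCTS does in the UI.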
Click INSTALLATION DASHBOARD and we can see that the PKS tile progress bar is now solid green. We are now ready to proceed with the install.
On the Installation Dashboard, click REVIEW PENDING CHANGES on the far right, which opens the Review Pending Changes window, then click APPLY CHANGES.
The changes will now be applied; essentially, this deploys and configures the PKS API VM.
The install should take approximately 30 minutes, after which a prompt will confirm completion.
Verify Install of PKS
Back at the Installation Dashboard, click the PKS tile followed by the Status tab. Here we can see the IP address of our PKS API. Also, we don’t see any alerts, so that’s a good thing!
In post 9 (Prepare NSX-T for PKS) of this series, we configured a DNAT rule so that we can reach the PKS API’s internal address, 184.108.40.206, externally via 10.0.80.4.
Moment of truth: can we reach the PKS API from the “outside”? From any host (in my case an Ubuntu jump host), ping the FQDN for the PKS API that we configured earlier in this post. Success!
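Beyond ping, it is worth confirming both the DNS record and the API port itself. A sketch, using this lab's FQDN; the PKS API listens on 9021, and -k allows the self-signed cert. Each check falls through to a message rather than failing hard:

```shell
# FQDN configured earlier in this post; substitute your own lab domain.
PKS_API=pks.lab.keithlee.ie

# DNS first: the A record should resolve to the DNAT address, 10.0.80.4.
getent hosts "$PKS_API" || echo "no DNS record for $PKS_API yet"

# Then the API itself on its default port, 9021.
curl -sk --max-time 5 "https://$PKS_API:9021/" || echo "PKS API not reachable from here"
```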
Not really a verification step, but it is useful to know that you can identify which VM is which by using Custom Attributes in the vSphere client.
So that completes this post in the series. Congratulations, you now have PKS installed. In the next post we will create some Kubernetes clusters.