Intro
In the previous post in this series we configured and deployed PKS. Now that we have a PKS API endpoint, we are going to create a Kubernetes cluster.
Complete list of blog posts in this series…
Create a User
Before we can create a K8s cluster, we need to create a user. Users in PKS are created and managed with User Account and Authentication (UAA), and we interact with the UAA server using the UAA Command Line Interface (UAAC). You can run UAAC commands from the Ops Manager VM, or install UAAC on your local workstation, jump host, etc.
To install UAAC on our jump, run the following commands…
apt -y install ruby ruby-dev gcc build-essential g++
gem install cf-uaac
As we are going to be communicating with the PKS API, we need the certificate we generated for it when performing the install. In Ops Manager, click the PKS tile, then PKS API in the left column of the Settings tab. Copy the cert, and then on the jump create a new file containing the certificate just copied. Take note of the PKS API FQDN we configured previously.
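For example, a minimal way to save the cert on the jump (the filename pks.crt is just my choice, and is the name used in the commands that follow):

root@ubuntu-jump:~# vi pks.crt    # paste the certificate copied from Ops Manager and save
root@ubuntu-jump:~# head -1 pks.crt
-----BEGIN CERTIFICATE-----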
Next we need to retrieve the PKS UAA admin credentials. In Ops Manager, click the PKS tile, followed by the Credentials tab. Click Link to Credential for Pks Uaa Management Admin Client and record the secret returned.
Using the UAA command line tool, target the PKS API using the PKS API FQDN and the PKS API cert.
root@ubuntu-jump:~# uaac target https://pks.lab.keithlee.ie:8443 --ca-cert pks.crt
Target: https://pks.lab.keithlee.ie:8443
Context: admin, from client admin
Next, log in using the secret retrieved from Ops Manager above.
root@ubuntu-jump:~# uaac token client get admin -s fy6Ps5gTFDHMQM-KL2MJbIl5HbdQBer5
Successfully fetched token via client credentials grant.
Target: https://pks.lab.keithlee.ie:8443
Context: admin, from client admin
Now that we are logged in, we can create a user.
root@ubuntu-jump:~# uaac user add keith --emails keith@keithlee.ie -p VMware1!
user account successfully added
Then assign the user a scope. There are two options: 1) pks.clusters.admin, where the user can create and access all clusters, and 2) pks.clusters.manage, where the user can create and access only their own clusters.
root@ubuntu-jump:~# uaac member add pks.clusters.admin keith
success
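If you instead want the more restrictive scope, the same uaac member add pattern applies; a sketch for the same user:

root@ubuntu-jump:~# uaac member add pks.clusters.manage keith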
Create a Kubernetes Cluster
Now we have a user with permissions to create a Kubernetes cluster. But how do we create a cluster? We use the PKS CLI, which can be downloaded from PivNet. While there, also download the Kubectl CLI. There are Linux, Windows, and macOS flavors of each. Copy both the PKS and Kubectl CLIs to your jump.
Add execute permissions and move to /usr/local/bin
root@ubuntu-jump:~# chmod +x pks-linux-amd64-1.2.4-build.1
root@ubuntu-jump:~# mv pks-linux-amd64-1.2.4-build.1 /usr/local/bin/pks
root@ubuntu-jump:~# chmod +x kubectl-linux-amd64-1.11.5
root@ubuntu-jump:~# mv kubectl-linux-amd64-1.11.5 /usr/local/bin/kubectl
Now that we have the PKS CLI on our jump, let's log in using the PKS API FQDN, the user we created earlier, and the PKS API cert. Note that you can, if you wish, skip cert verification by using -k instead of --ca-cert <cert>.
root@ubuntu-jump:~# pks login -a pks.lab.keithlee.ie -u keith -p VMware1! --ca-cert pks.crt

API Endpoint: pks.lab.keithlee.ie
User: keith
Now that we are authenticated, let's create a K8s cluster. Here we are creating a cluster called pks-cluster-1, whose Kubernetes API will be available via pks-cluster-1.lab.keithlee.ie, using the small plan we defined in the previous post: 1x master and 3x workers by default. The number of workers can be specified with -n, --num-nodes, up to the maximum defined in the plan, which was 50.
root@ubuntu-jump:~# pks create-cluster pks-cluster-1 --external-hostname pks-cluster-1.lab.keithlee.ie --plan small

Name:                     pks-cluster-1
Plan Name:                small
UUID:                     910f1410-9c48-435c-8f8a-a775c6688021
Last Action:              CREATE
Last Action State:        in progress
Last Action Description:  Creating cluster
Kubernetes Master Host:   pks-cluster-1.lab.keithlee.ie
Kubernetes Master Port:   8443
Worker Nodes:             3
Kubernetes Master IP(s):  In Progress
Network Profile Name:

Use 'pks cluster pks-cluster-1' to monitor the state of your cluster
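As a sketch of overriding the default worker count with --num-nodes, a create might look like this (the second cluster name and hostname here are hypothetical):

root@ubuntu-jump:~# pks create-cluster pks-cluster-2 --external-hostname pks-cluster-2.lab.keithlee.ie --plan small --num-nodes 5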
pks clusters lists all clusters
root@ubuntu-jump:~# pks clusters

Name           Plan Name  UUID                                  Status       Action
pks-cluster-1  small      910f1410-9c48-435c-8f8a-a775c6688021  in progress  CREATE
pks cluster <cluster-name> provides further detail. Here we see our cluster is being created.
root@ubuntu-jump:~# pks cluster pks-cluster-1

Name:                     pks-cluster-1
Plan Name:                small
UUID:                     910f1410-9c48-435c-8f8a-a775c6688021
Last Action:              CREATE
Last Action State:        in progress
Last Action Description:  Instance provisioning in progress
Kubernetes Master Host:   pks-cluster-1.lab.keithlee.ie
Kubernetes Master Port:   8443
Worker Nodes:             3
Kubernetes Master IP(s):  In Progress
Network Profile Name:
If we look in our vSphere Client while the K8s cluster is being created, we see four new VMs in our Mgmt AZ. These are not our K8s cluster VMs but the BOSH compilation VMs; BOSH compiles packages on the fly. This can be confirmed by checking the Custom Attributes.
After a while, the pks cluster <cluster-name> command will report that the cluster has been created. Note that rather than checking manually, you could use the watch command (a sketch follows the output below). You will also notice the K8s API IP for the cluster, 10.0.80.11, is from our NSX-T Floating IP pool. If you wish, you can create an entry in your DNS for pks-cluster-1.lab.keithlee.ie resolving to 10.0.80.11.
root@ubuntu-jump:~# pks cluster pks-cluster-1

Name:                     pks-cluster-1
Plan Name:                small
UUID:                     910f1410-9c48-435c-8f8a-a775c6688021
Last Action:              CREATE
Last Action State:        succeeded
Last Action Description:  Instance provisioning completed
Kubernetes Master Host:   pks-cluster-1.lab.keithlee.ie
Kubernetes Master Port:   8443
Worker Nodes:             3
Kubernetes Master IP(s):  10.0.80.11
Network Profile Name:
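As mentioned above, watch can do the polling for you; a sketch (the 30-second interval is just my choice):

root@ubuntu-jump:~# watch -n 30 pks cluster pks-cluster-1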
If we look in our vSphere Client, we can now see 4x VMs across our AZs (AZ1, AZ2, and AZ3): the master node in AZ1 and the 3x worker nodes spread across the AZs, just as we defined in our small plan. We also see that the master node's IP address is from the Nodes IP Block we defined in NSX-T.
So now we have a K8s cluster, and naturally we want to use it! For that we use a tool called kubectl, pronounced by some as kube-cuddle, awww cute! Earlier in this post we downloaded it and added it to our PATH on our Ubuntu jump. Before we can issue kubectl commands we need credentials, which can be retrieved using the PKS command pks get-credentials <cluster-name>.
root@ubuntu-jump:~# pks get-credentials pks-cluster-1

Fetching credentials for cluster pks-cluster-1.
Context set for cluster pks-cluster-1.

You can now switch between clusters by using:
$kubectl config use-context <cluster-name>
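So for our cluster the context switch looks like this (get-credentials has already set this context for us, but the same command is how you would move between multiple clusters):

root@ubuntu-jump:~# kubectl config use-context pks-cluster-1
Switched to context "pks-cluster-1".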
Let's test that we can successfully communicate with the K8s cluster using the retrieved credentials by running the kubectl cluster-info command.
root@ubuntu-jump:~# kubectl cluster-info
Kubernetes master is running at https://pks-cluster-1.lab.keithlee.ie:8443
Heapster is running at https://pks-cluster-1.lab.keithlee.ie:8443/api/v1/namespaces/kube-system/services/heapster/proxy
KubeDNS is running at https://pks-cluster-1.lab.keithlee.ie:8443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
kubernetes-dashboard is running at https://pks-cluster-1.lab.keithlee.ie:8443/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy
monitoring-influxdb is running at https://pks-cluster-1.lab.keithlee.ie:8443/api/v1/namespaces/kube-system/services/monitoring-influxdb/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
I’m not going to run through all the kubectl commands, but another useful one is kubectl get nodes -o wide. Here we can see details of the workers, such as the K8s version and the IP address of each.
root@ubuntu-jump:~# kubectl get nodes -o wide
NAME                                   STATUS    ROLES     AGE       VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
9af0e2c7-4719-47cb-85db-23a8238210fd   Ready     <none>    1h        v1.11.5   172.15.0.3    172.15.0.3    Ubuntu 16.04.5 LTS   4.15.0-42-generic   docker://17.12.1-ce
a3add38b-d527-4f8d-ae52-b8a9e673f344   Ready     <none>    1h        v1.11.5   172.15.0.5    172.15.0.5    Ubuntu 16.04.5 LTS   4.15.0-42-generic   docker://17.12.1-ce
b5cc05e7-4c1e-49f1-9475-bfd9beb93a07   Ready     <none>    1h        v1.11.5   172.15.0.4    172.15.0.4    Ubuntu 16.04.5 LTS   4.15.0-42-generic   docker://17.12.1-ce
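Another quick sanity check is listing the namespaces a freshly created PKS cluster starts with; we will meet these four again in the NSX-T section below (the AGE values shown here are illustrative):

root@ubuntu-jump:~# kubectl get namespaces
NAME          STATUS    AGE
default       Active    1h
kube-public   Active    1h
kube-system   Active    1h
pks-system    Active    1h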
So as you can see, it's literally that easy to create a K8s cluster with one simple command.
NSX-T Objects
When we create a K8s cluster using the PKS API, there is a lot of “magic” happening in the background. I’m not going to go into detail on what happens under the covers in this post; you can attend my VMware PKS workshop for that ;). But let's have a look at what objects are created in NSX-T.
In NSX Manager, go to Networking > Routers, and we can see we have 6x new tier-1 routers. Prior to K8s cluster creation, we only had T0-LR and t1-pks-mgmt.
The new object names include the UUID of the PKS K8s cluster, 910f1410-9c48-435c-8f8a-a775c6688021, which is the same UUID returned by pks clusters and pks cluster <cluster-name>. The first, lb-pks-<uuid>-cluster-router, is a tier-1 for the load balancer. The second, pks-<uuid>-cluster-router, is for the K8s node network where the master and workers reside. The remaining 4x tier-1s are for the 4x default namespaces created with a new cluster, namely pks-system, kube-system, kube-public, and default. You will also notice that these new routers have an icon beside them, whereas T0-LR and t1-pks-mgmt don't; this denotes that they are protected objects created by the Superuser we created previously.
In Networking > Switching, there are also 6x new logical switches: again, 1x for the load balancer, 1x for the node network, and 4x for the default namespaces.
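A neat way to see this mapping in action: since each namespace gets its own tier-1 router and logical switch, creating a new namespace from kubectl should cause a matching pair to appear in NSX Manager shortly after (the namespace name demo is hypothetical):

root@ubuntu-jump:~# kubectl create namespace demo
namespace/demo created

Refresh Networking > Routers and Networking > Switching, and a seventh tier-1 router and logical switch for the new namespace should appear.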
In Networking > Routing > T0-LR > Services > NAT we see 5x new NAT rules auto-created (rules 2064 to 2075, with Priority 1024). The first, rule 2064, is for our node network, and the remaining four are for the namespace pod networks.
In Networking > Load Balancing, we see a new load balancer, again with the UUID of our cluster in its name.
The load balancer has 3x virtual servers, again with the cluster UUID making up part of their names. The first, with "virtual server" at the end of its name, is a layer 4 virtual server for our K8s master node; you will notice its IP, 10.0.80.11, is the same as that returned by pks cluster <cluster-name>. The remaining two are HTTP and HTTPS layer 7 virtual servers.
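As a final sketch, not covered in the walkthrough above: exposing a Kubernetes Service of type LoadBalancer should drive NSX-T to program an additional virtual server on this load balancer, with a VIP allocated from the Floating IP pool (the nginx deployment and service names here are hypothetical):

root@ubuntu-jump:~# kubectl create deployment nginx --image=nginx
root@ubuntu-jump:~# kubectl expose deployment nginx --type=LoadBalancer --port=80
root@ubuntu-jump:~# kubectl get svc nginx

Once the virtual server is programmed, the EXTERNAL-IP column should show an address from that same pool.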
That concludes this post on creating a PKS Kubernetes cluster.