PKS NSX-T Home Lab – Part 10: Install Ops Man and BOSH

Intro

In the previous post in this series we prepared NSX-T for our PKS install. This included creating a T1 router and logical switch for the PKS Management network, DNAT and SNAT rules, static routes, IP Blocks, IP Pools, an NSX Manager API certificate, and an NSX Manager Superuser Principal Identity. In this post we are going to install Pivotal Cloud Foundry Operations Manager, aka Ops Man, and then BOSH. Let’s get started.

Complete list of blog posts in this series…

Downloads

Before we can install Ops Man (and BOSH) we need to download them. Go to network.pivotal.io, aka PivNet, and register for an account (it’s free!). Once registered, log in and navigate to Pivotal Cloud Foundry Operations Manager. From the Release dropdown, select the latest 2.3.x release, then download the vSphere version. It will have a name similar to Pivotal Cloud Foundry Ops Manager for vSphere – 2.3-build.194 and be approx 4GB in size. We don’t download BOSH from PivNet as it’s part of the Ops Man OVA.

While you’re on PivNet, also grab the latest releases of PKS (approx 3.7GB), Harbor (approx 736MB), and the Ubuntu Xenial stemcell (approx 542MB) as shown above. We will use these later.
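If you prefer a terminal to clicking through the site, the pivnet CLI can fetch the same bits. This is a sketch only: the product slug and release version below are assumptions, so verify them against the PivNet pages first, and export your API token as PIVNET_TOKEN.

```shell
# Download the Ops Man OVA via the pivnet CLI instead of the web UI.
# Slug and version are assumptions - confirm them on PivNet first.
OPSMAN_SLUG="ops-manager"
OPSMAN_VERSION="2.3-build.194"
if command -v pivnet >/dev/null 2>&1; then
  pivnet login --api-token="$PIVNET_TOKEN"
  # -g globs the file list so we only pull the vSphere OVA
  pivnet download-product-files -p "$OPSMAN_SLUG" -r "$OPSMAN_VERSION" -g "*vsphere*"
else
  echo "pivnet CLI not installed - download via the PivNet web UI instead"
fi
```

The same pattern works for the PKS, Harbor, and stemcell downloads by swapping the slug and version.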

Availability Zones

One last item before we can start installing is to discuss availability zones, aka AZs. An AZ is defined as an operator-assigned, functionally independent segment of network infrastructure. To elaborate on that, an availability zone is a collection of infrastructure components that are isolated to stop the propagation of failure or outage across zone boundaries. An outage that is caused by external factors such as power, cooling, and physical integrity affects only one zone. Each availability zone runs on its own physically distinct, independent infrastructure, and is engineered to be highly reliable. Each zone should have independent power supply, cooling system, network, and security. Additionally, these zones should be physically separate so that even uncommon disasters affect only a single availability zone. So while that is great and all, this is a home lab, not a HA production env, so we are going to break the rules!

For our home lab env we are going to create four resource pools (RPs), all in our compute cluster: one for the PKS Management plane (Mgmt-AZ) and three for the compute plane (AZ1, AZ2, AZ3) to simulate a production-like env, but all in a single cluster (not production-like!). To create an RP, in vSphere Client > Hosts and Clusters view > right-click the compute cluster > New Resource Pool > enter a name > OK. Repeat for the other RPs.
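The same four pools can be created from the CLI with govc (the vSphere CLI from the govmomi project), which saves some clicking. A sketch, assuming hypothetical datacenter/cluster names; adjust the paths and GOVC_* variables to your lab.

```shell
# Create the four resource pools with govc instead of the vSphere Client.
# The datacenter (Lab-DC) and cluster (Compute-Cluster) names are assumptions.
export GOVC_URL="https://vcenter.lab.local"
export GOVC_USERNAME="administrator@vsphere.local"
CLUSTER_PATH="/Lab-DC/host/Compute-Cluster/Resources"
if command -v govc >/dev/null 2>&1; then
  for rp in Mgmt-AZ AZ1 AZ2 AZ3; do
    govc pool.create "$CLUSTER_PATH/$rp"
  done
else
  echo "govc not installed - create the pools in the vSphere Client instead"
fi
```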

Deploy Ops Man

Finally we get to start the PKS install process, it only took 10 blog posts!! First up is Ops Man. Ops Man is a set of APIs and a web-based graphical interface used to configure and deploy platform components. Follow the steps here to deploy the Ops Man OVA we downloaded above to the Mgmt-AZ resource pool we just created. When prompted to select the network, select the T1 logical switch we created earlier for the PKS Management plane, ls-pks-mgmt.

At the Customize template step, enter an IP address from our PKS Management network. Remember that this will be NAT’ed, so enter the private/non-routable address, eg 172.14.0.2, not the routable destination address we configured in our DNAT rule, eg 10.0.80.2. The default gateway is the router port we created on the T1, eg 172.14.0.1.
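The OVA deploy itself can also be scripted with govc: dump the OVA's property spec, fill in the network details above, then import. A rough sketch; the OVA filename and inventory paths are assumptions.

```shell
# Deploy the Ops Man OVA with govc instead of the Deploy OVF Template wizard.
# Filename and pool path are assumptions - match them to your download and lab.
OVA="pcf-vsphere-2.3-build.194.ova"
if command -v govc >/dev/null 2>&1; then
  # Generate the deployment spec, then edit it to set IP 172.14.0.2,
  # netmask, gateway 172.14.0.1, DNS, and the ls-pks-mgmt network mapping.
  govc import.spec "$OVA" > opsman-spec.json
  govc import.ova -options=opsman-spec.json -name=ops-manager \
    -pool=/Lab-DC/host/Compute-Cluster/Resources/Mgmt-AZ "$OVA"
  govc vm.power -on ops-manager
else
  echo "govc not installed - use the vSphere Client deploy wizard instead"
fi
```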

Once deployed, power on the VM.

Configure Ops Man

Before we can configure Ops Man we need to be able to access it. So that we can access it using an FQDN, we need to create an entry in our DNS for Ops Man. Create an A record using the routable destination IP address from the DNAT rule we created, 10.0.80.2.
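It's worth confirming the record resolves before moving on; a surprising number of Ops Man install problems are just DNS. A quick check, with the FQDN below being an assumption for this lab:

```shell
# Verify the Ops Man A record resolves to the DNAT destination address.
OPSMAN_FQDN="opsman.lab.local"   # assumption - use your own FQDN
EXPECTED_IP="10.0.80.2"
if command -v dig >/dev/null 2>&1; then
  RESOLVED=$(dig +short "$OPSMAN_FQDN" | head -n1)
  if [ "$RESOLVED" = "$EXPECTED_IP" ]; then
    echo "DNS looks good: $OPSMAN_FQDN -> $RESOLVED"
  else
    echo "Check DNS: expected $EXPECTED_IP, got '${RESOLVED:-nothing}'"
  fi
else
  echo "dig not installed - check with nslookup instead"
fi
```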

Navigate to the Ops Man FQDN in a web browser. On first access we need to configure the authentication system.

Select Internal Authentication and then enter a username, password, and decryption passphrase. The passphrase encrypts the Ops Manager datastore and is used during the restore process of Ops Manager from a backup. Check the box to agree to sell your kids and kidneys and then click Setup Authentication.

After a few moments the authentication system will have initialized and you will be presented with the login screen. Log in with the credentials you just provided when configuring internal authentication, and you will be presented with the Installation Dashboard.

In the Installation Dashboard we see the BOSH Director for vSphere tile with an orange bar along the bottom. This is actually a progress bar; after the required parameters in the tile are configured it will turn green. So let’s start filling out those parameters!

Click the tile. It will open the tile’s settings tab at the vCenter Config parameters page. Follow the steps here to configure the tile. I will provide some complementary commentary along the way.

vCenter Config

All the parameters here are self-explanatory. For Networking, select NSX-T, and when prompted for the NSX CA Cert paste the contents of the nsx.crt certificate we generated in the previous blog post. On clicking Save, the parameters are validated and you should get a Settings updated banner message across the top of the window.
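Before pasting, it's worth sanity-checking that you grabbed the right PEM file. A quick look with openssl, assuming the file is named nsx.crt as in the previous post:

```shell
# Print the certificate's subject and validity window before pasting it into
# the tile - catches the common mistake of copying the wrong PEM file.
CERT="nsx.crt"
if [ -f "$CERT" ]; then
  # The subject CN/SAN should match your NSX Manager FQDN or IP
  openssl x509 -in "$CERT" -noout -subject -dates
else
  echo "$CERT not found - copy it from wherever you generated it"
fi
```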

If a validation check fails, an alert similar to the below will be displayed:

Director Config

Only four things to do here: 1) enter your NTP server, 2) check Enable VM Resurrector Plugin, 3) check Enable Post Deploy Scripts, 4) click Save. Simples!

The Resurrector plugin is a BOSH Health Monitor plugin. When the Health Monitor detects that it can’t communicate with the BOSH Agent on a VM it deployed, the Resurrector recreates that VM.

Create Availability Zones

Click the Add button on the far right of the screen four times, as we are going to define four AZs, one for each resource pool we created earlier in this post: Mgmt-AZ, AZ1, AZ2, and AZ3. For example…

Name: Mgmt-AZ
IaaS Configuration: default
Cluster: Compute-cluster
Resource Pool: Mgmt-AZ

Create Networks

Click the Add Network button on the far right of the screen, just once this time. Here we add the parameters for our PKS Management network.

Name: Any name of your choosing, but something that easily identifies the network, eg PKS-MGMT
vSphere Network Name: The name of the PKS Management logical switch we created in NSX Manager, ls-pks-mgmt
CIDR: The CIDR of the PKS Management network, in our case 172.14.0.0/24
Reserved Ranges: The IP addresses we don’t want BOSH to use, ie our gateway and Ops Manager, 172.14.0.1-172.14.0.2
DNS: Our DNS!
Gateway: The PKS Management network T1 router port, 172.14.0.1
Availability Zones: Mgmt-AZ, AZ1, AZ2, and AZ3
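BOSH will reject the network config if the gateway or reserved addresses fall outside the CIDR, so a quick sanity check saves a save-and-retry loop. A minimal sketch that only works for a /24 like ours:

```shell
# Sanity-check that the gateway and reserved range sit inside the CIDR.
# Simple string-prefix check - only valid for a /24 network like 172.14.0.0/24.
CIDR="172.14.0.0/24"
GATEWAY="172.14.0.1"
RESERVED="172.14.0.1-172.14.0.2"
PREFIX="${CIDR%.*}"   # strips ".0/24", leaving the /24 prefix "172.14.0"
for ip in "$GATEWAY" "${RESERVED%-*}" "${RESERVED#*-}"; do
  case "$ip" in
    "$PREFIX".*) echo "$ip is inside $CIDR" ;;
    *)           echo "WARNING: $ip is outside $CIDR" ;;
  esac
done
```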

Assign AZs and Networks

Singleton Availability Zone: Mgmt-AZ
Network: PKS-MGMT

Resource Config

The BOSH Director VM by default consumes 2 vCPUs, 8GB memory, 64GB disk and also has a persistent disk of 50GB. Each of the four Compilation VMs consume 4 vCPUs, 4GB memory, 16GB disk each.

If you are resource constrained in your env, you can scale these parameters back, such as the BOSH Director persistent disk from 50GB to 20GB, and the VM Type from large.disk to large. Note, BOSH Director needs a minimum of 6GB memory, so you will have to choose an option with at least 8GB. You can also scale back the Compilation VMs from large.cpu to small.

Apply Changes

With all the parameters filled out, return to the Installation Dashboard view by clicking Installation Dashboard at the top of the window. The BOSH Director tile will now have a green bar indicating all the required parameters have been entered. Next we click REVIEW PENDING CHANGES

Followed by APPLY CHANGES
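If you'd rather drive this from a terminal, the om CLI can trigger the same apply, which becomes handy once you start automating the lab. A sketch, with the FQDN and credentials as assumptions; -k skips SSL validation since our Ops Man cert is self-signed.

```shell
# Trigger Apply Changes via the om CLI instead of the Installation Dashboard.
# Target FQDN and admin username are assumptions; export OM_PASSWORD first.
OM_TARGET="https://opsman.lab.local"
if command -v om >/dev/null 2>&1; then
  om -t "$OM_TARGET" -u admin -p "$OM_PASSWORD" -k apply-changes
else
  echo "om CLI not installed - click APPLY CHANGES in the dashboard instead"
fi
```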

The install of BOSH Director can then be monitored

After approx 20 minutes the install will be complete

In vSphere Client we can now see the BOSH Director VM. As BOSH-deployed VMs don’t have “pet” names, more so “cattle” names, they can be identified by looking at their Custom Attributes. The stemcell VM can also be seen; its name is prefixed with “sc”.

And that completes the install of Ops Man and BOSH and this post. Next post we will install the PKS tile.
