Lab Setup

Published by Oga Ajima on 2018-02-15

KTHW - Initial Setup

KTHW uses GCP to set up the infrastructure needed for the lab. As noted in the previous article, we will be utilizing KVM, so we will need to do some initial setup. This article gives a brief overview of the various tools and applications we will use to set up the environment on KVM. We already have Ubuntu Server installed and won't go into the steps taken to achieve that, as there are multiple good guides available. Assuming there is a functional Ubuntu Server, the following tools will be needed:

  • bridge-utils
  • genisoimage
  • qemu-utils
  • virt-install
  • virsh

Install tools

bridge-utils is a collection of bridge administration utilities that can be used to create and manage bridges (switches) on Linux. We will use it to create a switch for the Kubernetes cluster.

sudo apt install bridge-utils
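As a preview of how we'll use it (the bridge name kvmbr0 is my choice here and purely illustrative), creating and inspecting a bridge looks like this:

```shell
# create a bridge to act as the switch for the cluster (name is illustrative)
sudo brctl addbr kvmbr0
# bring the bridge interface up
sudo ip link set kvmbr0 up
# list bridges and the interfaces attached to them
brctl show
```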

genisoimage is a CLI tool that will be used to create ISO images; more on this later.

sudo apt install genisoimage
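To preview its use: given the three cloud-init datasource files described later in this post, a NoCloud seed ISO can be built like this (the cidata volume label is what cloud-init looks for; the output file name is illustrative):

```shell
# pack the cloud-init datasource files into a NoCloud seed ISO
genisoimage -output seed.iso -volid cidata -joliet -rock \
  user-data meta-data network-config
```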

qemu-utils is a QEMU administration package that includes qemu-img, a disk image creation and conversion tool.

sudo apt install qemu-utils
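For example, qemu-img can create a thin copy-on-write disk backed by the pristine cloud image, so each VM gets its own disk without duplicating the base image (the backing file and output names below are illustrative):

```shell
# create a 40G qcow2 disk backed by the downloaded cloud image;
# only the VM's changes are written to vm1.qcow2
qemu-img create -f qcow2 \
  -b ubuntu-16.04-server-cloudimg-amd64-disk1.img \
  vm1.qcow2 40G
```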

virt-install is a CLI tool that creates VMs using libvirt; we will use it instead of the gcloud command to provision VMs on our KVM host. Note that the Ubuntu package is named virtinst.

sudo apt install virtinst
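A minimal sketch of the kind of virt-install invocation we'll be running later (the VM name, sizes, disk paths and bridge name are all illustrative):

```shell
# import an existing disk image and attach the NoCloud seed ISO;
# the VM boots straight from the prepared disk, no installer needed
sudo virt-install \
  --name test-vm \
  --memory 1024 --vcpus 1 \
  --disk path=vm1.qcow2,format=qcow2 \
  --disk path=seed.iso,device=cdrom \
  --network bridge=kvmbr0 \
  --os-variant ubuntu16.04 \
  --import --noautoconsole
```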

virsh is the main interface for managing libvirt guest domains. We will mostly use it to view the VM domains on the host and to console into the VMs (while we will mostly be using SSH to connect to the VMs, having virsh was initially very useful in getting the cloud-init configuration right).

sudo apt install libvirt-clients
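The two virsh subcommands we'll lean on most (the domain name test-vm is illustrative):

```shell
# list all guest domains, running or shut off
virsh list --all
# attach to a guest's serial console (detach with Ctrl+])
virsh console test-vm
```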

Once we have these tools installed, we can proceed to VM provisioning, but before that, a brief detour to describe Ubuntu Cloud Images and cloud-init.

Ubuntu Cloud Image

If you have just installed your Ubuntu Server, you will notice that it took quite a while to get it up and running; starting from scratch, it usually takes anywhere from 20 minutes to over an hour to get the server installed, and most times you have to be present to provide some required information. Image-based OS installation sidesteps this wait by capturing the state of an already installed OS, removing machine-specific attributes, and saving the resulting artifact as an image that can then be applied to multiple machines without going through the whole installation process all over again. This also has the advantage of immutability: you can decide on a specific version of the OS with all the required applications preinstalled and be certain that every subsequent deployment matches the desired environment (similar to the promise of Kubernetes and the DevOps concept of pets versus cattle).

Ubuntu Cloud Images are pre-installed images provided by Canonical; they represent Canonical's idea of an Ubuntu Server installation customized for cloud environments. The image has most of the basic tools we will need in this lab preinstalled and is similar to the Ubuntu Certified Images offered on cloud computing platforms such as AWS, GCP and Azure. We will be using the Ubuntu Server 16.04 LTS cloud image.
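For reference, the 16.04 image can be fetched from Canonical's cloud image server and inspected with qemu-img (the URL reflects the release path as I understand it; check cloud-images.ubuntu.com for the current layout):

```shell
# download the 16.04 LTS cloud image and inspect its format and size
wget https://cloud-images.ubuntu.com/releases/16.04/release/ubuntu-16.04-server-cloudimg-amd64-disk1.img
qemu-img info ubuntu-16.04-server-cloudimg-amd64-disk1.img
```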

Cloud-Init

As noted above, images provide repeatable and hence identical deployments of an operating system. While this is a useful trait, there is usually a need to customize the installation, such as specifying system parameters like the hostname, IP address and other data that may be specific to a given OS deployment depending on the intended use of the machine. Cloud-init provides such a facility, covering a wide variety of details: the hostname and IP address as noted above, as well as application installation, adding users, configuring login and password details, setting SSH keys and many other options.

To provide configuration data to a booting instance, cloud-init uses datasources; the data can be user data, instance metadata or network configuration, and it is provided in files named user-data, meta-data and network-config respectively. Depending on the cloud platform, there are various ways of supplying this data, but in this lab we will be using the NoCloud option, which loads cloud-init data into booting VMs without the need to provide a network; the filesystem carrying the data must be either vfat or iso9660. To do this, we will need to create three files: user-data, meta-data and network-config. The contents of my files are given below:

  • user-data: An in-depth look at the various fields can be found in the cloud-init docs, but the sections to note and configure are passwd and ssh-authorized-keys. passwd allows console login and requires the hashed password, not the plain text; per the docs, the hash can be generated using mkpasswd --method=SHA-512 --rounds=4096 (note that password login is not recommended and is mostly for convenience; it helps in troubleshooting when things are not working as expected but should ideally be disabled). ssh-authorized-keys adds SSH keys to the authorized keys file, and you can add as many keys as there are systems you plan to log in from; I also use a Windows system (with MobaXterm) and have added its key in addition to the KVM host's.
#cloud-config
users:
  - name: dude
    gecos: dude
    sudo: ['ALL=(ALL) NOPASSWD:ALL']
    shell: /bin/bash
    groups: sudo
    lock_passwd: false
    passwd: XXX
    ssh-authorized-keys:
      - ssh-rsa 
      - ssh-rsa 
manage_etc_hosts: localhost
package_upgrade: true
power_state:
  delay: "+2"
  mode: reboot
  message: Bye Bye
  timeout: 5
  condition: True
  • meta-data
instance-id: iid-instance00
local-hostname: initial
  • network-config
---
version: 1
config:
- type: physical
  name: ens3
  subnets:
  - type: static
    address: 10.240.0.60
    netmask: 255.255.255.0
    dns_search: kvm.kthw.test
    routes:
    - network: 0.0.0.0
      netmask: 0.0.0.0
      gateway: 10.240.0.1
- type: nameserver
  address: [10.240.0.31, 10.240.0.32]
  search: [host]

At this point we are almost ready to actually dive into setting up the lab infrastructure; to do that, we will need to create the datasource files for the various VMs required for the lab. We will need a DNS server, a loadbalancer, and the Kubernetes controller and worker hosts. To keep with the theme of redundancy and high availability as followed in the lab, we will have two hosts each for the DNS servers and loadbalancers. In total, we will be setting up 10 VM hosts plus an additional VM to serve as a router. The architecture of the lab is given below

KVM KTHW Lab

We can now run the commands to generate the nocloud isos. For this lab, I have created a folder structure in my home directory named kthw like so:

kthw
├── bak
│   ├── user-data.bak
│   ├── meta-data.bak
│   └── network-config.bak
└── cert

In the kthw folder, I have downloaded the Ubuntu Server 16.04 and VyOS images; the bak folder contains the NoCloud datasource files described earlier. I then use a simple bash script to generate the necessary datasource files for the various VMs. The script is just a bunch of for loops that copy the files and modify them as necessary; it expects user-data.bak, meta-data.bak and network-config.bak to be in the bak folder. Download and inspect the file, then run it by typing ./cloudinit.sh. At this point we should have a bunch of ISO files within the kthw folder and we can now actually start the VM provisioning. The next couple of posts will cover deploying the VyOS router, DNS servers, keepalived loadbalancers, and the controller and worker nodes.
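The core of what cloudinit.sh does can be sketched as a small function (a hypothetical reconstruction, not the script itself): copy the templates from bak/ and patch in the per-host hostname and IP. The placeholder values ("initial" and 10.240.0.60) match the sample datasource files shown above; the example host names and IPs are illustrative.

```shell
#!/bin/bash
# Sketch of the per-VM datasource generation done by cloudinit.sh.
gen_ds() {
  local host=$1 ip=$2
  # copy the templates, one set per VM
  cp bak/user-data.bak      "${host}-user-data"
  cp bak/meta-data.bak      "${host}-meta-data"
  cp bak/network-config.bak "${host}-network-config"
  # patch the placeholder hostname and IP for this VM
  sed -i "s/initial/${host}/"      "${host}-meta-data"
  sed -i "s/10\.240\.0\.60/${ip}/" "${host}-network-config"
}

# one call per VM in the lab, e.g.:
# gen_ds dns1 10.240.0.31
# gen_ds dns2 10.240.0.32
```

Each resulting trio of files is then packed into its own seed ISO with genisoimage, producing one ISO per VM.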