vyos, DNS and Loadbalancer

Published by Oga Ajima on 2018-02-18

Setup

We will continue setting up the supporting infrastructure from where we left off. To confirm we still have a bridge set up, run brctl show; this lists the bridges configured on the system, and there should be a kube1 bridge among them. If you ran this script, then you should have, among the images and ISO files, the following: vyos-kube1.qcow2, seeddns-1.iso, seeddns-2.iso, dns-1.img, dns-2.img, seedLoadbalancer-1.iso, seedLoadbalancer-2.iso, Loadbalancer-1.img, and Loadbalancer-2.img. These are the cloud images and the cloud-init NoCloud seed ISOs. To watch the boot and installation process, you can type virsh console "vm name"; installation is relatively fast, and in less than a minute you will be presented with a login screen where, if you set a password in the user-data file, you can log in. We will be connecting to the machines using SSH.
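Before creating the VMs, it can help to confirm the prerequisites in one pass. A minimal sketch, assuming a POSIX shell and the exact filenames listed above (adjust if yours differ):

```shell
# Sketch: confirm the image and seed-ISO files from the earlier script exist.
# The filenames are assumptions taken from the list above.
check_images() {
  dir=${1:-.}
  missing=0
  for f in vyos-kube1.qcow2 seeddns-1.iso seeddns-2.iso dns-1.img dns-2.img \
           seedLoadbalancer-1.iso seedLoadbalancer-2.iso \
           Loadbalancer-1.img Loadbalancer-2.img; do
    # report each file that is not present
    [ -e "$dir/$f" ] || { echo "missing: $f"; missing=1; }
  done
  [ "$missing" -eq 0 ] && echo "all images present"
}
```

Run check_images from the directory holding the images; brctl show | grep kube1 covers the bridge check.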

vyos

To install the router, run the following:

virt-install --name vyos --ram 512 \
 --vcpus=1 --cpu host \
 --hvm --disk path=vyos-kube1.qcow2,size=2 \
 --os-type=generic --os-variant=generic \
 --graphics none --network bridge=br1 \
 --network bridge=kube1 \
 --console pty,target_type=serial \
 --cdrom ./vyos-1.1.8-amd64.iso &

For the initial configuration of the router, we need to log in through the console: type virsh console vyos and you will be presented with the login prompt. The default username/password is vyos/vyos. Type install image to begin installation; for the full installation directives, visit the wiki. Note that you will need to change the default password in order to be able to save/commit the configuration. To do that, navigate to system >> login >> user vyos:

edit system login user vyos
set authentication plaintext-password "type password here"
commit
save

We now need to configure the router to provide internet connectivity for the virtual machines, but first we need to know which interface is connected to which switch; the order in which they are attached is not always what you would expect, so confirm the precise details. To do this, type show interfaces ethernet and note each interface name and MAC address. The example below shows my already-configured router, but you will want to note down the names (eth0 & eth1) and their associated hw-id values (52:54:00:b9:61:50 & 52:54:00:b9:c4:0c).

vyos interfaces

Exit back to the KVM host and type brctl showmacs br1 and brctl showmacs kube1 to list the MACs of the VMs connected to each switch.
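To make the comparison quicker, you can search each bridge's MAC table for a given hw-id. A small sketch (the bridge names br1 and kube1 and the MACs are the ones from this setup):

```shell
# Sketch: report which bridge's forwarding table contains a given MAC.
find_bridge() {  # usage: find_bridge MAC BRIDGE...
  mac=$1; shift
  for br in "$@"; do
    # print a match whenever the bridge's MAC table lists this address
    brctl showmacs "$br" 2>/dev/null | grep -qi "$mac" && echo "$mac -> $br"
  done
}
```

For example, find_bridge 52:54:00:b9:c4:0c br1 kube1 should print the kube1 line for the cluster-facing NIC.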

switch interfaces

In our case, the internet-facing switch is br1, and we can see that eth0 is connected to br1 and eth1 to kube1. We now go back to vyos (virsh console vyos) to configure the interfaces, substituting the proper names; 192.168.2.251 is the internet-facing address, while 10.240.0.1 sits on the Kubernetes cluster switch, kube1. Substitute as necessary:

set interfaces ethernet eth0 address 192.168.2.251/24
set interfaces ethernet eth1 address 10.240.0.1/24

set system gateway-address 192.168.2.1

set nat source rule 10 source address 10.240.0.0/24
set nat source rule 10 outbound-interface eth0
set nat source rule 10 translation address 192.168.2.251

commit
save

Check connectivity by typing ping 8.8.8.8; you should receive replies if everything is okay. Internet connectivity is required, since deploying the other VMs requires it.

DNS

To install the dns VM, run the following:

for i in 1 2; do
  virt-install --name dns-${i} \
    --ram=512 --vcpus=1 --cpu host --hvm \
    --disk path=dns-${i}.img \
    --import --disk path=seeddns-${i}.iso,device=cdrom \
    --network bridge=kube1 &
done

This creates two VMs, dns-1 and dns-2. Type virsh list to show the VMs on the host; you should see vyos, dns-1, and dns-2. ssh dude@dns-1 should get us into the system so we can set up our DNS infrastructure. For a more step-by-step tutorial on configuring DNS, this DigitalOcean tutorial is a good one and is mostly what I used, so that isn't covered here. My various config files are given below:

dns-1

ssh dude@dns-1

named.conf.options

Type sudo nano /etc/bind/named.conf.options and paste the configuration below:

acl "trusted" {
        10.240.0.31;   # dns-1
        10.240.0.32;   # dns-2
        10.240.0.40;   # apiserver
        10.240.0.41;   # Lb1
        10.240.0.42;   # Lb2
        10.240.0.60;   # Controller-0
        10.240.0.61;   # Controller-1
        10.240.0.62;   # Controller-2
        10.240.0.65;   # apiserver
        10.240.0.70;   # Worker-0
        10.240.0.71;   # Worker-1
        10.240.0.72;   # Worker-2
        10.240.0.150;  # dev-kvm
};

acl "outside" {
        any;
};

options {
        directory "/var/cache/bind";

        recursion yes;                  # enables recursive queries
        allow-recursion { trusted; };   # allows recursive queries from "trusted" clients
        listen-on { 10.240.0.31; };    # dns-1 private IP address - listen on private network only
        allow-transfer { none; };       # disable zone transfers by default

        forwarders {
                8.8.8.8;
                8.8.4.4;
        };
};

named.conf.local

sudo nano /etc/bind/named.conf.local

zone "kvm.kthw.test" {
        type master;
        file "/etc/bind/zones/db.kvm.kthw.test";        # zone file path
        allow-transfer { 10.240.0.32; };               # dns-2 private IP address - secondary
};

zone "0.240.10.in-addr.arpa" {
        type master;
        file "/etc/bind/zones/db.0.240.10";            # 10.240.0.0/24 subnet
        allow-transfer { 10.240.0.32; };               # dns-2 private IP address - secondary
};
//
// Do any local configuration here
//

// Consider adding the 1918 zones here, if they are not used in your
// organization
//include "/etc/bind/zones.rfc1918";

db.kvm.kthw.test

sudo mkdir /etc/bind/zones/
sudo cp /etc/bind/db.local /etc/bind/zones/db.kvm.kthw.test
sudo nano /etc/bind/zones/db.kvm.kthw.test

;
; BIND data file for local loopback interface
;
$TTL    604800
@       IN      SOA     dns-1.kvm.kthw.test. admin.kvm.kthw.test. (
                              3         ; Serial
                         604800         ; Refresh
                          86400         ; Retry
                        2419200         ; Expire
                         604800 )       ; Negative Cache TTL

; name servers - NS records
        IN      NS      dns-1.kvm.kthw.test.
        IN      NS      dns-2.kvm.kthw.test.

; name servers - A records
dns-1.kvm.kthw.test.     IN      A       10.240.0.31
dns-2.kvm.kthw.test.     IN      A       10.240.0.32

; 10.240.0.0/24 - A records
loadbalancer-1.kvm.kthw.test.    IN      A       10.240.0.41
loadbalancer-2.kvm.kthw.test.    IN      A       10.240.0.42
controller-0.kvm.kthw.test.      IN      A       10.240.0.60
controller-1.kvm.kthw.test.      IN      A       10.240.0.61
controller-2.kvm.kthw.test.      IN      A       10.240.0.62
apiserver.kvm.kthw.test.         IN      A       10.240.0.65
worker-0.kvm.kthw.test.          IN      A       10.240.0.70
worker-1.kvm.kthw.test.          IN      A       10.240.0.71
worker-2.kvm.kthw.test.          IN      A       10.240.0.72

db.0.240.10

sudo cp /etc/bind/db.127 /etc/bind/zones/db.0.240.10
sudo nano /etc/bind/zones/db.0.240.10

;
; BIND reverse data file for local loopback interface
;
$TTL    604800
@       IN      SOA     dns-1.kvm.kthw.test. admin.kvm.kthw.test. (
                              3         ; Serial
                         604800         ; Refresh
                          86400         ; Retry
                        2419200         ; Expire
                         604800 )       ; Negative Cache TTL
;

; name servers - NS records
        IN      NS      dns-1.kvm.kthw.test.
        IN      NS      dns-2.kvm.kthw.test.
;

; PTR Records
31      IN      PTR     dns-1.kvm.kthw.test.            ; 10.240.0.31
32      IN      PTR     dns-2.kvm.kthw.test.            ; 10.240.0.32
41      IN      PTR     loadbalancer-1.kvm.kthw.test.   ; 10.240.0.41
42      IN      PTR     loadbalancer-2.kvm.kthw.test.   ; 10.240.0.42
60      IN      PTR     controller-0.kvm.kthw.test.     ; 10.240.0.60
61      IN      PTR     controller-1.kvm.kthw.test.     ; 10.240.0.61
62      IN      PTR     controller-2.kvm.kthw.test.     ; 10.240.0.62
65      IN      PTR     apiserver.kvm.kthw.test.        ; 10.240.0.65
70      IN      PTR     worker-0.kvm.kthw.test.         ; 10.240.0.70
71      IN      PTR     worker-1.kvm.kthw.test.         ; 10.240.0.71
72      IN      PTR     worker-2.kvm.kthw.test.         ; 10.240.0.72

dns-2

ssh dude@dns-2

named.conf.options

sudo nano /etc/bind/named.conf.options

acl "trusted" {
        10.240.0.31;   # dns-1
        10.240.0.32;   # dns-2
        10.240.0.40;   # apiserver
        10.240.0.41;   # Lb1
        10.240.0.42;   # Lb2
        10.240.0.60;   # Controller-0
        10.240.0.61;   # Controller-1
        10.240.0.62;   # Controller-2
        10.240.0.65;   # apiserver
        10.240.0.70;   # Worker-0
        10.240.0.71;   # Worker-1
        10.240.0.72;   # Worker-2
        10.240.0.150;  # dev-kvm
};

acl "outside" {
        any;
};

options {
        directory "/var/cache/bind";

        recursion yes;                  # enables recursive queries
        allow-recursion { trusted; };   # allows recursive queries from "trusted" clients
        listen-on { 10.240.0.32; };    # dns-2 private IP address - listen on private network only
        allow-transfer { none; };       # disable zone transfers by default

        forwarders {
                8.8.8.8;
                8.8.4.4;
        };
};

named.conf.local

sudo nano /etc/bind/named.conf.local

zone "kvm.kthw.test" {
        type slave;
        file "slaves/db.kvm.kthw.test";
        masters { 10.240.0.31; };  # ns1 private IP
};

zone "0.240.10.in-addr.arpa" {
        type slave;
        file "slaves/db.0.240.10";  
        masters { 10.240.0.31; }; # ns1 private IP
};
//
// Do any local configuration here
//

// Consider adding the 1918 zones here, if they are not used in your
// organization
//include "/etc/bind/zones.rfc1918";

Check Configuration

sudo named-checkconf
sudo named-checkzone kvm.kthw.test /etc/bind/zones/db.kvm.kthw.test
sudo named-checkzone 0.240.10.in-addr.arpa /etc/bind/zones/db.0.240.10

Restart BIND

sudo service bind9 restart
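After the restart, it is worth confirming the forward and reverse zones agree with each other. A small sketch that cross-checks every A record against its PTR record (the file paths are the ones used above):

```shell
# Sketch: verify every A record in the forward zone has a matching PTR record
# in the reverse zone, and vice versa.
check_zones() {  # usage: check_zones FORWARD_ZONE_FILE REVERSE_ZONE_FILE
  # forward zone: emit "last-octet hostname" for each A record
  fwd=$(awk '/IN[ \t]+A[ \t]/ {host=$1; sub(/\.$/, "", host);
             n=split($NF, o, "."); print o[n], host}' "$1" | sort)
  # reverse zone: emit "last-octet hostname" for each PTR record
  rev=$(awk '/IN[ \t]+PTR[ \t]/ {host=$4; sub(/\.$/, "", host);
             print $1, host}' "$2" | sort)
  if [ "$fwd" = "$rev" ]; then
    echo "zones consistent"
  else
    echo "mismatch between forward and reverse zones"
  fi
}
```

On dns-1: check_zones /etc/bind/zones/db.kvm.kthw.test /etc/bind/zones/db.0.240.10. You can also query the running server directly, e.g. dig @10.240.0.31 worker-0.kvm.kthw.test and dig @10.240.0.31 -x 10.240.0.70.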

Loadbalancer

To install the loadbalancer VMs, run:

for i in 1 2; do
  virt-install --name Loadbalancer-${i} \
    --ram=1024 --vcpus=1 --cpu host --hvm \
    --disk path=Loadbalancer-${i}.img \
    --import --disk path=seedLoadbalancer-${i}.iso,device=cdrom \
    --network bridge=kube1 &
done

Wait about a minute or so for the VMs to finish installation and boot up, then SSH into them. virsh list on the KVM host should now show vyos, dns-1, dns-2, Loadbalancer-1, and Loadbalancer-2. The various config files are given below:

Loadbalancer-1

ssh dude@loadbalancer-1

Set IPv4 Mode

sudo systemctl edit --full bind9

Add "-4" to the end of the ExecStart directive
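The resulting unit file differs by distribution, so treat the line below as an assumed example rather than the exact text you will see; on Ubuntu the bind9 ExecStart typically ends up like this after the edit:

```ini
# Assumed example of the edited unit (via systemctl edit --full bind9).
# The -4 flag tells named to use IPv4 only; the rest of the line is
# whatever your distribution ships.
[Service]
ExecStart=/usr/sbin/named -f -u bind -4
```

Restart the service afterwards (sudo systemctl restart bind9) for the flag to take effect.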

Edit Keepalived configuration (/etc/keepalived/keepalived.conf)

sudo nano /etc/keepalived/keepalived.conf

Paste the configuration below:

! Configuration File for keepalived

global_defs {
   notification_email {
       admin@admin.com
   }
   notification_email_from loadbalancer1@admin.com
! UNIQUE:
   router_id LVS_PRI
}

! ***********************************************************************
! *************************   WEB SERVICES VIP  *************************
! ***********************************************************************
vrrp_instance VirtIP_10 {
    state MASTER
    interface ens3
    virtual_router_id 10
! UNIQUE:
    priority 150
    advert_int 3
    smtp_alert
    authentication {
        auth_type PASS
        auth_pass kubernetes
    }
    virtual_ipaddress {
        10.240.0.65
    }

    lvs_sync_daemon_interface ens3
}

! ************************   WEB SERVERS  **************************

virtual_server 10.240.0.65 6443 {
    delay_loop 10
    lvs_sched rr
    lvs_method DR
    persistence_timeout 5
    protocol TCP

    real_server 10.240.0.60 6443 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
        }
    }

    real_server 10.240.0.61 6443 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
        }
    }

    real_server 10.240.0.62 6443 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
        }
    }
}

virtual_server 10.240.0.65 8080 {
    delay_loop 10
    lvs_sched rr
    lvs_method DR
    persistence_timeout 5
    protocol TCP

    real_server 10.240.0.60 8080 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
        }
    }

    real_server 10.240.0.61 8080 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
        }
    }

    real_server 10.240.0.62 8080 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
        }
    }
}

sudo systemctl preset keepalived.service
sudo systemctl start keepalived.service

Enable ip_forward

sudo nano /etc/sysctl.conf

net.ipv4.ip_forward = 1

Then apply it with sudo sysctl -p.

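To confirm keepalived is working, check whether this node currently holds the VIP. A minimal sketch (VIP 10.240.0.65 and interface ens3 are the values configured above):

```shell
# Sketch: check whether this node currently holds the keepalived VIP.
has_vip() {  # usage: has_vip VIP [IFACE]
  vip=$1; iface=${2:-ens3}
  # the VIP shows up as an extra inet address on the interface
  ip -4 addr show "$iface" 2>/dev/null | grep -q "inet $vip/"
}

if has_vip 10.240.0.65 ens3; then
  echo "this node holds the VIP (MASTER)"
else
  echo "VIP not on this node (BACKUP, or keepalived is not running)"
fi
```

sudo ipvsadm -L -n will additionally show the LVS virtual services and their real servers once keepalived is up.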
Loadbalancer-2

ssh dude@loadbalancer-2

Edit Keepalived configuration (/etc/keepalived/keepalived.conf)

sudo nano /etc/keepalived/keepalived.conf

Paste the configuration below:

! Configuration File for keepalived

global_defs {
   notification_email {
        admin@admin.com
   }
   notification_email_from loadbalancer2@admin.com
! UNIQUE:
   router_id LVS_SEC
}

! ***********************************************************************
! *************************   WEB SERVICES VIP  *************************
! ***********************************************************************

vrrp_instance VirtIP_10 {
    state BACKUP
    interface ens3
    virtual_router_id 10
! UNIQUE:
    priority 50
    advert_int 3
    smtp_alert
    authentication {
        auth_type PASS
        auth_pass kubernetes
    }
    virtual_ipaddress {
        10.240.0.65
    }

    lvs_sync_daemon_interface ens3
}

! ************************   WEB SERVERS  **************************

virtual_server 10.240.0.65 6443 {
    delay_loop 10
    lvs_sched rr
    lvs_method DR
    persistence_timeout 5
    protocol TCP

    real_server 10.240.0.60 6443 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
        }
    }

    real_server 10.240.0.61 6443 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
        }
    }

    real_server 10.240.0.62 6443 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
        }
    }

}

virtual_server 10.240.0.65 8080 {
    delay_loop 10
    lvs_sched rr
    lvs_method DR
    persistence_timeout 5
    protocol TCP

    real_server 10.240.0.60 8080 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
        }
    }

    real_server 10.240.0.61 8080 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
        }
    }

    real_server 10.240.0.62 8080 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
        }
    }
}

sudo systemctl preset keepalived.service
sudo systemctl start keepalived.service

Enable ip_forward

sudo nano /etc/sysctl.conf

net.ipv4.ip_forward = 1

Then apply it with sudo sysctl -p.
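With both load balancers up, a quick failover test is to stop keepalived on Loadbalancer-1 and watch the VIP move to Loadbalancer-2. A sketch of the watcher side (VIP and interface are the values configured above; the timeout is an arbitrary choice):

```shell
# Sketch: poll until an interface picks up the VIP, or give up after TIMEOUT
# seconds.
wait_for_vip() {  # usage: wait_for_vip VIP IFACE [TIMEOUT]
  vip=$1; iface=$2; t=${3:-15}
  while [ "$t" -gt 0 ]; do
    if ip -4 addr show "$iface" 2>/dev/null | grep -q "inet $vip/"; then
      echo "VIP $vip is on $iface"
      return 0
    fi
    t=$((t - 1))
    sleep 1
  done
  echo "timed out waiting for $vip on $iface"
  return 1
}
```

Run sudo systemctl stop keepalived.service on Loadbalancer-1, then wait_for_vip 10.240.0.65 ens3 on Loadbalancer-2; with advert_int 3, the takeover should happen within a few advertisement intervals. Remember to start keepalived again afterwards.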

At this point we should have working DNS servers and load balancers for our controller machines, along with the supporting infrastructure needed to recreate the KTHW lab. Subsequent labs will more closely mirror the original labs, with some minor substitutions to account for the difference in the underlying infrastructure (GCP vs KVM).