OpenShift With Libvirt On Your Laptop


This page documents how I got OpenShift running as a two-node cluster on my laptop.

Why Did You Produce this Guide?

This guide is intended to show how to get OpenShift with a 2+ node cluster running on a single hypervisor running libvirt. Most of the guides I found were only partially complete; either differences between OCP versions or different use cases necessitated a different way of doing things.


What Were Your Goals?

I had the following goals:

  • Use libvirt on the hypervisor
  • Work within the constraints of a captive portal (i.e. a hotel or other captive portal that does not allow multiple connections from bridged VMs)
  • Use only DNSMasq
  • Have a minimal OCP 4 install
  • Have a self-contained environment


What Technologies Were Used

  • Libvirt
  • HAProxy
  • DNSMasq
  • TFTP
  • Apache
  • Syslinux
  • Red Hat CoreOS 4.3.1
  • CentOS 8 Stream (for the gateway)


Setting Up the Host (Hypervisor)

To set up the host, create a bridge network for all the VMs to communicate with each other on:

 nmcli con show
 nmcli con add ifname br0 type bridge con-name br0
 nmcli con add type bridge-slave ifname eno1 master br0
 nmcli con modify br0 ipv4.method manual ipv4.addresses 10.120.120.1/24
 nmcli con up br0

This should be all that is required for a bridged network for libvirt guests to use.
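
If you prefer to attach guests to a named libvirt network rather than pointing them at the bridge device directly, a minimal network definition is sketched below (the network name and the temporary file path are my own choices):

 # Define a libvirt network that hands guests straight to the br0 bridge
 cat << EOF > /tmp/br0-net.xml
 <network>
   <name>br0</name>
   <forward mode='bridge'/>
   <bridge name='br0'/>
 </network>
 EOF
 virsh net-define /tmp/br0-net.xml
 virsh net-start br0
 virsh net-autostart br0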


Setting up the Gateway VM

This is a VM that will be independent of the OpenShift install. It will host services that OpenShift requires, but is unaffected by the installation (or lack thereof) of OCP4.

This VM should have two network cards: one on the bridge and one on the libvirt NAT interface. In my case I set up the following:

 Default NAT for libvirt: 192.168.122.x/24
 Bridge Network I Setup: 10.120.120.x/24
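
For reference, the gateway VM can be created with virt-install with both NICs attached at creation time. This is only a sketch; the name, resources, disk size, and ISO path are my own assumptions:

 # Create the gateway VM with one NIC on the libvirt NAT and one on the bridge
 virt-install \
   --name gateway \
   --memory 2048 --vcpus 2 \
   --disk size=20 \
   --os-variant centos8 \
   --network network=default \
   --network bridge=br0 \
   --cdrom /var/lib/libvirt/images/CentOS-Stream-8-x86_64-dvd1.iso   # example ISO path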

Set the Sysctl for Port-Forwarding

On the guest that is designated as the gateway, ensure that the sysctl for IP forwarding is enabled:

 # Configure the kernel to forward IP packets.
 # Make it persistent by adding the following line to /etc/sysctl.conf:
 #   net.ipv4.ip_forward = 1
 
 # Apply it immediately without a reboot:
 sysctl -w net.ipv4.ip_forward=1


Set the firewall zones with firewalld

Next, assign the interfaces to two zones with firewalld. The external zone should be bound to the interface on the libvirt NAT, which can access the internet. Masquerading must also be enabled on this interface/zone:

 firewall-cmd --zone=external --add-interface=enp1s0 --permanent
 firewall-cmd --zone=internal --add-interface=enp2s0 --permanent
 firewall-cmd --zone=external --add-masquerade --permanent
 
 # allow all traffic from the host-only network
 firewall-cmd --permanent --zone=internal --set-target=ACCEPT
 firewall-cmd --complete-reload


I decided to allow all traffic on the internal zone instead of opening up the individual ports. Do this at your own risk.
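
If you would rather not set the internal zone to ACCEPT, the services used in this guide suggest roughly the following rules instead (a sketch based on the ports configured later: DNS/DHCP/TFTP from DNSMasq, Apache on 8080, and the HAProxy frontends on 80, 443, 6443, and 22623):

 # Alternative: open only the ports this guide needs on the internal zone
 firewall-cmd --permanent --zone=internal --add-service=dns
 firewall-cmd --permanent --zone=internal --add-service=dhcp
 firewall-cmd --permanent --zone=internal --add-service=tftp
 firewall-cmd --permanent --zone=internal --add-port=8080/tcp
 firewall-cmd --permanent --zone=internal --add-port=80/tcp
 firewall-cmd --permanent --zone=internal --add-port=443/tcp
 firewall-cmd --permanent --zone=internal --add-port=6443/tcp
 firewall-cmd --permanent --zone=internal --add-port=22623/tcp
 firewall-cmd --reload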


Install and configure HTTPD

Although this section is about httpd, we will install all the packages we need for this project now to save some commands:

 # install httpd, tftp, dnsmasq etc
 dnf install httpd tftp-server dnsmasq haproxy syslinux -y


Because HAProxy will bind to port 80, we need to move Apache to a different port; 8080 is a reasonable choice:

 sed -i s/'Listen 80'/'Listen 8080'/g /etc/httpd/conf/httpd.conf
 
 # Make sure SELinux doesn't get in the way
 semanage port -m -t http_port_t -p tcp 8080
 
 # Add the apache port to the internal firewall... this is not necessary if you have set the internal to accept everything
 # firewall-cmd --add-port=8080/tcp --permanent --zone=internal


Next create the directory for the CoreOS Images and the Ignition files.

 # Create the directories for apache
 mkdir /var/www/html/rhcos
 mkdir /var/www/html/ignition


CoreOS files can be downloaded from https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/latest/latest/
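
As an example, the images can be pulled straight into the Apache directory with curl. The exact file names change from release to release, so adjust them to whatever the mirror currently lists; the 4.3.0 names below match the symlinks created in the next step:

 # Download the CoreOS PXE artifacts into the Apache docroot
 cd /var/www/html/rhcos
 BASEURL=https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/latest/latest
 curl -LO ${BASEURL}/rhcos-4.3.0-x86_64-installer-kernel
 curl -LO ${BASEURL}/rhcos-4.3.0-x86_64-installer-initramfs.img
 curl -LO ${BASEURL}/rhcos-4.3.0-x86_64-metal.raw.gz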


We are going to use symlinks so that we do not have to change configuration files when we update the CoreOS image:

 # Create a symlink to the latest version so that we don't have to update the PXE menu each release
 cd /var/www/html/rhcos
 ln -s rhcos-4.3.0-x86_64-installer-kernel installer-kernel
 ln -s rhcos-4.3.0-x86_64-installer-initramfs.img installer-initramfs.img
 ln -s rhcos-4.3.0-x86_64-metal.raw.gz metal.raw.gz


Setup DNSMasq

Most of the setup for DNSMasq is done in its configuration file, discussed below. However, in a single-master OCP4 setup, I found that the maximum number of concurrent DNS requests (150 by default) was being reached. To increase this, edit the systemd unit file so that it looks like this:

 [Unit]
 Description=DNS caching server.
 After=network.target
 
 [Service]
 ExecStart=/usr/sbin/dnsmasq --dns-forward-max=500 -k
 
 [Install]
 WantedBy=multi-user.target


Finally reload systemd:

 systemctl daemon-reload


After that is complete, we are ready to start editing the dnsmasq.conf file, located at /etc/dnsmasq.conf on CentOS 8. DNSMasq is used for both DHCP and DNS. DNSMasq implements the official DHCP option numbers; reviewing the standard option list will help you make sense of the dhcp-option= settings in the file. While some of the DHCP options in DNSMasq have 'pretty' names, I avoided using them for consistency's sake, since several of the options used below do not have one. The syntax of dhcp-option is as follows:

 dhcp-option=<option number>,<argument>


It is important to know that the Red Hat CoreOS DHCP client does not currently use send-name. This means that while a host will receive its hostname and IP from the DHCP server, it will not register with the DNS portion of DNSMasq. This led me to statically set the IPs with DNSMasq, which is done using tagging like so:

 dhcp-mac=set:<tag name>,<mac address>

Once you have tagged the MAC address, you can use the tag to identify the host. Below is how the hostname is sent to the new VM:

 dhcp-option=tag:<tag name>,12,<hostname>


I am not going to cover all of the OpenShift 4 requirements in depth. For DNS you need forward/reverse records for all hosts, a wildcard entry that points to a load balancer fronting the routers (in our case the gateway IP, handled by HAProxy), SRV records for each master/etcd host, and api and api-int records pointing to the load balancer (again, HAProxy for me).


 # DNSMasq configuration on the gateway VM (serving the internal bridge network)
 
 # dnsmasq.conf
 # Set this to listen on the internal interface
 listen-address=::1,127.0.0.1,10.120.120.250
 interface=enp2s0
 expand-hosts
 # This is the domain the cluster will be on
 domain=lab-cluster.ocp4.lab
 
 # The 'upstream' dns server... This should be the libvirt NAT
 # interface on your computer/laptop
 server=192.168.122.1
 local=/ocp4.lab/
 local=/lab-cluster.ocp4.lab/
 
 # Set the wildcard DNS to point to the internal interface
 # HAProxy will be listening on this interface
 address=/.apps.lab-cluster.ocp4.lab/10.120.120.250
 
 
 dhcp-range=10.120.120.20,10.120.120.220,12h
 dhcp-leasefile=/var/lib/dnsmasq/dnsmasq.leases
 
 # Specifies the pxe binary to serve and the address of
 # the pxeserver (this host)
 dhcp-boot=pxelinux.0,pxeserver,10.120.120.250
 
 # dhcp option 3 is the router
 dhcp-option=3,10.120.120.250
 # dhcp option 6 is the DNS server
 dhcp-option=6,10.120.120.250
 # dhcp option 28 is the broadcast address
 dhcp-option=28,10.120.120.255
 
 # The set is required for tagging. Set a mac address with a specific tag
 dhcp-mac=set:bootstrap,52:54:00:15:30:f2
 dhcp-mac=set:master,52:54:00:06:94:85
 dhcp-mac=set:worker,52:54:00:82:fc:9e
 
 # Match the tag with a specific hostname
 # dhcp option 12 is the hostname to hand out to clients
 dhcp-option=tag:bootstrap,12,bootstrap.lab-cluster.ocp4.lab
 dhcp-option=tag:master,12,master-0.lab-cluster.ocp4.lab
 dhcp-option=tag:worker,12,worker-0.lab-cluster.ocp4.lab
 dhcp-host=52:54:00:15:30:f2,10.120.120.99
 dhcp-host=52:54:00:06:94:85,10.120.120.50
 dhcp-host=52:54:00:82:fc:9e,10.120.120.60
 
 
 # the log-dhcp option is verbose logging used for debugging
 log-dhcp
 
 pxe-prompt="Press F8 for menu.", 10
 pxe-service=x86PC, "PXEBoot Server", pxelinux
 enable-tftp
 tftp-root=/var/lib/tftpboot
 
 # Set the SRV record for etcd. Normally there are 3 or more entries in
 # SRV record
 srv-host=_etcd-server-ssl._tcp.lab-cluster.ocp4.lab,etcd-0.lab-cluster.ocp4.lab,2380,0,10
 
 # host-record options link the hostname and IP for DNS (forward and reverse)
 host-record=bastion.ocp4.lab,10.120.120.250
 host-record=bootstrap.lab-cluster.ocp4.lab,10.120.120.99
 host-record=master-0.lab-cluster.ocp4.lab,10.120.120.50
 host-record=etcd-0.lab-cluster.ocp4.lab,10.120.120.50
 host-record=worker-0.lab-cluster.ocp4.lab,10.120.120.60
 
 # These host records are for the api and api-int to point to HAProxy
 host-record=bastion.ocp4.lab,api.lab-cluster.ocp4.lab,10.120.120.250,3600
 host-record=bastion.ocp4.lab,api-int.lab-cluster.ocp4.lab,10.120.120.250,3600
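
Before relying on the configuration, it can be sanity-checked, and (once the service is running, see below) the records can be verified with dig. The queries below simply reuse the names and addresses from the file above:

 # Check the configuration file for syntax errors
 dnsmasq --test
 
 # Once dnsmasq is running, verify forward, reverse and SRV lookups
 dig +short api.lab-cluster.ocp4.lab @10.120.120.250
 dig +short -x 10.120.120.50 @10.120.120.250
 dig +short _etcd-server-ssl._tcp.lab-cluster.ocp4.lab SRV @10.120.120.250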


HAProxy Setup

The first thing to do is make sure that SELinux allows the needed HAProxy ports:

 semanage port -a -t http_port_t -p tcp 6443
 semanage port -a -t http_port_t -p tcp 22623


Then simply edit /etc/haproxy/haproxy.cfg so that it looks similar to this:

 # HAProxy Config section
 
 # Global settings
 #---------------------------------------------------------------------
 global
     maxconn     20000
     log         /dev/log local0 info
     chroot      /var/lib/haproxy
     pidfile     /var/run/haproxy.pid
     user        haproxy
     group       haproxy
     daemon
 
     # turn on stats unix socket
     stats socket /var/lib/haproxy/stats
 
 #---------------------------------------------------------------------
 # common defaults that all the 'listen' and 'backend' sections will
 # use if not designated in their block
 #---------------------------------------------------------------------
 defaults
     mode                    http
     log                     global
     option                  httplog
     option                  dontlognull
 #    option http-server-close
     option forwardfor       except 127.0.0.0/8
     option                  redispatch
     retries                 3
     timeout http-request    10s
     timeout queue           1m
     timeout connect         10s
     timeout client          300s
     timeout server          300s
     timeout http-keep-alive 10s
     timeout check           10s
     maxconn                 20000
 
 frontend control-plane-api
     bind *:6443
     mode tcp
     default_backend control-plane-api
     option tcplog
 
 backend control-plane-api
     balance source
     mode tcp
     server master-0 master-0.lab-cluster.ocp4.lab:6443 check
     server bootstrap bootstrap.lab-cluster.ocp4.lab:6443 check backup
 
 frontend control-plane-bootstrap
     bind *:22623
     mode tcp
     default_backend control-plane-bootstrap
     option tcplog
 
 backend control-plane-bootstrap
     balance source
     mode tcp
     server master-0 master-0.lab-cluster.ocp4.lab:22623 check
     server bootstrap bootstrap.lab-cluster.ocp4.lab:22623 check backup
 
 frontend ingress-router-http
     bind *:80
     mode tcp
     default_backend ingress-router-http
     option tcplog
 
 backend ingress-router-http
     balance source
     mode tcp
     server worker-0 worker-0.lab-cluster.ocp4.lab:80 check
 
 
 frontend ingress-router-https
     bind *:443
     mode tcp
     default_backend ingress-router-https
     option tcplog
 
 backend ingress-router-https
     balance source
     mode tcp
     server worker-0 worker-0.lab-cluster.ocp4.lab:443 check
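
After editing the file, it is worth checking the syntax before (re)starting the service:

 # Validate the HAProxy configuration
 haproxy -c -f /etc/haproxy/haproxy.cfg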


I am running the ingress router on the single worker node. This requires some tweaking of OpenShift before and after the installation, which is reflected in the HAProxy config above.


HAProxy is prone to failure any time you restart DNSMasq and it cannot resolve hosts during the restart, however brief. I have also noticed that HAProxy will sometimes fail to start on reboot for the same reason. The solution is simply to restart it after DNSMasq is running.
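
One way to soften this dependency is a systemd drop-in that orders HAProxy after DNSMasq and retries it on failure. This is only a sketch of the idea, not something the original setup requires:

 # Order HAProxy after DNSMasq and retry on failure
 mkdir -p /etc/systemd/system/haproxy.service.d
 cat << EOF > /etc/systemd/system/haproxy.service.d/override.conf
 [Unit]
 After=dnsmasq.service
 Wants=dnsmasq.service
 
 [Service]
 Restart=on-failure
 RestartSec=5s
 EOF
 systemctl daemon-reload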


Setting up TFTP

Setting up the TFTP server involves only a couple of steps:

  • Installing the tftp-server package (already done above)
  • Copying the files from syslinux to the TFTP root
 # Copy the bootloaders into the TFTP server
 cp -r /usr/share/syslinux/* /var/lib/tftpboot/
  • Making the directory for the menus
 # make the pxelinux.cfg directory
 mkdir /var/lib/tftpboot/pxelinux.cfg
  • Creating a menu file for each server
 # master pxe file
 
 UI menu.c32
 DEFAULT rhcos
 TIMEOUT 100
 
 MENU TITLE RedHat CoreOS Node Installation
 MENU TABMSG Press ENTER to boot or TAB to edit a menu entry
 
 LABEL rhcos
     MENU LABEL Install RHCOS
     KERNEL http://bastion.ocp4.lab:8080/rhcos/installer-kernel
     INITRD http://bastion.ocp4.lab:8080/rhcos/installer-initramfs.img
     APPEND console=tty0 console=ttyS0 ip=dhcp rd.neednet=1 coreos.inst=yes coreos.inst.ignition_url=http://bastion.ocp4.lab:8080/ignition/master.ign coreos.inst.image_url=http://bastion.ocp4.lab:8080/rhcos/metal.raw.gz coreos.inst.install_dev=vda
 
 IPAPPEND 2


The file above references files hosted by Apache, so we want to make sure the menu uses port 8080 for the pull.

You should create this file for each host you have. The file name should be 01- followed by the MAC address with '-' instead of ':' (e.g. 01-52-54-00-06-94-85). A DNSMasq lease entry looks like this:

 1582066860 52:54:00:06:94:85 10.120.120.50 * 01:52:54:00:06:94:85

Therefore, use the 5th field (with the colons replaced by dashes) to name the matching file on the TFTP server.
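
For example, with the MAC addresses used earlier, the menu files end up named like this (assuming you saved the menus somewhere such as /tmp; the bootstrap and worker menus are identical to the master menu except for the ignition_url, which points at bootstrap.ign or worker.ign instead):

 # Name each PXE menu file after the node's MAC address (01- prefix, dashes)
 cp /tmp/bootstrap.pxe /var/lib/tftpboot/pxelinux.cfg/01-52-54-00-15-30-f2   # bootstrap
 cp /tmp/master.pxe    /var/lib/tftpboot/pxelinux.cfg/01-52-54-00-06-94-85   # master-0
 cp /tmp/worker.pxe    /var/lib/tftpboot/pxelinux.cfg/01-52-54-00-82-fc-9e   # worker-0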


You are now ready to start all of the services

 systemctl enable --now httpd dnsmasq tftp.socket
 systemctl enable --now haproxy


Setup the gateway to be the "bastion" for Installation

First, download the oc client and the OpenShift installer from the OpenShift mirror (https://mirror.openshift.com/pub/openshift-v4/clients/ocp/).

Extract the archives and place the binaries in /bin (or alter your PATH so that the binaries can be found):

 tar xvf openshift-client-linux-4.3.1.tar.gz
 tar xvf openshift-install-linux-4.3.1.tar.gz
 
 mv oc openshift-install /bin/


Then obtain your pull secret from the Red Hat OpenShift Cluster Manager (cloud.redhat.com).


Next, create your SSH key and a directory to launch the installer from:

 # generate an ssh key on bastion (no passphrase)
 ssh-keygen -N '' -f ~/.ssh/id_rsa
 
 # make a new directory for OCP install
 mkdir /root/$(date +%F)
 cd /root/$(date +%F)


Setup your environment variables to make the rest of the configuration easier

ENSURE THAT ALL VARIABLES ARE NOT EMPTY! The rest of this guide relies on these environment variables to fill in configuration files; a quick check is sketched after the block below.


 # Create a variable to hold the SSH KEY
 SSH_KEY=\'$(cat ~/.ssh/id_rsa.pub )\'
 
 # Create a variable to hold the PULL_SECRET
 PULL_SECRET=\'$(cat OCP4_pullsecret.txt)\'
 
 BASE_DOMAIN=ocp4.lab
 CLUSTER_NAME=lab-cluster
 CLUSTER_NETWORK='10.128.0.0/14'
 SERVICE_NETWORK='172.30.0.0/16'
 NUMBER_OF_WORKER_VMS=1
 NUMBER_OF_MASTER_VMS=1
 INGRESS_WILDCARD=apps.${CLUSTER_NAME}.${BASE_DOMAIN}
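
A quick sanity check for empty variables (just a convenience sketch using bash indirect expansion):

 # Print a warning for any variable that ended up empty
 for v in SSH_KEY PULL_SECRET BASE_DOMAIN CLUSTER_NAME CLUSTER_NETWORK \
          SERVICE_NETWORK NUMBER_OF_WORKER_VMS NUMBER_OF_MASTER_VMS INGRESS_WILDCARD; do
     [ -z "${!v}" ] && echo "WARNING: $v is empty"
 done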


The heredoc below substitutes the shell variables created above into the initial cluster configuration YAML, which will be used to create the manifests and Ignition files:

 cat << EOF > install-config.yaml.bak
 apiVersion: v1
 baseDomain: ${BASE_DOMAIN}
 compute:
 - hyperthreading: Enabled   
   name: worker
   replicas: ${NUMBER_OF_WORKER_VMS}
 controlPlane:
   hyperthreading: Enabled   
   name: master 
   replicas: ${NUMBER_OF_MASTER_VMS}
 metadata:
   name: ${CLUSTER_NAME} 
 networking:
   clusterNetwork:
   - cidr: ${CLUSTER_NETWORK} 
     hostPrefix: 23 
   networkType: OpenShiftSDN
   serviceNetwork: 
   - ${SERVICE_NETWORK}
 platform:
   none: {} 
 fips: false 
 pullSecret: ${PULL_SECRET} 
 sshKey: ${SSH_KEY} 
 EOF


We create the backup because running the openshift-install command consumes the file, and we want a copy of it in case we start over or create a new cluster.

Next, copy the file to the name that will be consumed by the installation process and create the manifests:

 cp install-config.yaml.bak install-config.yaml
 openshift-install create manifests

Since we are running a less-than-ideal cluster, we need to modify the ingress controller, or else it will never initialize properly and the cluster will be unhealthy. We are going to assign the controller to a host we will label with "infra":

 cat << EOF > manifests/cluster-ingress-02-config.yml
 apiVersion: config.openshift.io/v1
 kind: Ingress
 metadata:
   creationTimestamp: null
   name: cluster
 spec:
   domain: ${INGRESS_WILDCARD}
   nodePlacement:
     nodeSelector:
       matchLabels:
         node-role.kubernetes.io/infra: "true"
 status: {}
 EOF

Next generate the Ignition configs, which will consume the manifests directory. Copy these ignition configs to the appropriate Apache directory:

 openshift-install create ignition-configs
 cp *.ign /var/www/html/ignition/
 
 # Change the ownership on all the files so apache can access them
 chown -Rv apache. /var/www/html/*

Finally, PXE boot the VMs. If everything is configured properly, the cluster will start to configure itself.

On a computer with limited resources, it is recommended to start only the bootstrap and master VMs at first. After the bootstrap process is complete, you can start the worker/infra/compute node and take down the bootstrap machine. It can take quite some time for this to complete (30 minutes or more).
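
The cluster VMs just need to PXE boot on the bridge with the MAC addresses used in the dnsmasq configuration. Below is a virt-install sketch for master-0; the memory, vCPU, and disk sizes are my own assumptions, but the default virtio disk does show up as /dev/vda, which matches coreos.inst.install_dev=vda in the PXE menu. Repeat for the bootstrap and worker VMs with their respective MAC addresses.

 # PXE boot master-0 on the bridge with the MAC address dnsmasq expects
 virt-install \
   --name master-0 \
   --memory 16384 --vcpus 4 \
   --disk size=120 \
   --os-variant rhel8.1 \
   --network bridge=br0,mac=52:54:00:06:94:85 \
   --pxe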


Post Installation Cluster Config

As long as you are interacting with the cluster on the same date as you started the process, you can run the following to export the KUBECONFIG variable, which is needed for interacting with the cluster:

 export KUBECONFIG=/root/$(date +%F)/auth/kubeconfig


You can run commands such as

 openshift-install wait-for bootstrap-complete
 openshift-install wait-for install-complete

During the addition of the worker node, you may see that it is not Ready, or not even listed. Use the following commands to view the status:

 # check to see if the nodes are ready
 oc get nodes
 
 # Sometimes there are CSRs to approve so put a watch on
 watch oc get csr

When a new server attempts to join the cluster, it issues certificate signing requests that must be approved so that the cluster will trust the new node. If you do not see the node and you have pending CSRs, you will need to approve them:

 # If there are Pending CSRs approve them:
 oc adm certificate approve `oc get csr |grep Pending |awk '{print $1}' `

Once the worker shows up in the list, label it so that the ingress operator can proceed:

 oc label node worker-0.lab-cluster.ocp4.lab node-role.kubernetes.io/infra="true"

Finally, watch the clusteroperators. It can take up to a couple of hours before every operator reports Available=True:

 oc get clusteroperators
 
 NAME                                       VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE
 authentication                             4.3.1     True        False         False      14h
 cloud-credential                           4.3.1     True        False         False      20h
 cluster-autoscaler                         4.3.1     True        False         False      18h
 console                                    4.3.1     True        False         False      9h
 dns                                        4.3.1     True        False         False      18m
 image-registry                             4.3.1     True        False         False      18h
 ingress                                    4.3.1     False       True          True       11m
 insights                                   4.3.1     True        False         False      19h
 kube-apiserver                             4.3.1     True        False         False      19h
 kube-controller-manager                    4.3.1     True        False         False      19h
 kube-scheduler                             4.3.1     True        False         False      19h
 machine-api                                4.3.1     True        False         False      19h
 machine-config                             4.3.1     True        False         False      19h
 marketplace                                4.3.1     True        False         False      18m
 monitoring                                 4.3.1     False       True          True       9h
 network                                    4.3.1     True        True          True       19h
 node-tuning                                4.3.1     True        False         False      9h
 openshift-apiserver                        4.3.1     True        False         False      17m
 openshift-controller-manager               4.3.1     True        False         False      18m
 openshift-samples                          4.3.1     True        False         False      18h
 operator-lifecycle-manager                 4.3.1     True        False         False      19h
 operator-lifecycle-manager-catalog         4.3.1     True        False         False      19h
 operator-lifecycle-manager-packageserver   4.3.1     True        False         False      18m
 service-ca                                 4.3.1     True        False         False      19h
 service-catalog-apiserver                  4.3.1     True        False         False      18h
 service-catalog-controller-manager         4.3.1     True        False         False      18h
 storage                                    4.3.1     True        False         False      18h