ViNO Cluster Operator

A declarative operator for libvirt configuration

Overview

The lifecycle of the Virtual Machines and their relationship to the Kubernetes cluster will be managed using two operators: the vNode-Operator (ViNO) and the Support Infra Provider Operator (SIP).

Description

ViNO is responsible for setting up VM infrastructure, such as:

  • per-node vino pod:
    • libvirt init, e.g.
      • set up the vm-infra bridge
      • provision the tftp/dhcp definitions
    • libvirt launch
    • sushy pod
  • libvirt domains
  • networking
  • BMH objects, with labels:
    • location - e.g. rack: 8 and node: rdm8r008c002 - should follow the k8s semi-standard labels
    • vm role - e.g. node-type: worker
    • vm flavor - e.g. node-flavor: foobar
    • networks - e.g. networks: [foo, bar]
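
As an illustrative sketch of the labeling described above, a BareMetalHost object created by ViNO might look like the following. The label keys here are hypothetical, derived only from the examples in the list, and are not a definitive ViNO schema:

```yaml
apiVersion: metal3.io/v1alpha1
kind: BareMetalHost
metadata:
  name: rdm8r008c002
  labels:
    # illustrative label keys/values from the examples above; not a fixed schema
    rack: "8"
    node: rdm8r008c002
    node-type: worker
    node-flavor: foobar
```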

The Cluster Support Infrastructure Provider, or SIP, is responsible for the lifecycle of:

  • identifying the correct BareMetalHost resources to label (or unlabel) based on scheduling constraints
  • extracting IP address information from BareMetalHost objects to use in the creation of supporting infrastructure
  • creating support infra for the tenant k8s cluster:
    • load balancers (for tenant Kubernetes API)
    • jump pod to access the cluster and nodes via ssh
    • an OIDC provider for the tenant cluster, i.e. Dex
    • potentially more in the future

Development Environment

Pre-requisites

Install Golang 1.15+

ViNO is a project written in Go, and the make targets used to deploy ViNO leverage both Go and Kustomize commands, which require Golang to be installed.

For detailed installation instructions, please see the Golang installation guide.

Install Kustomize v3.2.3+

In order to apply manifests to your cluster via Make targets we suggest the use of Kustomize.

For detailed installation instructions, please see the Kustomize installation guide.

Proxy Setup

If your organization requires development behind a proxy server, you will need to define the following environment variables with your organization's information:

HTTP_PROXY=http://username:password@host:port
HTTPS_PROXY=http://username:password@host:port
NO_PROXY="localhost,127.0.0.1,10.96.0.0/12"
PROXY=http://username:password@host:port
USE_PROXY=true

10.96.0.0/12 is the Kubernetes service CIDR.
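
For example, the variables can be exported for the current shell session as shown below. The proxy host and port are placeholders; substitute your organization's values:

```shell
# Placeholder proxy endpoint -- replace with your organization's proxy.
export HTTP_PROXY=http://proxy.example.com:8080
export HTTPS_PROXY=http://proxy.example.com:8080
export NO_PROXY="localhost,127.0.0.1,10.96.0.0/12"
export PROXY="${HTTP_PROXY}"
export USE_PROXY=true

# Sanity check: confirm the variables are set in the environment.
env | grep -i proxy
```

To make the settings persistent, the same lines can be added to your shell profile (e.g. ~/.bashrc) or to /etc/environment.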

Configure hugepages support

The default vino-builder VM flavors use 1G hugepages. To ensure hugepages support is configured correctly at the host level, do the following:

For Ubuntu 18.04 and earlier, install the 'hugepages' package:

# sudo apt install hugepages

For later ubuntu versions, install the 'libhugetlbfs-bin' package:

# sudo apt install libhugetlbfs-bin

Then run:

# hugeadm --explain

This will display a block like the following describing the current hugepages configuration:

Total System Memory: 96676 MB

Mount Point          Options
/dev/hugepages       rw,relatime,pagesize=1024M

Huge page pools:
      Size Minimum Current Maximum Default
1073741824      64      64      64       *

In the example above, pagesize=1024M indicates a default 1GB page size for the /dev/hugepages mount, and 64 pages are preallocated, giving 64GB of available hugepage memory. If the default page size is something other than 1024M, or the number of preallocated pages is insufficient for the subcluster VMs to be created, add the default_hugepagesz, hugepagesz, and hugepages parameters to the end of GRUB_CMDLINE_LINUX_DEFAULT in your host's grub config file. For example,

# Set the default commandline
GRUB_CMDLINE_LINUX_DEFAULT="console=tty1 console=ttyS0 default_hugepagesz=1G hugepagesz=1G hugepages=64"

(In a typical Ubuntu installation, the grub configuration can usually be found at /etc/default/grub or somewhere under /etc/grub.d. If the host is a VM built from a cloud image, the grub config may be found at /etc/default/grub.d/50-cloudimg-settings.cfg.)
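
The hugepages= value can be sized with simple arithmetic. The VM count and memory figures below are hypothetical examples for illustration, not ViNO defaults:

```shell
# Hypothetical sizing: 8 VMs at 8 GiB of RAM each, backed by 1 GiB hugepages.
vms=8
gib_per_vm=8
hugepage_gib=1

# Total pages to preallocate on the kernel command line.
pages=$(( vms * gib_per_vm / hugepage_gib ))
echo "hugepages=${pages}"
```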

After making changes, run the following to make them take effect:

sudo update-grub
sudo reboot now
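
After the reboot, a quick way to confirm the change took effect (assuming a Linux host with /proc mounted) is to inspect the hugepage counters in /proc/meminfo:

```shell
# HugePages_Total should match the hugepages= value from the kernel command
# line, and Hugepagesize should read 1048576 kB for 1G pages.
grep -E '^(HugePages_Total|HugePages_Free|Hugepagesize)' /proc/meminfo
```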

Deploy ViNO

Airship projects often have to deploy Kubernetes, with common requirements such as supporting network policies or working behind corporate proxies. To that end, the community maintains a Kubernetes deployment script, which is the suggested way to deploy your Kubernetes cluster for development purposes.

Deploy Kubernetes

# curl -Lo deploy-k8s.sh https://opendev.org/airship/charts/raw/branch/master/tools/gate/deploy-k8s.sh
# chmod +x deploy-k8s.sh
# sudo ./deploy-k8s.sh

(Optional) Configure Docker to run as non-root

When Kubernetes is deployed from the script above, it installs Docker but does not configure it to run as a non-root user. The shell commands below are optional and configure Docker to run as a non-root user. They include creating the docker group, adding the current user to that group, and testing functionality with the hello-world container.

If you choose to skip these steps, please continue with the developer environment steps as a root user.

# sudo groupadd docker
# sudo usermod -aG docker $USER

Log out and log back in again for the changes to take effect, then test functionality with a hello world container.

# docker run hello-world

Deploy ViNO

Once your cluster is up and running, you'll need to build the ViNO image and deploy the operator on your cluster:

# make docker-build-controller
# make deploy

Once these steps are completed, you should have a working cluster with ViNO deployed on top of it:

# kubectl get pods --all-namespaces
NAMESPACE     NAME                                        READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-7985fc4dd6-6q5l4    1/1     Running   0          3h7m
kube-system   calico-node-lqzxp                           1/1     Running   0          3h7m
kube-system   coredns-f9fd979d6-gbdzl                     1/1     Running   0          3h7m
kube-system   etcd-ubuntu-virtualbox                      1/1     Running   0          3h8m
kube-system   kube-apiserver-ubuntu-virtualbox            1/1     Running   0          3h8m
kube-system   kube-controller-manager-ubuntu-virtualbox   1/1     Running   0          3h8m
kube-system   kube-proxy-ml4gd                            1/1     Running   0          3h7m
kube-system   kube-scheduler-ubuntu-virtualbox            1/1     Running   0          3h8m
kube-system   storage-provisioner                         1/1     Running   0          3h8m
vino-system   vino-controller-manager-788b994c74-sbf26    2/2     Running   0          25m

Configure the VM Bridge Interface

To configure which bare metal networking interface should be used for the VM bridge, specify it in your ViNO CR in the spec.vmBridge field.
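
A hypothetical sketch of such a CR is shown below. The apiVersion, kind, and interface value are assumptions for illustration; see config/samples/vino_cr.yaml in this repository for the authoritative sample:

```yaml
# Hypothetical sketch -- apiVersion/kind and the interface name are assumptions.
apiVersion: airship.airshipit.org/v1
kind: Vino
metadata:
  name: vino-test-cr
spec:
  # Bare metal networking interface to use for the VM bridge (example value).
  vmBridge: ens1f0
```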

Test basic functionality

# kubectl apply -f config/samples/vino_cr.yaml
# kubectl -n vino-system get pods
# kubectl -n vino-system get ds

Delete the ViNO CR and make sure the DaemonSet is deleted as well:

# kubectl delete vino vino-test-cr
# kubectl -n vino-system get ds
# kubectl -n vino-system get cm

Get in Touch

For any questions about ViNO, or other Airship projects, we encourage you to join the community on Slack/IRC or to participate in the mailing list. Please see the Airship wiki for contact information and the community meeting schedules.