Flesh out usage documentation

Mark Goddard 2017-03-29 19:01:32 +01:00
parent 41686a5031
commit df20c90e2e


@@ -12,13 +12,64 @@ the bare metal nodes that will form the workload node pool.
Configuration
=============
As an Ansible-based project, Kayobe is for the most part configured using YAML
files.
Configuration Location
----------------------
Kayobe configuration is by default located in ``/etc/kayobe`` on the Ansible
control host. This location can be overridden to a different location to avoid
touching the system configuration directory by setting the environment variable
``KAYOBE_CONFIG_PATH``. Similarly, kolla configuration on the Ansible control
host will by default be located in ``/etc/kolla`` and can be overridden via
``KOLLA_CONFIG_PATH``.
Configuration Directory Layout
------------------------------
The Kayobe configuration directory contains Ansible ``extra-vars`` files and
the Ansible inventory. An example of the directory structure is as follows::
extra-vars1.yml
extra-vars2.yml
inventory/
    group_vars/
        group1-vars
        group2-vars
    groups
    host_vars/
        host1-vars
        host2-vars
    hosts
Configuration Patterns
----------------------
Ansible's variable precedence rules are `fairly well documented
<http://docs.ansible.com/ansible/playbooks_variables.html#variable-precedence-where-should-i-put-a-variable>`_
and provide a mechanism we can use to apply site localisation and
customisation of OpenStack in combination with some reasonable default values.
For global configuration options, Kayobe typically uses the following patterns:
- Playbook group variables for the *all* group in
``<kayobe repo>/ansible/group_vars/all/*`` set **global defaults**. These
files should not be modified.
- Playbook group variables for other groups in
``<kayobe repo>/ansible/group_vars/<group>/*`` set **defaults for some subsets
of hosts**. These files should not be modified.
- Extra-vars files in ``${KAYOBE_CONFIG_PATH}/*.yml`` set **custom values
for global variables** and should be used to apply global site localisation
and customisation. By default these variables are commented out.
Additionally, variables can be set on a per-host basis using inventory host
variables files in ``${KAYOBE_CONFIG_PATH}/inventory/host_vars/*``. It should
be noted that variables set in extra-vars files take precedence over per-host
variables.
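As a sketch of this pattern, a site-specific override in an extra-vars file
might look like the following (the file name and variable are purely
illustrative, not actual Kayobe variables)::
# ${KAYOBE_CONFIG_PATH}/example.yml
# The shipped file leaves variables commented out with their global defaults:
#example_variable: default-value
# Uncommenting and editing the variable applies a site-specific override:
example_variable: site-specific-value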
Configuring Kayobe
------------------
From a checkout of the Kayobe repository, the baseline Kayobe configuration
should be copied to the Kayobe configuration path::
@@ -27,6 +78,30 @@ should be copied to the Kayobe configuration path::
Once in place, each of the YAML and inventory files should be manually
inspected and configured as required.
Inventory
^^^^^^^^^
The inventory should contain the following hosts:
Control host
This should be localhost and should be a member of the ``config-mgmt``
group.
Seed hypervisor
If provisioning a seed VM, a host should exist for the hypervisor that
will run the VM, and should be a member of the ``seed-hypervisor`` group.
Seed
The seed host, whether provisioned as a VM by Kayobe or externally managed,
should exist in the ``seed`` group.
Cloud hosts and bare metal compute hosts are not required to exist in the
inventory.
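As a sketch, a minimal inventory ``hosts`` file covering these groups might
look like the following (the seed and hypervisor host names, and the use of
``ansible_connection=local`` for the control host, are illustrative)::
[config-mgmt]
localhost ansible_connection=local

[seed-hypervisor]
seed-hypervisor-0

[seed]
seed-0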
Site Localisation and Customisation
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Site localisation and customisation is applied using Ansible extra-vars files
in ``${KAYOBE_CONFIG_PATH}/*.yml``.
Command Line Interface
======================
@@ -68,9 +143,31 @@ To bootstrap the Ansible control host::
(kayobe-venv) $ kayobe control host bootstrap
Physical Network
================
The physical network can be managed by Kayobe, which uses Ansible's network
modules. Currently Dell Network OS 6 and Dell Network OS 9 switches are
supported but this could easily be extended. To provision the physical
network::
(kayobe-venv) $ kayobe physical network configure --group <group>
The ``--group`` argument is used to specify an Ansible group containing
the switches to be configured.
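For example, assuming the switches to be managed are members of a hypothetical
inventory group named ``switches``::
(kayobe-venv) $ kayobe physical network configure --group switches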
Seed
====
VM Provisioning
---------------
.. note::
It is not necessary to run the seed services in a VM. To use an existing
bare metal host or a VM provisioned outside of Kayobe, this step may be
skipped. Ensure that the Ansible inventory contains a host for the seed.
The seed hypervisor should have CentOS and ``libvirt`` installed. It should
have ``libvirt`` networks configured for all networks that the seed VM needs
access to and a ``libvirt`` storage pool available for the seed VM's volumes.
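As a sketch of this preparation (these are plain ``libvirt`` commands rather
than Kayobe commands; the pool name and target path are illustrative)::
# Check which libvirt networks are defined on the hypervisor.
$ virsh net-list --all
# Define, build and start a directory-backed storage pool for the seed VM's
# volumes, and mark it to start automatically.
$ virsh pool-define-as seed-pool dir --target /var/lib/libvirt/images
$ virsh pool-build seed-pool
$ virsh pool-start seed-pool
$ virsh pool-autostart seed-pool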
@@ -81,10 +178,8 @@ To provision the seed VM::
When this command has completed the seed VM should be active and accessible via
SSH. Kayobe will update the Ansible inventory with the IP address of the VM.
Host Configuration
------------------
To configure the seed host OS::
@@ -98,6 +193,14 @@ To configure the seed host OS::
(kayobe-venv) $ kayobe seed host configure --wipe-disks
Building Container Images
-------------------------
.. note::
It is possible to use prebuilt container images from an image registry such
as Dockerhub. In this case, this step can be skipped.
In some cases it may be necessary to build images locally either to apply
local image customisation or to use a downstream version of kolla. To build
images locally::
@@ -105,16 +208,24 @@ build images locally::
(kayobe-venv) $ kayobe seed container image build
Deploying Containerised Services
--------------------------------
At this point the seed services need to be deployed on the seed VM. These
services are deployed in the ``bifrost_deploy`` container. This command will
also build the Operating System image that will be used to deploy the overcloud
nodes using Disk Image Builder (DIB).
To deploy the seed services in containers::
(kayobe-venv) $ kayobe seed service deploy
After this command has completed the seed services will be active.
Accessing the Seed via SSH (Optional)
-------------------------------------
For SSH access to the seed, first determine the seed's IP address. We can
use the ``kayobe configuration dump`` command to inspect the seed's IP
address::
@@ -137,18 +248,55 @@ Leave the seed VM and return to the shell on the control host::
Overcloud
=========
Discovery
---------
.. note::
If discovery of the overcloud is not possible, a static inventory of servers
using the bifrost ``servers.yml`` file format may be configured using the
``kolla_bifrost_servers`` variable in ``${KAYOBE_CONFIG_PATH}/bifrost.yml``.
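A rough sketch of an entry in that format is shown below (the values are
illustrative, and the exact set of supported fields should be checked against
the bifrost documentation)::
kolla_bifrost_servers:
  example-node-0:
    driver: agent_ipmitool
    driver_info:
      power:
        ipmi_address: 10.0.0.10
        ipmi_username: admin
        ipmi_password: secret
    nics:
      - mac: "52:54:00:aa:bb:cc"
    properties:
      cpus: 8
      ram: 16384
      disk_size: 100
      cpu_arch: x86_64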
Discovery of the overcloud is supported by the ironic inspector service running
in the ``bifrost_deploy`` container on the seed. The service is configured to
PXE boot unrecognised MAC addresses with an IPA ramdisk for introspection. If
an introspected node does not exist in the ironic inventory, ironic inspector
will create a new entry for it.
Discovery of the overcloud is triggered by causing the nodes to PXE boot using
a NIC attached to the overcloud provisioning network. For many servers this
will be the factory default and can be performed by powering them on.
On completion of the discovery process, the overcloud nodes should be
registered with the ironic service running in the seed host's
``bifrost_deploy`` container. The node inventory can be viewed by executing
the following on the seed::
$ docker exec -it bifrost_deploy bash
(bifrost_deploy) $ source env-vars
(bifrost_deploy) $ ironic node-list
In order to interact with these nodes using Kayobe, run the following command
to add them to the Kayobe and bifrost Ansible inventories::
(kayobe-venv) $ kayobe overcloud inventory discover
Provisioning
------------
Provisioning of the overcloud is performed by the ironic service running in the
bifrost container on the seed. To provision the overcloud nodes::
(kayobe-venv) $ kayobe overcloud provision
After this command has completed the overcloud nodes should have been
provisioned with an OS image. The command will wait for the nodes to become
``active`` in ironic and accessible via SSH.
Host Configuration
------------------
To configure the overcloud hosts' OS::
(kayobe-venv) $ kayobe overcloud host configure
@@ -160,34 +308,69 @@ provisioned with an OS image. To configure the overcloud hosts' OS::
(kayobe-venv) $ kayobe overcloud host configure --wipe-disks
Building Container Images
-------------------------
.. note::
It is possible to use prebuilt container images from an image registry such
as Dockerhub. In this case, this step can be skipped.
In some cases it may be necessary to build images locally either to apply local
image customisation or to use a downstream version of kolla. To build images
locally::
(kayobe-venv) $ kayobe overcloud container image build
Deploying Containerised Services
--------------------------------
To deploy the overcloud services in containers::
(kayobe-venv) $ kayobe overcloud service deploy
Once this command has completed the overcloud nodes should have OpenStack
services running in Docker containers.
Interacting with the Control Plane
----------------------------------
Kolla-ansible writes out an environment file that can be used to access the
OpenStack services::
$ source ${KOLLA_CONFIG_PATH:-/etc/kolla}/admin-openrc.sh
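With the environment file sourced, the OpenStack services can be queried with
the standard OpenStack client (assuming ``python-openstackclient`` is
installed), for example::
$ openstack service list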
Other Useful Commands
=====================
Running Kayobe Playbooks on Demand
----------------------------------
In some situations it may be necessary to run an individual Kayobe playbook.
Playbooks are stored in ``<kayobe repo>/ansible/*.yml``. To run an arbitrary
Kayobe playbook::
(kayobe-venv) $ kayobe playbook run <playbook> [<playbook>]
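For example (the playbook path shown is illustrative)::
(kayobe-venv) $ kayobe playbook run <kayobe repo>/ansible/example.yml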
Running Kolla-ansible Commands
------------------------------
To execute a kolla-ansible command::
(kayobe-venv) $ kayobe kolla ansible run <command>
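For example, to pull container images using kolla-ansible's ``pull`` command::
(kayobe-venv) $ kayobe kolla ansible run pull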
Dumping Kayobe Configuration
----------------------------
The Ansible configuration space is quite large, and it can be hard to determine
the final values of Ansible variables. We can use Kayobe's
``configuration dump`` command to view individual variables or the variables
for one or more hosts. To dump Kayobe configuration for one or more hosts::
(kayobe-venv) $ kayobe configuration dump
The output is a JSON-formatted object mapping hosts to their hostvars.
We can use the ``--var-name`` argument to inspect a particular variable or the
``--host`` or ``--hosts`` arguments to view a variable or variables for a
specific host or set of hosts.
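For example, to inspect a single variable for the seed host (the variable name
shown is illustrative)::
(kayobe-venv) $ kayobe configuration dump --host seed --var-name ansible_host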