==========
Deployment
==========

This section describes usage of Kayobe to install an OpenStack cloud onto a set
of bare metal servers. We assume access is available to a node which will act
as the hypervisor hosting the seed node in a VM. We also assume that this seed
hypervisor has access to the bare metal nodes that will form the OpenStack
control plane. Finally, we assume that the control plane nodes have access to
the bare metal nodes that will form the workload node pool.

.. seealso::

   Information on the configuration of a Kayobe environment is available
   :ref:`here <configuration-kayobe>`.

Ansible Control Host
====================

Before starting deployment we must bootstrap the Ansible control host. Tasks
performed here include:

- Install required Ansible roles from Ansible Galaxy.
- Generate an SSH key if necessary and add it to the current user's authorised
  keys.
- Install Kolla Ansible locally at the configured version.

To bootstrap the Ansible control host::

    (kayobe) $ kayobe control host bootstrap

.. _physical-network:

Physical Network
================

The physical network can be managed by Kayobe, which uses Ansible's network
modules. Currently the most popular switches for cloud infrastructure are
supported, but this could easily be extended. To provision the physical
network::

    (kayobe) $ kayobe physical network configure --group <group> [--enable-discovery]

The ``--group`` argument is used to specify an Ansible group containing
the switches to be configured.

The ``--enable-discovery`` argument enables a one-time configuration of ports
attached to baremetal compute nodes to support hardware discovery via ironic
inspector.

It is possible to limit the switch interfaces that will be configured, either
by interface name or interface description::

    (kayobe) $ kayobe physical network configure --group <group> --interface-limit <interface names>
    (kayobe) $ kayobe physical network configure --group <group> --interface-description-limit <interface descriptions>

The names or descriptions should be separated by commas. This may be useful
when adding compute nodes to an existing deployment, in order to avoid changing
the configuration of interfaces in use by active nodes.

The ``--display`` argument will display the candidate switch configuration,
without actually applying it.
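
Switch configuration is defined through group and host variables in the Kayobe
inventory. The following is a minimal sketch only: the switch type, interface
name and configuration lines are illustrative placeholders, and the supported
options are described in the physical network configuration documentation.

.. code-block:: yaml
   :caption: ``$KAYOBE_CONFIG_PATH/inventory/group_vars/switches`` (hypothetical)

   ---
   # Type of switch OS, used to select the Ansible network module.
   switch_type: dellos9

   # Configuration to apply to individual switch interfaces.
   switch_interface_config:
     Te1/1/1:
       description: controller0
       config:
         - "switchport mode access"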

.. seealso::

   Information on configuration of physical network devices is available
   :ref:`here <configuration-physical-network>`.

Seed Hypervisor
===============

.. note::

   It is not necessary to run the seed services in a VM. To use an existing
   bare metal host or a VM provisioned outside of Kayobe, this section may be
   skipped.

.. _deployment-seed-hypervisor-host-configure:

Host Configuration
------------------

To configure the seed hypervisor's host OS, and the Libvirt/KVM virtualisation
support::

    (kayobe) $ kayobe seed hypervisor host configure

.. seealso::

   Information on configuration of hosts is available :ref:`here
   <configuration-hosts>`.

Seed
====

VM Provisioning
---------------

.. note::

   It is not necessary to run the seed services in a VM. To use an existing
   bare metal host or a VM provisioned outside of Kayobe, this step may be
   skipped. Ensure that the Ansible inventory contains a host for the seed.

The seed hypervisor should have CentOS, Rocky Linux or Ubuntu with ``libvirt``
installed. It should have ``libvirt`` networks configured for all networks
that the seed VM needs access to and a ``libvirt`` storage pool available
for the seed VM's volumes. To provision the seed VM::

    (kayobe) $ kayobe seed vm provision

When this command has completed the seed VM should be active and accessible via
SSH. Kayobe will update the Ansible inventory with the IP address of the VM.
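
The resources allocated to the seed VM may be tuned before provisioning. A
minimal sketch, assuming the variable names used in Kayobe's example
configuration (the values shown are illustrative):

.. code-block:: yaml
   :caption: ``$KAYOBE_CONFIG_PATH/seed-vm.yml``

   ---
   # Number of vCPUs to allocate to the seed VM.
   seed_vm_vcpus: 4

   # Amount of memory in MB to allocate to the seed VM.
   seed_vm_memory_mb: 16384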

.. _deployment-seed-host-configure:

Host Configuration
------------------

To configure the seed host OS::

    (kayobe) $ kayobe seed host configure

.. note::

   If the seed host uses disks that have been in use in a previous
   installation, it may be necessary to wipe partition and LVM data from those
   disks. To wipe all disks that are not mounted during host configuration::

       (kayobe) $ kayobe seed host configure --wipe-disks

.. seealso::

   Information on configuration of hosts is available :ref:`here
   <configuration-hosts>`.

Building Container Images
-------------------------

.. note::

   It is possible to use prebuilt container images from an image registry such
   as Quay.io. In this case, this step can be skipped.

It is possible to use prebuilt container images from an image registry such as
Quay.io. In some cases it may be necessary to build images locally either to
apply local image customisation or to use a downstream version of kolla.
Images are built by hosts in the ``container-image-builders`` group, which by
default includes the ``seed``.

To build container images::

    (kayobe) $ kayobe seed container image build

It is possible to build a specific set of images by supplying one or more
image name regular expressions::

    (kayobe) $ kayobe seed container image build bifrost-deploy

In order to push images to a registry after they are built, add the ``--push``
argument.
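
Image builds are influenced by the Kolla configuration. As a minimal sketch,
assuming the ``kolla_base_distro`` variable from Kayobe's example
configuration (the value shown is illustrative):

.. code-block:: yaml
   :caption: ``$KAYOBE_CONFIG_PATH/kolla.yml``

   ---
   # Base distribution used for Kolla container images.
   kolla_base_distro: rocky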

.. seealso::

   Information on configuration of Kolla for building container images is
   available :ref:`here <configuration-kolla>`.

Deploying Containerised Services
--------------------------------

At this point the seed services need to be deployed on the seed VM. These
services are deployed in the ``bifrost_deploy`` container.

This command will also build the Operating System image that will be used to
deploy the overcloud nodes using Disk Image Builder (DIB), if
``overcloud_dib_build_host_images`` is set to ``False``.

.. note::

   If you are using Rocky Linux, building of the Operating System image
   needs to be done using ``kayobe overcloud host image build``.

To deploy the seed services in containers::

    (kayobe) $ kayobe seed service deploy

After this command has completed the seed services will be active.

.. seealso::

   Information on configuration of Kolla Ansible is available :ref:`here
   <configuration-kolla-ansible>`. See :ref:`here <configuration-bifrost>` for
   information about configuring Bifrost.
   :ref:`configuration-bifrost-overcloud-root-image` provides information on
   configuring the root disk image build process. See :ref:`here
   <configuration-seed-custom-containers>` for information about deploying
   additional, custom services (containers) on a seed node.

Building Deployment Images
--------------------------

.. note::

   It is possible to use prebuilt deployment images. In this case, this step
   can be skipped.

It is possible to use prebuilt deployment images from the `OpenStack hosted
tarballs <https://tarballs.openstack.org/ironic-python-agent>`_ or another
source. In some cases it may be necessary to build images locally either to
apply local image customisation or to use a downstream version of Ironic Python
Agent (IPA). In order to build IPA images, the ``ipa_build_images`` variable
should be set to ``True``.
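
A minimal sketch of enabling local IPA image builds, assuming the file layout
of Kayobe's example configuration:

.. code-block:: yaml
   :caption: ``$KAYOBE_CONFIG_PATH/ipa.yml``

   ---
   # Build IPA deployment images locally rather than downloading prebuilt
   # images.
   ipa_build_images: true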

To build images locally::

    (kayobe) $ kayobe seed deployment image build

If images have been built previously, they will not be rebuilt. To force
rebuilding images, use the ``--force-rebuild`` argument.

.. seealso::

   See :ref:`here <configuration-ipa-build>` for information on how to
   configure the IPA image build process.

Building Overcloud Host Disk Images
-----------------------------------

.. note::

   This step is only relevant if ``overcloud_dib_build_host_images`` is set to
   ``True``, which is the default since the Zed release.

Host disk images are deployed on overcloud hosts during provisioning. To build
host disk images::

    (kayobe) $ kayobe overcloud host image build

If images have been built previously, they will not be rebuilt. To force
rebuilding images, use the ``--force-rebuild`` argument.
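
A minimal sketch of setting this variable explicitly, assuming the file layout
of Kayobe's example configuration:

.. code-block:: yaml
   :caption: ``$KAYOBE_CONFIG_PATH/overcloud-dib.yml``

   ---
   # Whether to build overcloud host disk images with Disk Image Builder
   # during this step, rather than during seed service deployment.
   overcloud_dib_build_host_images: true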

.. seealso::

   See :ref:`here <overcloud-dib>` for information on how to configure the
   overcloud host disk image build process.

Accessing the Seed via SSH (Optional)
-------------------------------------

For SSH access to the seed, first determine the seed's IP address. We can
use the ``kayobe configuration dump`` command to inspect the seed's IP
address::

    (kayobe) $ kayobe configuration dump --host seed --var-name ansible_host

The ``kayobe_ansible_user`` variable determines which user account will be used
by Kayobe when accessing the machine via SSH. By default this is ``stack``.
Use this user to access the seed::

    $ ssh <kayobe ansible user>@<seed VM IP>
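
A minimal sketch of overriding this variable, assuming it is placed in
``globals.yml`` as in Kayobe's example configuration:

.. code-block:: yaml
   :caption: ``$KAYOBE_CONFIG_PATH/globals.yml``

   ---
   # User account used by Kayobe for SSH access to managed hosts.
   kayobe_ansible_user: stack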

To see the active Docker containers::

    $ docker ps

Leave the seed VM and return to the shell on the Ansible control host::

    $ exit

.. _deployment-infrastructure-vms:

Infrastructure VMs
==================

.. warning::

   Support for infrastructure VMs is considered experimental: its
   design may change in future versions without a deprecation period.

.. note::

   It is necessary to perform some configuration before these steps
   can be followed. Please see :ref:`configuration-infra-vms`.

VM Provisioning
---------------

The hypervisor used to host a VM is controlled via the ``infra_vm_hypervisor``
variable. It defaults to the seed hypervisor. Each hypervisor should have
CentOS or Ubuntu with ``libvirt`` installed, with ``libvirt`` networks
configured for all networks that the VM needs access to and a ``libvirt``
storage pool available for the VM's volumes. The steps needed for the
:ref:`seed <deployment-seed-host-configure>` and the
:ref:`seed hypervisor <deployment-seed-hypervisor-host-configure>` can be found
above.
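
A minimal sketch of pinning a VM to a particular hypervisor via a host
variable (the host and file name are hypothetical):

.. code-block:: yaml
   :caption: ``$KAYOBE_CONFIG_PATH/inventory/host_vars/dns01`` (hypothetical)

   ---
   # Name of the Ansible host that will host this infra VM.
   infra_vm_hypervisor: seed-hypervisor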

To provision the infra VMs::

    (kayobe) $ kayobe infra vm provision

When this command has completed the infra VMs should be active and accessible
via SSH. Kayobe will update the Ansible inventory with the IP addresses of the
VMs.

Host Configuration
------------------

To configure the infra VM host OS::

    (kayobe) $ kayobe infra vm host configure

.. note::

   If the infra VM host uses disks that have been in use in a previous
   installation, it may be necessary to wipe partition and LVM data from those
   disks. To wipe all disks that are not mounted during host configuration::

       (kayobe) $ kayobe infra vm host configure --wipe-disks

.. seealso::

   Information on configuration of hosts is available :ref:`here
   <configuration-hosts>`.

Using Hooks to deploy services on the VMs
-----------------------------------------

A no-op service deployment command is provided to perform additional
configuration. The intention is for users to define :ref:`hooks to custom
playbooks <custom-playbooks-hooks>` that define any further configuration or
service deployment necessary.

To trigger the hooks::

    (kayobe) $ kayobe infra vm service deploy

Example
^^^^^^^

In this example we have an infra VM host called ``dns01`` that provides DNS
services. The host could be added to a ``dns-servers`` group in the inventory:

.. code-block:: ini
   :caption: ``$KAYOBE_CONFIG_PATH/inventory/infra-vms``

   [dns-servers]
   dns01

   [infra-vms:children]
   dns-servers

We have a custom playbook targeting the ``dns-servers`` group that sets up
the DNS server:

.. code-block:: yaml
   :caption: ``$KAYOBE_CONFIG_PATH/ansible/dns-server.yml``

   ---
   - name: Deploy DNS servers
     hosts: dns-servers
     tasks:
       - name: Install bind packages
         package:
           name:
             - bind
             - bind-utils
         become: true

Finally, we add a symlink to set up the playbook as a hook for the ``kayobe
infra vm host configure`` command::

    (kayobe) $ mkdir -p ${KAYOBE_CONFIG_PATH}/hooks/infra-vm-host-configure/post.d
    (kayobe) $ cd ${KAYOBE_CONFIG_PATH}/hooks/infra-vm-host-configure/post.d
    (kayobe) $ ln -s ../../../ansible/dns-server.yml 50-dns-server.yml

Overcloud
=========

.. _deployment-discovery:

Discovery
---------

.. note::

   If discovery of the overcloud is not possible, a static inventory of servers
   using the bifrost ``servers.yml`` file format may be configured using the
   ``kolla_bifrost_servers`` variable in ``${KAYOBE_CONFIG_PATH}/bifrost.yml``.
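
As a rough sketch only, a static entry might look like the following. The
field layout follows the classic Bifrost ``servers.yml`` format, every value
is a placeholder, and the Bifrost documentation should be consulted for the
authoritative schema:

.. code-block:: yaml
   :caption: ``$KAYOBE_CONFIG_PATH/bifrost.yml``

   ---
   kolla_bifrost_servers:
     controller0:
       driver: ipmi
       driver_info:
         power:
           ipmi_address: 10.0.0.10
           ipmi_username: admin
           ipmi_password: correcthorsebatterystaple
       nics:
         - mac: "52:54:00:aa:bb:cc"
       properties:
         cpu_arch: x86_64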

Discovery of the overcloud is supported by the ironic inspector service running
in the ``bifrost_deploy`` container on the seed. The service is configured to
PXE boot unrecognised MAC addresses with an IPA ramdisk for introspection. If
an introspected node does not exist in the ironic inventory, ironic inspector
will create a new entry for it.

Discovery of the overcloud is triggered by causing the nodes to PXE boot using
a NIC attached to the overcloud provisioning network. For many servers this
will be the factory default and can be performed by powering them on.

On completion of the discovery process, the overcloud nodes should be
registered with the ironic service running in the seed host's
``bifrost_deploy`` container. The node inventory can be viewed by executing
the following on the seed::

    $ docker exec -it bifrost_deploy bash
    (bifrost_deploy) $ export OS_CLOUD=bifrost
    (bifrost_deploy) $ baremetal node list

In order to interact with these nodes using Kayobe, run the following command
to add them to the Kayobe and Kolla-Ansible inventories::

    (kayobe) $ kayobe overcloud inventory discover

.. seealso::

   This `blog post <https://www.stackhpc.com/ironic-idrac-ztp.html>`__
   provides a case study of the discovery process, including automatically
   naming Ironic nodes via switch port descriptions, Ironic Inspector and
   LLDP.

Saving Hardware Introspection Data
----------------------------------

If ironic inspector is in use on the seed host, introspection data will be
stored in the local nginx service. This data may be saved to the control
host::

    (kayobe) $ kayobe overcloud introspection data save

``--output-dir`` may be used to specify the directory in which introspection
data files will be saved. ``--output-format`` may be used to set the format of
the files.

BIOS and RAID Configuration
---------------------------

.. note::

   BIOS and RAID configuration may require one or more power cycles of the
   hardware to complete the operation. These will be performed automatically.

.. note::

   Currently, BIOS and RAID configuration of overcloud hosts is supported for
   Dell servers only.

Configuration of BIOS settings and RAID volumes is currently performed out of
band as a separate task from hardware provisioning. To configure the BIOS and
RAID::

    (kayobe) $ kayobe overcloud bios raid configure
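
The settings applied are driven by Kayobe configuration variables for each
overcloud host group. A hedged sketch for controllers, assuming the variable
naming of Kayobe's example configuration (the BIOS setting shown is an
arbitrary illustration):

.. code-block:: yaml
   :caption: ``$KAYOBE_CONFIG_PATH/controllers.yml``

   ---
   # BIOS settings to apply to controller hosts.
   controller_bios_config:
     NumLock: "On"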

After configuring the nodes' RAID volumes it may be necessary to perform
hardware inspection of the nodes to reconfigure the ironic nodes' scheduling
properties and root device hints. To perform manual hardware inspection::

    (kayobe) $ kayobe overcloud hardware inspect

There are currently a few limitations to configuring BIOS and RAID:

* The Ansible control host must be able to access the BMCs of the servers being
  configured.
* The Ansible control host must have the ``python-dracclient`` Python module
  available to the Python interpreter used by Ansible. The path to the Python
  interpreter is configured via ``ansible_python_interpreter``.

Provisioning
------------

.. note::

   There is a `cloud-init issue
   <https://storyboard.openstack.org/#!/story/2006832>`__ which prevents Ironic
   nodes without names from being accessed via SSH after provisioning. To avoid
   this issue, ensure that all Ironic nodes in the Bifrost inventory are named.
   This may be achieved via :ref:`autodiscovery <deployment-discovery>`, or
   manually, e.g. from the seed::

       $ docker exec -it bifrost_deploy bash
       (bifrost_deploy) $ export OS_CLOUD=bifrost
       (bifrost_deploy) $ baremetal node set ee77b4ca-8860-4003-a18f-b00d01295bda --name controller0

Provisioning of the overcloud is performed by the ironic service running in the
bifrost container on the seed. To provision the overcloud nodes::

    (kayobe) $ kayobe overcloud provision

After this command has completed the overcloud nodes should have been
provisioned with an OS image. The command will wait for the nodes to become
``active`` in ironic and accessible via SSH.

Host Configuration
------------------

To configure the overcloud hosts' OS::

    (kayobe) $ kayobe overcloud host configure

.. note::

   If the controller hosts use disks that have been in use in a previous
   installation, it may be necessary to wipe partition and LVM data from those
   disks. To wipe all disks that are not mounted during host configuration::

       (kayobe) $ kayobe overcloud host configure --wipe-disks

.. seealso::

   Information on configuration of hosts is available :ref:`here
   <configuration-hosts>`.

Building Container Images
-------------------------

.. note::

   It is possible to use prebuilt container images from an image registry such
   as Quay.io. In this case, this step can be skipped.

In some cases it may be necessary to build images locally either to apply local
image customisation or to use a downstream version of kolla. Images are built
by hosts in the ``container-image-builders`` group, which by default includes
the ``seed``. If no seed host is in use, for example in an all-in-one
controller development environment, this group may be modified to cause
containers to be built on the controllers.

To build container images::

    (kayobe) $ kayobe overcloud container image build

It is possible to build a specific set of images by supplying one or more
image name regular expressions::

    (kayobe) $ kayobe overcloud container image build ironic- nova-api

When your environment uses OVN, OVS images will not be built. If you want to
build all Neutron images at the same time, the extra variable
``kolla_build_neutron_ovs`` needs to be set to ``true``::

    (kayobe) $ kayobe overcloud container image build -e kolla_build_neutron_ovs=true

In order to push images to a registry after they are built, add the ``--push``
argument.

.. seealso::

   Information on configuration of Kolla for building container images is
   available :ref:`here <configuration-kolla>`.

Pulling Container Images
------------------------

.. note::

   It is possible to build container images locally, avoiding the need for an
   image registry such as Quay.io. In this case, this step can be skipped.

In most cases suitable prebuilt kolla images will be available on Quay.io. The
`openstack.kolla organisation <https://quay.io/organization/openstack.kolla>`_
provides image repositories suitable for use with kayobe and will be used by
default. To pull images from the configured image registry::

    (kayobe) $ kayobe overcloud container image pull
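
The registry and namespace to pull from can be overridden. A minimal sketch,
assuming the variable names of Kayobe's example configuration (the registry
address is a placeholder):

.. code-block:: yaml
   :caption: ``$KAYOBE_CONFIG_PATH/kolla.yml``

   ---
   # Pull images from a local registry rather than the default.
   kolla_docker_registry: registry.example.com:5000

   # Namespace (organisation) within the registry.
   kolla_docker_namespace: openstack.kolla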

Building Deployment Images
--------------------------

.. note::

   It is possible to use prebuilt deployment images. In this case, this step
   can be skipped.

.. note::

   Deployment images are only required for the overcloud when Ironic is in use.
   Otherwise, this step can be skipped.

It is possible to use prebuilt deployment images from the `OpenStack hosted
tarballs <https://tarballs.openstack.org/ironic-python-agent>`_ or another
source. In some cases it may be necessary to build images locally either to
apply local image customisation or to use a downstream version of Ironic Python
Agent (IPA). In order to build IPA images, the ``ipa_build_images`` variable
should be set to ``True``.

To build images locally::

    (kayobe) $ kayobe overcloud deployment image build

If images have been built previously, they will not be rebuilt. To force
rebuilding images, use the ``--force-rebuild`` argument.

.. seealso::

   See :ref:`here <configuration-ipa-build>` for information on how to
   configure the IPA image build process.

Building Swift Rings
--------------------

.. note::

   This section can be skipped if Swift is not in use.

Swift uses ring files to control placement of data across a cluster. These
files can be generated automatically using the following command::

    (kayobe) $ kayobe overcloud swift rings generate

Deploying Containerised Services
--------------------------------

To deploy the overcloud services in containers::

    (kayobe) $ kayobe overcloud service deploy

Once this command has completed the overcloud nodes should have OpenStack
services running in Docker containers.

.. seealso::

   Information on configuration of Kolla Ansible is available :ref:`here
   <configuration-kolla-ansible>`.

Interacting with the Control Plane
----------------------------------

Kolla-ansible writes out an environment file that can be used to access the
OpenStack admin endpoints as the admin user::

    $ source ${KOLLA_CONFIG_PATH:-/etc/kolla}/admin-openrc.sh

Kayobe also generates an environment file that can be used to access the
OpenStack public endpoints as the admin user, which may be required if the
admin endpoints are not available from the Ansible control host::

    $ source ${KOLLA_CONFIG_PATH:-/etc/kolla}/public-openrc.sh

Performing Post-deployment Configuration
----------------------------------------

To perform post-deployment configuration of the overcloud services::

    (kayobe) $ source ${KOLLA_CONFIG_PATH:-/etc/kolla}/admin-openrc.sh
    (kayobe) $ kayobe overcloud post configure

This will perform the following tasks:

- Register Ironic Python Agent (IPA) images with glance
- Register introspection rules with ironic inspector
- Register a provisioning network and subnet with neutron
- Configure Grafana organisations, dashboards and datasources