Update docs for playbooks collectification

This aims to update our main docs to refer to playbooks as a collection
rather than relying on the old playbooks inside the integrated repo.

Change-Id: If4c26099cd1b0850ad13e890fbfc2c036d96003b
This commit is contained in:
Dmitriy Rabotyagov 2024-10-02 18:36:13 +02:00
parent 394c840dc2
commit 71bc8edf80
9 changed files with 92 additions and 70 deletions


@@ -9,16 +9,16 @@ Run playbooks
The installation process requires running three main playbooks:
- The ``setup-hosts.yml`` Ansible foundation playbook prepares the target
- The ``openstack.osa.setup_hosts`` Ansible foundation playbook prepares the target
hosts for infrastructure and OpenStack services, builds and restarts
containers on target hosts, and installs common components into containers
on target hosts.
- The ``setup-infrastructure.yml`` Ansible infrastructure playbook installs
- The ``openstack.osa.setup_infrastructure`` Ansible infrastructure playbook installs
infrastructure services: Memcached, the repository server, Galera and
RabbitMQ.
- The ``setup-openstack.yml`` OpenStack playbook installs OpenStack services,
- The ``openstack.osa.setup_openstack`` OpenStack playbook installs OpenStack services,
including Identity (keystone), Image (glance), Block Storage (cinder),
Compute (nova), Networking (neutron), etc.
@@ -36,12 +36,11 @@ Before running any playbook, check the integrity of the configuration files.
To check your YAML syntax online, you can use the `YAML Lint program <http://www.yamllint.com/>`_.
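If the online linter is not convenient, a quick offline syntax check can be sketched with PyYAML (its availability on the deployment host is an assumption; the helper name and example path are illustrative):

```shell
# Minimal offline YAML syntax check; a clean parse prints "<file> OK",
# a syntax error raises a Python traceback and a non-zero exit code.
check_yaml() {
    python3 -c 'import sys, yaml; yaml.safe_load(open(sys.argv[1])); print(sys.argv[1], "OK")' "$1"
}
# Example (path is illustrative): check_yaml /etc/openstack_deploy/user_variables.yml
```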
#. Change to the ``/opt/openstack-ansible/playbooks`` directory, and run the
following command:
#. Run the following command:
.. code-block:: console
# openstack-ansible setup-infrastructure.yml --syntax-check
# openstack-ansible openstack.osa.setup_infrastructure --syntax-check
#. Recheck that all indentation is correct. This is important because the
syntax of the configuration files can be correct while not being meaningful
@@ -50,13 +49,11 @@ Before running any playbook, check the integrity of the configuration files.
Run the playbooks to install OpenStack
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#. Change to the ``/opt/openstack-ansible/playbooks`` directory.
#. Run the host setup playbook:
.. code-block:: console
# openstack-ansible setup-hosts.yml
# openstack-ansible openstack.osa.setup_hosts
Confirm satisfactory completion with zero items unreachable or
failed:
@@ -72,7 +69,7 @@ Run the playbooks to install OpenStack
.. code-block:: console
# openstack-ansible setup-infrastructure.yml
# openstack-ansible openstack.osa.setup_infrastructure
Confirm satisfactory completion with zero items unreachable or
failed:
@@ -86,6 +83,13 @@ Run the playbooks to install OpenStack
#. Run the following command to verify the database cluster:
.. note::
In order to run ad-hoc commands, you need to execute the command from the
location of the ``openstack-ansible`` repository (i.e. ``/opt/openstack-ansible``)
or explicitly load the required environment variables for the Ansible
configuration with ``source /usr/local/bin/openstack-ansible.rc``.
.. code-block:: console
# ansible galera_container -m shell \
@@ -124,7 +128,7 @@ Run the playbooks to install OpenStack
.. code-block:: console
# openstack-ansible setup-openstack.yml
# openstack-ansible openstack.osa.setup_openstack
Confirm satisfactory completion with zero items unreachable or
failed.
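The "zero items unreachable or failed" check on a PLAY RECAP can also be scripted; a sketch, where the helper name and log path are illustrative:

```shell
# Succeed only when a saved playbook log's PLAY RECAP reports no
# unreachable and no failed items; helper and log path are illustrative.
recap_clean() {
    ! grep -E 'unreachable=[1-9][0-9]*|failed=[1-9][0-9]*' "$1"
}
# Example: openstack-ansible ... | tee /tmp/setup-infra.log; recap_clean /tmp/setup-infra.log
```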


@@ -149,14 +149,14 @@ one of the nodes.
Galera cluster recovery
-----------------------
Run the ``galera-install`` playbook using the ``galera_force_bootstrap`` variable
Run the ``openstack.osa.galera_server`` playbook using the ``galera_force_bootstrap`` variable
to automatically recover a node or an entire environment.
#. Run the following Ansible command to show the failed nodes:
.. code-block:: shell-session
# openstack-ansible galera-install.yml -e galera_force_bootstrap=True --tags galera_server-config
# openstack-ansible openstack.osa.galera_server -e galera_force_bootstrap=True --tags galera_server-config
You can additionally define a different bootstrap node through the
``galera_server_bootstrap_node`` variable, in case the current bootstrap node is in
@@ -364,9 +364,8 @@ Recovering from certain failures requires rebuilding one or more containers.
.. code-block:: shell-session
# lxc-stop -n node3_galera_container-3ea2cbd3
# lxc-destroy -n node3_galera_container-3ea2cbd3
# rm -rf /openstack/node3_galera_container-3ea2cbd3/*
# openstack-ansible openstack.osa.containers_lxc_destroy \
-l node3_galera_container-3ea2cbd3
In this example, node 3 failed.
@@ -374,7 +373,7 @@ Recovering from certain failures requires rebuilding one or more containers.
.. code-block:: shell-session
# openstack-ansible setup-hosts.yml -l node3 \
# openstack-ansible openstack.osa.containers_lxc_create -l node3 \
-l node3_galera_container-3ea2cbd3
@@ -385,7 +384,7 @@ Recovering from certain failures requires rebuilding one or more containers.
.. code-block:: shell-session
# openstack-ansible setup-infrastructure.yml \
# openstack-ansible openstack.osa.setup_infrastructure \
--limit node3_galera_container-3ea2cbd3


@@ -234,8 +234,8 @@ onwards.
.. code-block:: console
$ openstack-ansible rabbitmq-install.yml --tags rabbitmq-config
$ openstack-ansible setup-openstack.yml --tags common-mq,post-install
$ openstack-ansible openstack.osa.rabbitmq_server --tags rabbitmq-config
$ openstack-ansible openstack.osa.setup_openstack --tags common-mq,post-install
In order to take advantage of these steps, we suggest setting
``oslomsg_rabbit_quorum_queues`` to False before upgrading to 2024.1. Then, once


@@ -38,13 +38,13 @@ needed in an environment, it is possible to create additional nodes.
.. code:: console
# openstack-ansible playbooks/setup-hosts.yml --limit localhost,infra<node-ID>,infra<node-ID>-host_containers
# openstack-ansible openstack.osa.setup_hosts --limit localhost,infra<node-ID>,infra<node-ID>-host_containers
#. In case you're relying on ``/etc/hosts`` content, you should also update it for all hosts
.. code:: console
# openstack-ansible openstack-hosts-setup.yml -e openstack_hosts_group=all --tags openstack_hosts-file
# openstack-ansible openstack.osa.openstack_hosts_setup -e openstack_hosts_group=all --tags openstack_hosts-file
#. Next we need to expand the galera/rabbitmq clusters, which is done during
``setup-infrastructure.yml``, so we will run this playbook without limits.
@@ -64,7 +64,7 @@ needed in an environment, it is possible to create additional nodes.
.. code:: console
# openstack-ansible playbooks/setup-infrastructure.yml -e galera_force_bootstrap=true
# openstack-ansible openstack.osa.setup_infrastructure -e galera_force_bootstrap=true
#. Once the infrastructure playbooks are done, it is the turn of the OpenStack
services to be deployed. Most of the services are fine to run with limits, but some,
@@ -72,8 +72,8 @@ needed in an environment, it is possible to create additional nodes.
.. code:: console
# openstack-ansible playbooks/os-keystone-install.yml
# openstack-ansible playbooks/setup-openstack.yml --limit '!keystone_all',localhost,infra<node-ID>,infra<node-ID>-host_containers
# openstack-ansible openstack.osa.keystone
# openstack-ansible openstack.osa.setup_openstack --limit '!keystone_all',localhost,infra<node-ID>,infra<node-ID>-host_containers
Test new infra nodes
@@ -113,9 +113,9 @@ cluster.
.. code-block:: shell-session
# cd /opt/openstack-ansible/playbooks
# openstack-ansible setup-hosts.yml --limit localhost,NEW_HOST_NAME
# openstack-ansible openstack-hosts-setup.yml -e openstack_hosts_group=nova_compute --tags openstack_hosts-file
# openstack-ansible setup-openstack.yml --limit localhost,NEW_HOST_NAME
# openstack-ansible openstack.osa.setup_hosts --limit localhost,NEW_HOST_NAME
# openstack-ansible openstack.osa.openstack_hosts_setup -e openstack_hosts_group=nova_compute --tags openstack_hosts-file
# openstack-ansible openstack.osa.setup_openstack --limit localhost,NEW_HOST_NAME
Alternatively you can try using the new compute node deployment script
``/opt/openstack-ansible/scripts/add-compute.sh``.
@@ -396,8 +396,7 @@ Destroying Containers
.. code-block:: console
# cd /opt/openstack-ansible/playbooks
# openstack-ansible lxc-containers-destroy --limit localhost,<container name|container group>
# openstack-ansible openstack.osa.containers_lxc_destroy --limit localhost,<container name|container group>
.. note::
@@ -420,7 +419,7 @@ Destroying Containers
.. code-block:: console
# cd /opt/openstack-ansible/playbooks
# openstack-ansible lxc-containers-create --limit localhost,lxc_hosts,<container name|container
# openstack-ansible openstack.osa.containers_lxc_create --limit localhost,lxc_hosts,<container name|container
group>
The ``lxc_hosts`` host group must be included, as the playbook and roles executed require the


@@ -195,7 +195,7 @@ Deploying Infrastructure Hosts
.. code:: console
openstack-ansible setup-hosts.yml --limit localhost,reinstalled_host*
openstack-ansible openstack.osa.setup_hosts --limit localhost,reinstalled_host*
#. This step should be executed when you are re-configuring one of haproxy
hosts
@@ -208,17 +208,17 @@ Deploying Infrastructure Hosts
.. code:: console
openstack-ansible haproxy-install.yml --limit localhost,reinstalled_host --skip-tags keepalived
openstack-ansible repo-install.yml --tags haproxy-service-config
openstack-ansible galera-install.yml --tags haproxy-service-config
openstack-ansible rabbitmq-install.yml --tags haproxy-service-config
openstack-ansible setup-openstack.yml --tags haproxy-service-config
openstack-ansible openstack.osa.haproxy --limit localhost,reinstalled_host --skip-tags keepalived
openstack-ansible openstack.osa.repo --tags haproxy-service-config
openstack-ansible openstack.osa.galera_server --tags haproxy-service-config
openstack-ansible openstack.osa.rabbitmq_server --tags haproxy-service-config
openstack-ansible openstack.osa.setup_openstack --tags haproxy-service-config
Once this is done, you can deploy keepalived again:
.. code:: console
openstack-ansible haproxy-install.yml --tags keepalived --limit localhost,reinstalled_host
openstack-ansible openstack.osa.haproxy --tags keepalived --limit localhost,reinstalled_host
After that you might want to ensure that "local" backends remain disabled.
You can also use a playbook from `OPS repository`_ for this:
@@ -231,8 +231,8 @@ Deploying Infrastructure Hosts
.. code:: console
openstack-ansible setup-infrastructure.yml --limit localhost,repo_all,rabbitmq_all,reinstalled_host*
openstack-ansible setup-openstack.yml --limit localhost,keystone_all,reinstalled_host*
openstack-ansible openstack.osa.setup_infrastructure --limit localhost,repo_all,rabbitmq_all,reinstalled_host*
openstack-ansible openstack.osa.setup_openstack --limit localhost,keystone_all,reinstalled_host*
(* because we need to include containers in the limit)
@@ -253,7 +253,7 @@ Deploying Infrastructure Hosts
.. code:: console
openstack-ansible galera-install.yml --limit localhost,reinstalled_host* -e galera_server_bootstrap_node="{{ groups['galera_all'][-1] }}"
openstack-ansible openstack.osa.galera_server --limit localhost,reinstalled_host* -e galera_server_bootstrap_node="{{ groups['galera_all'][-1] }}"
You'll now have mariadb running, and it should be synced with
non-primaries.
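Whether the rebuilt node is actually back in sync can be judged from the Galera status variable ``wsrep_local_state_comment``; a sketch that parses such output (the ``mysql`` invocation in the comment is an assumption about your setup, and the helper name is illustrative):

```shell
# Succeed when piped "SHOW STATUS" output reports the node as Synced;
# wsrep_local_state_comment is a real Galera status variable, the helper
# name is illustrative.
is_synced() {
    grep -q 'wsrep_local_state_comment[[:space:]]*Synced'
}
# Example (illustrative):
# mysql -e "SHOW STATUS LIKE 'wsrep_local_state_comment'" | is_synced && echo in-sync
```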
@@ -287,20 +287,20 @@ Deploying Infrastructure Hosts
.. code:: console
openstack-ansible rabbitmq-install.yml -e rabbitmq_primary_cluster_node="{{ hostvars[groups['rabbitmq_all'][-1]]['ansible_facts']['hostname'] }}"
openstack-ansible openstack.osa.rabbitmq_server -e rabbitmq_primary_cluster_node="{{ hostvars[groups['rabbitmq_all'][-1]]['ansible_facts']['hostname'] }}"
#. Now the repo host primary
.. code:: console
openstack-ansible repo-install.yml -e glusterfs_bootstrap_node="{{ groups['repo_all'][-1] }}"
openstack-ansible openstack.osa.repo -e glusterfs_bootstrap_node="{{ groups['repo_all'][-1] }}"
#. Everything should now be in a working state and we can finish it off with
.. code:: console
openstack-ansible setup-infrastructure.yml --limit localhost,repo_all,rabbitmq_all,reinstalled_host*
openstack-ansible setup-openstack.yml --limit localhost,keystone_all,reinstalled_host*
openstack-ansible openstack.osa.setup_infrastructure --limit localhost,repo_all,rabbitmq_all,reinstalled_host*
openstack-ansible openstack.osa.setup_openstack --limit localhost,keystone_all,reinstalled_host*
#. Adjust HAProxy status
@@ -338,9 +338,9 @@ Deploying Compute & Network Hosts
.. code:: console
openstack-ansible setup-hosts.yml --limit localhost,reinstalled_host*
openstack-ansible setup-infrastructure.yml --limit localhost,reinstalled_host*
openstack-ansible setup-openstack.yml --limit localhost,reinstalled_host*
openstack-ansible openstack.osa.setup_hosts --limit localhost,reinstalled_host*
openstack-ansible openstack.osa.setup_infrastructure --limit localhost,reinstalled_host*
openstack-ansible openstack.osa.setup_openstack --limit localhost,reinstalled_host*
(* because we need to include containers in the limit)


@@ -182,7 +182,7 @@ After that you can proceed with standard OpenStack upgrade steps:
.. code-block:: console
# openstack-ansible setup-hosts.yml --limit '!galera_all:!rabbitmq_all' -e package_state=latest
# openstack-ansible openstack.osa.setup_hosts --limit '!galera_all:!rabbitmq_all' -e package_state=latest
This command is the same as setting up hosts on a new installation. The
``galera_all`` and ``rabbitmq_all`` host groups are excluded to prevent
@@ -194,7 +194,7 @@ container restarts.
.. code-block:: console
# openstack-ansible setup-hosts.yml -e 'lxc_container_allow_restarts=false' --limit 'galera_all:rabbitmq_all'
# openstack-ansible openstack.osa.setup_hosts -e 'lxc_container_allow_restarts=false' --limit 'galera_all:rabbitmq_all'
Upgrade infrastructure
~~~~~~~~~~~~~~~~~~~~~~
@@ -204,7 +204,7 @@ ensure that rabbitmq and mariadb are upgraded, we pass the appropriate flags.
.. code-block:: console
# openstack-ansible setup-infrastructure.yml -e 'galera_upgrade=true' -e 'rabbitmq_upgrade=true' -e package_state=latest
# openstack-ansible openstack.osa.setup_infrastructure -e 'galera_upgrade=true' -e 'rabbitmq_upgrade=true' -e package_state=latest
With this complete, we can now restart the mariadb containers one at a time,
ensuring that each is started, responding, and synchronized with the other
@@ -223,7 +223,7 @@ We can now go ahead with the upgrade of all the OpenStack components.
.. code-block:: console
# openstack-ansible setup-openstack.yml -e package_state=latest
# openstack-ansible openstack.osa.setup_openstack -e package_state=latest
Upgrade Ceph
~~~~~~~~~~~~
@@ -244,8 +244,8 @@ lab environment before upgrading.
.. warning::
Ceph related playbooks are included as part of ``setup-infrastructure.yml``
and ``setup-openstack.yml`` playbooks, so you should be cautious when
Ceph related playbooks are included as part of ``openstack.osa.setup_infrastructure``
and ``openstack.osa.setup_openstack`` playbooks, so you should be cautious when
running them during OpenStack upgrades.
If you have ``upgrade_ceph_packages: true`` in your user variables or
provided ``-e upgrade_ceph_packages=true`` as argument and run


@@ -49,20 +49,20 @@ A minor upgrade typically requires the following steps:
.. code-block:: console
# openstack-ansible setup-hosts.yml -e package_state=latest
# openstack-ansible openstack.osa.setup_hosts -e package_state=latest
#. Update the infrastructure:
.. code-block:: console
# openstack-ansible -e rabbitmq_upgrade=true \
setup-infrastructure.yml
openstack.osa.setup_infrastructure
#. Update all OpenStack services:
.. code-block:: console
# openstack-ansible setup-openstack.yml -e package_state=latest
# openstack-ansible openstack.osa.setup_openstack -e package_state=latest
.. note::
@@ -80,13 +80,13 @@ command:
.. code-block:: console
# openstack-ansible os-nova-install.yml --limit nova_compute
# openstack-ansible openstack.osa.nova --limit nova_compute
To update only a single Compute host, run the following command:
.. code-block:: console
# openstack-ansible os-nova-install.yml --limit <node-name>
# openstack-ansible openstack.osa.nova --limit <node-name>
.. note::
@@ -117,23 +117,18 @@ script to show all groups and their hosts. For example:
To see which hosts a playbook runs against, and to see which tasks are
performed, run the following commands (for example):
#. Change directory to the repository clone playbooks directory:
.. code-block:: console
# cd /opt/openstack-ansible/playbooks
#. See the hosts in the ``nova_compute`` group that a playbook runs against:
.. code-block:: console
# openstack-ansible os-nova-install.yml --limit nova_compute \
# openstack-ansible openstack.osa.nova --limit nova_compute \
--list-hosts
#. See the tasks that are executed on hosts in the ``nova_compute`` group:
.. code-block:: console
# openstack-ansible os-nova-install.yml --limit nova_compute \
# openstack-ansible openstack.osa.nova --limit nova_compute \
--skip-tags 'nova-key' \
--list-tasks


@@ -253,10 +253,9 @@ Finally, run the playbooks by executing:
.. code-block:: shell-session
# cd /opt/openstack-ansible/playbooks
# openstack-ansible setup-hosts.yml
# openstack-ansible setup-infrastructure.yml
# openstack-ansible setup-openstack.yml
# openstack-ansible openstack.osa.setup_hosts
# openstack-ansible openstack.osa.setup_infrastructure
# openstack-ansible openstack.osa.setup_openstack
The installation process will take a while to complete, but here are some
general estimates:


@@ -0,0 +1,26 @@
---
prelude: >
All playbooks for OpenStack-Ansible were moved under the openstack.osa
collection, which is installed as part of the bootstrap-ansible.sh
process.
We left playbooks under their original names and locations for
backwards compatibility, though they just import the corresponding
playbooks from the collection.
features:
- |
Functional code for playbooks was moved from the playbooks/ folder of
the OpenStack-Ansible repository to the openstack.osa collection.
This means you can control the version of the playbooks separately from
the OpenStack-Ansible repository itself. It also makes it possible to call
playbooks without providing an explicit path to them, using their FQCN,
for example: ``openstack-ansible openstack.osa.setup_openstack``
We have also renamed some playbooks to better reflect their purpose.
For instance, ``playbooks/os-nova-install.yml`` became
``openstack.osa.nova``.
For backwards compatibility we kept the old playbook names/paths, though
they contain a simple import of the corresponding playbook from the collection.