Improvements to additional cellv2 documentation

Fixes a statement in 'Configuring AZs for Nova' in relation to
multiple cells.
Other improvements to creating and managing cells.

Change-Id: I1724eadfe572732aca1425435d3a08cdf7f7ecac
Martin Schuppert 2019-11-21 13:06:02 +01:00
parent 252e86c930
commit 11e5661b47
5 changed files with 96 additions and 53 deletions

View File

@ -19,20 +19,6 @@ gets created. The central cell must also be configured as a specific AZ
Configuring AZs for Nova (compute)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The Nova AZ configuration for compute nodes in the stack can be set with the
`NovaComputeAvailabilityZone` parameter during the deployment.
The value of the parameter is the name of the AZ where compute nodes in that
stack will be added.
For example, the following environment file would be used to add compute nodes
in the `cell1` stack to the `cell1` AZ:
.. code-block:: yaml
parameter_defaults:
NovaComputeAvailabilityZone: cell1
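The environment file is then passed to the cell deployment like any other
environment file. A minimal sketch, assuming the stack is named `cell1` and the
file is saved as `cell1/cell1-az.yaml` (both names are examples; append the
file to the deploy command already in use):

.. code-block:: bash

source stackrc
openstack overcloud deploy --templates \
--stack cell1 \
<other environment files> \
-e cell1/cell1-az.yaml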
It's also possible to configure the AZ for a compute node by adding it to a
host aggregate after the deployment is completed. The following commands show
creating a host aggregate, an associated AZ, and adding compute nodes to a
@ -45,6 +31,15 @@ creating a host aggregate, an associated AZ, and adding compute nodes to a
openstack aggregate add host cell1 hostA
openstack aggregate add host cell1 hostB
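For reference, an aggregate with an associated AZ is created and the result
verified like this (a sketch using the standard openstack CLI; `cell1` is the
example name from above):

.. code-block:: bash

source overcloudrc
# create the aggregate and its AZ in one step
openstack aggregate create --zone cell1 cell1
# verify the hosts and the new AZ
openstack aggregate show cell1
openstack availability zone list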
.. note::
Right now we cannot use `OS::TripleO::Services::NovaAZConfig` to auto-create
the AZ during the deployment, as at this stage the initial cell creation is
not complete. Further work is needed to fully automate the post cell creation
steps before `OS::TripleO::Services::NovaAZConfig` can be used.
Routed networks
---------------

View File

@ -172,6 +172,11 @@ a parameter file for the cell compute stack, e.g. `cell1/cell1-cmp.yaml`:
.. code-block:: yaml
resource_registry:
# Since the compute stack deploys only compute nodes, ExternalVipPorts
# are not required.
OS::TripleO::Network::Ports::ExternalVipPort: /usr/share/openstack-tripleo-heat-templates/network/ports/noop.yaml
parameter_defaults:
# number of controllers/computes in the cell
CellControllerCount: 0
@ -228,9 +233,10 @@ as explained in :ref:`cell_host_discovery`.
Create and add the node to an Availability Zone
_______________________________________________
After a cell got provisioned, it is required to create an availability zone for the
compute stack; it is not enough to just create an availability zone for the complete
cell. In this use case we want to make sure an instance created in the compute group
stays in it when performing a migration. Check :ref:`cell_availability_zone` for more
information on how to create an availability zone and add the node.
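A minimal sketch of that per-compute-stack AZ, assuming an AZ named after an
example `cell1-cmp` compute stack and a node following the
`ComputeHostnameFormat` used in this guide (both names are illustrative):

.. code-block:: bash

source overcloudrc
openstack aggregate create --zone cell1-cmp cell1-cmp
openstack aggregate add host cell1-cmp cell1-compute-0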
After that the cell is deployed and can be used.

View File

@ -123,7 +123,7 @@ Each cell has some mandatory parameters which need to be set using an
environment file.
Add the following content into a parameter file for the cell, e.g. `cell1/cell1.yaml`:
.. code-block::
resource_registry:
# since the same networks are used in this example, the
@ -134,9 +134,11 @@ Add the following content into a parameter file for the cell, e.g. `cell1/cell1.
OS::TripleO::Network::StorageMgmt: OS::Heat::None
OS::TripleO::Network::Tenant: OS::Heat::None
OS::TripleO::Network::Management: OS::Heat::None
OS::TripleO::Network::Ports::OVNDBsVipPort: /usr/share/openstack-tripleo-heat-templates/network/ports/noop.yaml
OS::TripleO::Network::Ports::RedisVipPort: /usr/share/openstack-tripleo-heat-templates/network/ports/noop.yaml
parameter_defaults:
# CELL Parameter to reflect that this is an additional CELL
NovaAdditionalCell: True
# The DNS names for the VIPs for the cell
@ -150,10 +152,14 @@ Add the following content into a parameter file for the cell, e.g. `cell1/cell1.
OvercloudCellControllerFlavor: cellcontroller
OvercloudComputeFlavor: compute
# Number of controllers/computes in the cell
CellControllerCount: 1
ComputeCount: 1
# Compute names need to be unique across cells. Make sure to have a unique
# hostname format for cell nodes
ComputeHostnameFormat: 'cell1-compute-%index%'
# default gateway
ControlPlaneStaticRoutes:
- ip_netmask: 0.0.0.0/0
@ -166,6 +172,12 @@ The above file disables creating networks as the networks from the overcloud sta
are reused. It also specifies that this will be an additional cell using parameter
`NovaAdditionalCell`.
.. note::
Compute hostnames need to be unique across cells. Make sure to use
`ComputeHostnameFormat` to have unique hostnames.
Create the network configuration for `cellcontroller` and add to environment file
_________________________________________________________________________________
Depending on the network configuration of the used hardware and network
@ -260,16 +272,24 @@ to create a cell after the deployment steps finished successfully. In
addition :ref:`cell_create_cell_manual` explains the tasks being automated
by this ansible playbook.
.. note::
When using multiple additional cells, don't place all inventories of the cells
in one directory. The current version of the `create-nova-cell-v2.yaml` playbook
uses `CellController[0]` to get the `database_connection` and `transport_url`
to create the new cell. When all cell inventories get added to the same directory,
`CellController[0]` might not be the correct cell controller for the new cell.
.. code-block:: bash
source stackrc
mkdir inventories-cell1
for i in overcloud cell1; do \
/usr/bin/tripleo-ansible-inventory \
--static-yaml-inventory inventories-cell1/${i}.yaml --stack ${i}; \
done
ANSIBLE_HOST_KEY_CHECKING=False ANSIBLE_SSH_RETRIES=3 ansible-playbook -i inventories-cell1 \
/usr/share/ansible/tripleo-playbooks/create-nova-cell-v2.yaml \
-e tripleo_cellv2_cell_name=cell1 \
-e tripleo_cellv2_containercli=docker
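For a hypothetical second cell the same pattern repeats with a separate
inventory directory, which keeps `CellController[0]` pointing at the right
cell controller (all `cell2` names are examples):

.. code-block:: bash

mkdir inventories-cell2
for i in overcloud cell2; do \
/usr/bin/tripleo-ansible-inventory \
--static-yaml-inventory inventories-cell2/${i}.yaml --stack ${i}; \
done

ANSIBLE_HOST_KEY_CHECKING=False ANSIBLE_SSH_RETRIES=3 ansible-playbook -i inventories-cell2 \
/usr/share/ansible/tripleo-playbooks/create-nova-cell-v2.yaml \
-e tripleo_cellv2_cell_name=cell2 \
-e tripleo_cellv2_containercli=docker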

View File

@ -14,7 +14,8 @@ the cell host discovery:
.. code-block:: bash
CTRL=overcloud-controller-0
CTRL_IP=$(openstack server list -f value -c Networks --name $CTRL | sed 's/ctlplane=//')
# CONTAINERCLI can be either docker or podman
export CONTAINERCLI='docker'
@ -27,16 +28,27 @@ the cell host discovery:
ssh heat-admin@${CTRL_IP} sudo ${CONTAINERCLI} exec -i -u root nova_api \
nova-manage cell_v2 list_hosts
# add new node to the availability zone
source overcloudrc
(overcloud) $ openstack aggregate add host <cell name> <compute host>
.. note::
Optionally the cell uuid can be specified to the `discover_hosts` and
`list_hosts` commands to only target a specific cell.
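A sketch of the scoped variant, reusing the `CTRL_IP` and `CONTAINERCLI`
settings from above (`<cell uuid>` is taken from the `nova-manage cell_v2
list_cells` output):

.. code-block:: bash

ssh heat-admin@${CTRL_IP} sudo ${CONTAINERCLI} exec -i -u root nova_api \
nova-manage cell_v2 discover_hosts --cell_uuid <cell uuid> --verbose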
Delete a compute from a cell
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
* As initial step migrate all instances off the compute.

* From one of the overcloud controllers, delete the computes from the cell:
.. code-block:: bash
source stackrc
CTRL=overcloud-controller-0
CTRL_IP=$(openstack server list -f value -c Networks --name $CTRL | sed 's/ctlplane=//')
# CONTAINERCLI can be either docker or podman
export CONTAINERCLI='docker'
@ -49,15 +61,15 @@ As initial step migrate all instances off the compute.
ssh heat-admin@${CTRL_IP} sudo ${CONTAINERCLI} exec -i -u root nova_api \
nova-manage cell_v2 delete_host --cell_uuid <uuid> --host <compute>
* Delete the node from the cell stack
See :doc:`../post_deployment/delete_nodes`.
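A sketch of that scale-down, assuming the compute node lives in an example
`cell1-cmp` stack (see the linked document for the full procedure):

.. code-block:: bash

source stackrc
openstack overcloud node delete --stack cell1-cmp <compute node>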
* Delete the resource providers from placement

This step is required as otherwise adding a compute node with the same hostname
will fail to register and update its resources with the placement
service:
.. code-block:: bash
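# A hedged sketch of this step, mirroring the "Delete a cell" section
# below; it assumes the osc-placement client is installed and <uuid> is
# taken from the list output.
source overcloudrc
openstack resource provider list
openstack resource provider delete <uuid>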
@ -74,13 +86,14 @@ As initial step migrate all instances off the compute.
Delete a cell
~~~~~~~~~~~~~
* As initial step delete all instances from the cell.

* From one of the overcloud controllers, delete all computes from the cell:
.. code-block:: bash
CTRL=overcloud-controller-0
CTRL_IP=$(openstack server list -f value -c Networks --name $CTRL | sed 's/ctlplane=//')
# CONTAINERCLI can be either docker or podman
export CONTAINERCLI='docker'
@ -93,23 +106,25 @@ As initial step delete all instances from cell
ssh heat-admin@${CTRL_IP} sudo ${CONTAINERCLI} exec -i -u root nova_api \
nova-manage cell_v2 delete_host --cell_uuid <uuid> --host <compute>
* On the cell controller delete all deleted instances from the database:
.. code-block:: bash
CELL_CTRL=cell1-cellcontrol-0
CELL_CTRL_IP=$(openstack server list -f value -c Networks --name $CELL_CTRL | sed 's/ctlplane=//')
# CONTAINERCLI can be either docker or podman
export CONTAINERCLI='docker'
ssh heat-admin@${CELL_CTRL_IP} sudo ${CONTAINERCLI} exec -i -u root nova_conductor \
nova-manage db archive_deleted_rows --until-complete --verbose
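Archiving only moves deleted rows to the shadow tables; optionally they can be
purged afterwards (a sketch using the same connection conventions as above):

.. code-block:: bash

ssh heat-admin@${CELL_CTRL_IP} sudo ${CONTAINERCLI} exec -i -u root nova_conductor \
nova-manage db purge --all --verbose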
* From one of the overcloud controllers, delete the cell:
.. code-block:: bash
CTRL=overcloud-controller-0
CTRL_IP=$(openstack server list -f value -c Networks --name $CTRL | sed 's/ctlplane=//')
# CONTAINERCLI can be either docker or podman
export CONTAINERCLI='docker'
@ -122,7 +137,7 @@ As initial step delete all instances from cell
ssh heat-admin@${CTRL_IP} sudo ${CONTAINERCLI} exec -i -u root nova_api \
nova-manage cell_v2 delete_cell --cell_uuid <uuid>
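The removal can be verified by listing the remaining cells (same conventions
as above):

.. code-block:: bash

ssh heat-admin@${CTRL_IP} sudo ${CONTAINERCLI} exec -i -u root nova_api \
nova-manage cell_v2 list_cells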
* Delete the cell stack:
.. code-block:: bash
@ -133,15 +148,16 @@ As initial step delete all instances from cell
If the cell consists of a controller and a compute stack, delete the compute
stack first and then the controller stack.
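A sketch of the stack removal from the undercloud, assuming the example stack
names `cell1-cmp` and `cell1` (`--wait` blocks until the delete finishes):

.. code-block:: bash

source stackrc
openstack stack delete cell1-cmp --wait
openstack stack delete cell1 --wait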
* From a system which can reach the placement endpoint, delete the resource providers from placement

This step is required as otherwise adding a compute node with the same hostname
will fail to register as a resource with the placement service.
On CentOS/RHEL 8 the required package is `python3-osc-placement`:
.. code-block:: bash
sudo yum install python2-osc-placement
source overcloudrc
openstack resource provider list
+--------------------------------------+---------------------------------------+------------+
| uuid | name | generation |
@ -165,7 +181,8 @@ the steps from the minor update procedure.
Once the control plane stack is updated, re-run the export command to recreate the
required input files for each separate cell stack.
.. note::
Before re-running the export command, backup the previously used input file so that
the previous versions are not overwritten. In the event that a separate cell stack
needs a stack update operation performed prior to the minor update procedure, the

View File

@ -374,6 +374,10 @@ Add the following content into a parameter file for the cell, e.g. `cell1/cell1.
CellControllerCount: 3
ComputeCount: 0
# Compute names need to be unique, make sure to have a unique
# hostname format for cell nodes
ComputeHostnameFormat: 'cell1-compute-%index%'
# default gateway
ControlPlaneStaticRoutes:
- ip_netmask: 0.0.0.0/0
@ -693,9 +697,10 @@ as explained in :ref:`cell_host_discovery`.
Create and add the node to an Availability Zone
_______________________________________________
After a cell got provisioned, it is required to create an availability zone for the
compute stack; it is not enough to just create an availability zone for the complete
cell. In this use case we want to make sure an instance created in the compute group
stays in it when performing a migration. Check :ref:`cell_availability_zone` for more
information on how to create an availability zone and add the node.
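As in the basic guide, a minimal sketch of this step, assuming an example
`cell1-cmp` compute stack and a node named after the `ComputeHostnameFormat`
above:

.. code-block:: bash

source overcloudrc
openstack aggregate create --zone cell1-cmp cell1-cmp
openstack aggregate add host cell1-cmp cell1-compute-0
openstack availability zone list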
After that the cell is deployed and can be used.