Merge "Enhancement to cell v2 doc with split for stein/train"

This commit is contained in:
Zuul 2019-08-29 13:34:42 +00:00 committed by Gerrit Code Review
commit bb19cbc444
7 changed files with 2040 additions and 490 deletions


@ -2,23 +2,16 @@ Deploy an additional nova cell v2
=================================
.. warning::
Multi cell support is only supported in Stein and later versions.
.. contents::
:depth: 3
:backlinks: none
The different sections in this guide assume that you are ready to deploy a new
overcloud, or already have installed an overcloud (min Stein release).
.. note::
Starting with CentOS 8 and TripleO Stein release, podman is the CONTAINERCLI
to be used in the following steps.
Initial Deploy
--------------
The minimum requirement for having multiple cells is to have a central OpenStack
controller cluster running all controller services. Additional cells will
have cell controllers running the cell DB, cell MQ and a nova cell conductor
@ -28,483 +21,11 @@ service acts as a super conductor of the whole environment.
For more details on the cells v2 layout check `Cells Layout (v2)
<https://docs.openstack.org/nova/latest/user/cellsv2-layout.html>`_
.. note::
Right now the current implementation does not support running nova metadata
API per cell as explained in the cells v2 layout section `Local per cell
<https://docs.openstack.org/nova/latest/user/cellsv2-layout.html#nova-metadata-api-service>`_
The following example uses six nodes and the split control plane method to
simulate a distributed cell deployment. The first Heat stack deploys a controller
cluster and a compute. The second Heat stack deploys a cell controller and a
compute node::
openstack overcloud status
+-----------+---------------------+---------------------+-------------------+
| Plan Name | Created | Updated | Deployment Status |
+-----------+---------------------+---------------------+-------------------+
| overcloud | 2019-02-12 09:00:27 | 2019-02-12 09:00:27 | DEPLOY_SUCCESS |
+-----------+---------------------+---------------------+-------------------+
openstack server list -c Name -c Status -c Networks
+----------------------------+--------+------------------------+
| Name | Status | Networks |
+----------------------------+--------+------------------------+
| overcloud-controller-1 | ACTIVE | ctlplane=192.168.24.19 |
| overcloud-controller-2 | ACTIVE | ctlplane=192.168.24.11 |
| overcloud-controller-0 | ACTIVE | ctlplane=192.168.24.29 |
| overcloud-novacompute-0 | ACTIVE | ctlplane=192.168.24.15 |
+----------------------------+--------+------------------------+
The above deployed overcloud shows the nodes from the first stack. It should be
configured with Swift as the Glance backend, using the following parameter, so
that images are pulled by the remote cell compute node over HTTP::
parameter_defaults:
GlanceBackend: swift
.. note::
In this example the default cell and the additional cell use the same
network. When configuring another network scenario keep in mind that it
will be necessary for the systems to be able to communicate with each
other. E.g. the IPs for the default cell controller node will be in the
endpoint map that later will be extracted from the overcloud stack and
passed as a parameter to the second cell stack for it to access its
endpoints. In this example both cells share an L2 network. In a production
deployment it may be necessary to route between the networks instead.
Extract deployment information from the overcloud stack
-------------------------------------------------------
Any additional cell stack requires information from the overcloud Heat stack
where the central OpenStack services are located. The extracted parameters are
needed as input for additional cell stacks. To extract these parameters
into separate files in a directory (e.g. DIR=cell1) run the following::
source stackrc
mkdir cell1
export DIR=cell1
#. Export the default cell EndpointMap
.. code::
openstack stack output show overcloud EndpointMap --format json \
| jq '{"parameter_defaults": {"EndpointMapOverride": .output_value}}' \
> $DIR/endpoint-map.json
#. Export the default cell HostsEntry
.. code::
openstack stack output show overcloud HostsEntry -f json \
| jq -r '{"parameter_defaults":{"ExtraHostFileEntries": .output_value}}' \
> $DIR/extra-host-file-entries.json
#. Export AllNodesConfig and GlobalConfig information
In addition to the ``GlobalConfig``, which contains the RPC information (port,
ssl, scheme, user and password), additional information from the ``AllNodesConfig``
is required to point components to the default cell service instead of the
service served by the cell controller. These are
* ``oslo_messaging_notify_short_bootstrap_node_name`` - default cell overcloud
messaging notify bootstrap node information
* ``oslo_messaging_notify_node_names`` - default cell overcloud messaging notify
node information
* ``oslo_messaging_rpc_node_names`` - default cell overcloud messaging rpc node
information as e.g. neutron agent needs to point to the overcloud messaging
cluster
* ``memcached_node_ips`` - memcached node information used by the cell services.
.. code::
(openstack stack output show overcloud AllNodesConfig --format json \
| jq '.output_value | {oslo_messaging_notify_short_bootstrap_node_name: \
.oslo_messaging_notify_short_bootstrap_node_name, \
oslo_messaging_notify_node_names: .oslo_messaging_notify_node_names, \
oslo_messaging_rpc_node_names: .oslo_messaging_rpc_node_names, \
memcached_node_ips: .memcached_node_ips}'; \
openstack stack output show overcloud GlobalConfig --format json \
| jq '.output_value') |jq -s '.[0] * .[1]| {"parameter_defaults": \
{"AllNodesExtraMapData": .}}' > $DIR/all-nodes-extra-map-data.json
An example of an ``all-nodes-extra-map-data.json`` file::
{
"parameter_defaults": {
"AllNodesExtraMapData": {
"oslo_messaging_notify_short_bootstrap_node_name": "overcloud-controller-0",
"oslo_messaging_notify_node_names": [
"overcloud-controller-0.internalapi.site1.test",
"overcloud-controller-1.internalapi.site1.test",
"overcloud-controller-2.internalapi.site1.test"
],
"oslo_messaging_rpc_node_names": [
"overcloud-controller-0.internalapi.site1.test",
"overcloud-controller-1.internalapi.site1.test",
"overcloud-controller-2.internalapi.site1.test"
],
"memcached_node_ips": [
"172.16.2.232",
"172.16.2.29",
"172.16.2.49"
],
"oslo_messaging_rpc_port": 5672,
"oslo_messaging_rpc_use_ssl": "False",
"oslo_messaging_notify_scheme": "rabbit",
"oslo_messaging_notify_use_ssl": "False",
"oslo_messaging_rpc_scheme": "rabbit",
"oslo_messaging_rpc_password": "7l4lfamjPp6nqJgBMqb1YyM2I",
"oslo_messaging_notify_password": "7l4lfamjPp6nqJgBMqb1YyM2I",
"oslo_messaging_rpc_user_name": "guest",
"oslo_messaging_notify_port": 5672,
"oslo_messaging_notify_user_name": "guest"
}
}
}
#. Export passwords
.. code::
openstack object save --file - overcloud plan-environment.yaml \
| python -c 'import yaml as y, sys as s; \
s.stdout.write(y.dump({"parameter_defaults": \
y.load(s.stdin.read())["passwords"]}));' > $DIR/passwords.yaml
The same passwords are used for the cell services.
#. Create roles file for cell stack
.. code::
openstack overcloud roles generate --roles-path \
/usr/share/openstack-tripleo-heat-templates/roles \
-o $DIR/cell_roles_data.yaml Compute CellController
.. note::
In case a different default heat stack name or compute role name is used,
modify the above commands.
#. Create cell parameter file for additional customization (e.g. cell1/cell1.yaml)
Add the following content into a parameter file for the cell, e.g. ``cell1/cell1.yaml``::
resource_registry:
# since the same network is used, the creation of the
# different kind of networks is omitted for additional
# cells
OS::TripleO::Network::External: OS::Heat::None
OS::TripleO::Network::InternalApi: OS::Heat::None
OS::TripleO::Network::Storage: OS::Heat::None
OS::TripleO::Network::StorageMgmt: OS::Heat::None
OS::TripleO::Network::Tenant: OS::Heat::None
OS::TripleO::Network::Management: OS::Heat::None
parameter_defaults:
# new CELL Parameter to reflect that this is an additional CELL
NovaAdditionalCell: True
# In case of a tls-everywhere environment the CloudName*
# parameters need to be set for the cell, as connections to
# endpoints like the MySQL and MQ endpoints are done via DNS names.
# This is optional for non tls-everywhere environments.
#CloudName: computecell1.ooo.test
#CloudNameInternal: computecell1.internalapi.ooo.test
#CloudNameStorage: computecell1.storage.ooo.test
#CloudNameStorageManagement: computecell1.storagemgmt.ooo.test
#CloudNameCtlplane: computecell1.ctlplane.ooo.test
# CloudDomain is the same as in the default cell.
#CloudDomain: ooo.test
# Flavors used for the cell controller and computes
OvercloudControllerFlavor: cellcontroller
OvercloudComputeFlavor: compute
# number of controllers/computes in the cell
ControllerCount: 1
ComputeCount: 1
CephStorageCount: 0
# default gateway
ControlPlaneStaticRoutes:
- ip_netmask: 0.0.0.0/0
next_hop: 192.168.24.1
default: true
Debug: true
DnsServers:
- x.x.x.x
The above file disables creating networks, as the same networks created by the
overcloud stack are used. It also specifies that this will be an additional cell
using the parameter `NovaAdditionalCell`.
Deploy the cell
---------------
#. Create new flavor used to tag the cell controller
.. code::
openstack flavor create --id auto --ram 4096 --disk 40 --vcpus 1 cellcontroller
openstack flavor set --property "cpu_arch"="x86_64" \
--property "capabilities:boot_option"="local" \
--property "capabilities:profile"="cellcontroller" \
--property "resources:CUSTOM_BAREMETAL=1" \
--property "resources:DISK_GB=0" \
--property "resources:MEMORY_MB=0" \
--property "resources:VCPU=0" \
cellcontroller
The properties need to be modified to the needs of the environment.
#. Tag the node into the new flavor using the following command
.. code::
openstack baremetal node set --property \
capabilities='profile:cellcontroller,boot_option:local' <node id>
Verify the tagged cellcontroller::
openstack overcloud profiles list
#. Deploy the cell
To deploy the cell we can use the same ``overcloud deploy`` command as was
used to deploy the ``overcloud`` stack and add the created export
environment files::
openstack overcloud deploy --override-ansible-cfg \
/home/stack/custom_ansible.cfg \
--stack computecell1 \
--templates /usr/share/openstack-tripleo-heat-templates \
-e /usr/share/openstack-tripleo-heat-templates/environments/docker-ha.yaml \
-e ... additional environment files used for overcloud stack, like container
prepare parameters, or other specific parameters for the cell
...
-r $HOME/$DIR/cell_roles_data.yaml \
-e $HOME/$DIR/passwords.yaml \
-e $HOME/$DIR/endpoint-map.json \
-e $HOME/$DIR/all-nodes-extra-map-data.json \
-e $HOME/$DIR/extra-host-file-entries.json \
-e $HOME/$DIR/cell1.yaml
Wait for the deployment to finish::
openstack stack list
+--------------------------------------+--------------+----------------------------------+-----------------+----------------------+----------------------+
| ID | Stack Name | Project | Stack Status | Creation Time | Updated Time |
+--------------------------------------+--------------+----------------------------------+-----------------+----------------------+----------------------+
| 890e4764-1606-4dab-9c2f-6ed853e3fed8 | computecell1 | 2b303a97f4664a69ba2dbcfd723e76a4 | CREATE_COMPLETE | 2019-02-12T08:35:32Z | None |
| 09531653-1074-4568-b50a-48a7b3cc15a6 | overcloud | 2b303a97f4664a69ba2dbcfd723e76a4 | UPDATE_COMPLETE | 2019-02-09T09:52:56Z | 2019-02-11T08:33:37Z |
+--------------------------------------+--------------+----------------------------------+-----------------+----------------------+----------------------+
Create the cell and discover compute nodes
------------------------------------------
#. Add cell information to overcloud controllers
On all central controllers add information on how to reach the messaging cell
controller endpoint (usually internalapi) to ``/etc/hosts``, from the undercloud::
API_INFO=$(ssh heat-admin@<cell controller ip> grep cellcontrol-0.internalapi /etc/hosts)
ansible -i /usr/bin/tripleo-ansible-inventory Controller -b \
-m lineinfile -a "dest=/etc/hosts line=\"$API_INFO\""
.. note::
Do this outside the ``HEAT_HOSTS_START`` .. ``HEAT_HOSTS_END`` block, or
add it to an `ExtraHostFileEntries` section of an environment file for the
central overcloud controller. Add the environment file to the next
`overcloud deploy` run.
#. Extract transport_url and database connection
Get the ``transport_url`` and database ``connection`` endpoint information
from the cell controller. This information is used to create the cell in the
next step::
ssh heat-admin@<cell controller ip> sudo crudini --get \
/var/lib/config-data/nova/etc/nova/nova.conf DEFAULT transport_url
ssh heat-admin@<cell controller ip> sudo crudini --get \
/var/lib/config-data/nova/etc/nova/nova.conf database connection
#. Create the cell
Log in to one of the central controllers and create the cell, referencing the
IP of the cell controller in the ``database_connection`` and the
``transport_url`` extracted in the previous step, like::
ssh heat-admin@<ctlplane ip overcloud-controller-0>
# CONTAINERCLI can be either docker or podman
export CONTAINERCLI='docker'
sudo $CONTAINERCLI exec -it -u root nova_api /bin/bash
nova-manage cell_v2 create_cell --name computecell1 \
--database_connection \
'{scheme}://{username}:{password}@172.16.2.102/nova?{query}' \
--transport-url \
'rabbit://guest:7l4lfamjPp6nqJgBMqb1YyM2I@computecell1-cellcontrol-0.internalapi.cell1.test:5672/?ssl=0'
.. note::
Templated transport URLs for cells can be used if the default cell and the
additional cell have the same number of controllers.
.. code::
nova-manage cell_v2 list_cells --verbose
After the cell has been created, the nova services on all central controllers
need to be restarted.
Docker::
ansible -i /usr/bin/tripleo-ansible-inventory Controller -b -a \
"docker restart nova_api nova_scheduler nova_conductor"
Podman::
ansible -i /usr/bin/tripleo-ansible-inventory Controller -b -a \
"systemctl restart tripleo_nova_api tripleo_nova_conductor tripleo_nova_scheduler"
#. Perform cell host discovery
Log in to one of the overcloud controllers and run the cell host discovery::
ssh heat-admin@<ctlplane ip overcloud-controller-0>
# CONTAINERCLI can be either docker or podman
export CONTAINERCLI='docker'
sudo $CONTAINERCLI exec -it -u root nova_api /bin/bash
nova-manage cell_v2 discover_hosts --by-service --verbose
nova-manage cell_v2 list_hosts
+--------------+--------------------------------------+---------------------------------------+
|  Cell Name   |              Cell UUID               |                Hostname               |
+--------------+--------------------------------------+---------------------------------------+
| computecell1 | 97bb4ee9-7fe9-4ec7-af0d-72b8ef843e3e | computecell1-novacompute-0.site1.test |
| default | f012b67d-de96-471d-a44f-74e4a6783bca | overcloud-novacompute-0.site1.test |
+--------------+--------------------------------------+---------------------------------------+
The cell is now deployed and can be used.
Managing the cell
-----------------
Add a compute to a cell
~~~~~~~~~~~~~~~~~~~~~~~
To increase resource capacity of a running cell, you can start more servers of
a selected role. For more details on how to add nodes see :doc:`../post_deployment/scale_roles`.
After the node has been deployed, log in to one of the overcloud controllers and
run the cell host discovery::
ssh heat-admin@<ctlplane ip overcloud-controller-0>
# CONTAINERCLI can be either docker or podman
export CONTAINERCLI='docker'
sudo $CONTAINERCLI exec -it -u root nova_api /bin/bash
nova-manage cell_v2 discover_hosts --by-service --verbose
nova-manage cell_v2 list_hosts
Delete a compute from a cell
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
As an initial step, migrate all instances off the compute.
#. From one of the overcloud controllers, delete the computes from the cell::
ssh heat-admin@<ctlplane ip overcloud-controller-0>
# CONTAINERCLI can be either docker or podman
export CONTAINERCLI='docker'
sudo $CONTAINERCLI exec -it -u root nova_api /bin/bash
nova-manage cell_v2 delete_host --cell_uuid <uuid> --host <compute>
#. Delete the resource providers from placement
This step is required because otherwise adding a compute node with the same
hostname will fail to register and update its resources with the placement
service::
sudo yum install python2-osc-placement
openstack resource provider list
+--------------------------------------+---------------------------------------+------------+
| uuid | name | generation |
+--------------------------------------+---------------------------------------+------------+
| 9cd04a8b-5e6c-428e-a643-397c9bebcc16 | computecell1-novacompute-0.site1.test | 11 |
+--------------------------------------+---------------------------------------+------------+
openstack resource provider delete 9cd04a8b-5e6c-428e-a643-397c9bebcc16
#. Delete the node from the cell stack
See :doc:`../post_deployment/delete_nodes`.
Deleting a cell
~~~~~~~~~~~~~~~
As an initial step, delete all instances from the cell.
#. From one of the overcloud controllers, delete all computes from the cell::
ssh heat-admin@<ctlplane ip overcloud-controller-0>
# CONTAINERCLI can be either docker or podman
export CONTAINERCLI='docker'
sudo $CONTAINERCLI exec -it -u root nova_api /bin/bash
nova-manage cell_v2 delete_host --cell_uuid <uuid> --host <compute>
#. On the cell controller, archive all deleted instances from the database::
ssh heat-admin@<ctlplane ip cell controller>
# CONTAINERCLI can be either docker or podman
export CONTAINERCLI='docker'
sudo $CONTAINERCLI exec -it -u root nova_conductor /bin/bash
nova-manage db archive_deleted_rows --verbose
#. From one of the overcloud controllers, delete the cell::
ssh heat-admin@<ctlplane ip overcloud-controller-0>
# CONTAINERCLI can be either docker or podman
export CONTAINERCLI='docker'
sudo $CONTAINERCLI exec -it -u root nova_api /bin/bash
nova-manage cell_v2 delete_cell --cell_uuid <uuid>
#. From a system which can reach the placement endpoint, delete the resource providers from placement
This step is required because otherwise adding a compute node with the same
hostname will fail to register and update its resources with the placement
service::
sudo yum install python2-osc-placement
openstack resource provider list
+--------------------------------------+---------------------------------------+------------+
| uuid | name | generation |
+--------------------------------------+---------------------------------------+------------+
| 9cd04a8b-5e6c-428e-a643-397c9bebcc16 | computecell1-novacompute-0.site1.test | 11 |
+--------------------------------------+---------------------------------------+------------+
openstack resource provider delete 9cd04a8b-5e6c-428e-a643-397c9bebcc16
#. Delete the cell stack::
openstack stack delete computecell1 --wait --yes && openstack overcloud plan delete computecell1
.. toctree::
deploy_cellv2_stein.rst
deploy_cellv2_basic.rst
deploy_cellv2_advanced.rst
deploy_cellv2_routed.rst
deploy_cellv2_additional.rst
deploy_cellv2_manage_cell.rst


@ -0,0 +1,160 @@
Additional cell considerations and features
===========================================
.. warning::
Multi cell support is only supported in Stein or later versions.
.. contents::
:depth: 3
:backlinks: none
.. _cell_availability_zone:
Availability Zones (AZ)
-----------------------
A nova AZ must be configured for each cell to make sure instances stay in the
cell when performing migration and to be able to target a cell when an instance
gets created. The central cell must also be configured with a specific AZ
(or multiple AZs) rather than the default.
Configuring AZs for Nova (compute)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The Nova AZ configuration for compute nodes in the stack can be set with the
`NovaComputeAvailabilityZone` parameter during the deployment.
The value of the parameter is the name of the AZ where compute nodes in that
stack will be added.
For example, the following environment file would be used to add compute nodes
in the `cell1` stack to the `cell1` AZ:
.. code-block:: yaml
parameter_defaults:
NovaComputeAvailabilityZone: cell1
It's also possible to configure the AZ for a compute node by adding it to a
host aggregate after the deployment is completed. The following commands show
creating a host aggregate, an associated AZ, and adding compute nodes to the
`cell1` AZ:
.. code-block:: bash
source overcloudrc
openstack aggregate create cell1 --zone cell1
openstack aggregate add host cell1 hostA
openstack aggregate add host cell1 hostB
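A quick way to verify the zone assignment afterwards is sketched below;
`hostA` and `hostB` are the placeholder host names from above.
.. code-block:: bash
source overcloudrc
# list the compute availability zones and inspect the cell1 aggregate
openstack availability zone list --compute
openstack aggregate show cell1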
Routed networks
---------------
A routed spine and leaf networking layout can be used to deploy the additional
cell nodes in a distributed nature. Not all nodes need to be co-located at the
same physical location or datacenter. See :ref:`routed_spine_leaf_network` for
more details.
Reusing networks from an already deployed stack
-----------------------------------------------
When deploying separate stacks it may be necessary to reuse networks, subnets,
and VIP resources between stacks if desired. Only a single Heat stack can own a
resource and be responsible for its creation and deletion, however the
resources can be reused in other stacks.
Usually, in case of split cell controller and cell compute stacks, the
internal api network is shared.
To reuse network related resources between stacks, the following parameters
have been added to the network definitions in the `network_data.yaml` file
format:
.. code-block:: yaml
external_resource_network_id: Existing Network UUID
external_resource_subnet_id: Existing Subnet UUID
external_resource_segment_id: Existing Segment UUID
external_resource_vip_id: Existing VIP UUID
These parameters can be set on each network definition in the
`network_data.yaml` file used for the deployment of the separate stack.
Not all networks need to be reused or shared across stacks. The
`external_resource_*` parameters can be set for only the networks that are
meant to be shared, while the other networks can be newly created and managed.
For example, to reuse the `internal_api` network from the cell controller stack
in the compute stack, run the following commands to show the UUIDs for the
related network resources:
.. code-block:: bash
openstack network show internal_api -c id -f value
openstack subnet show internal_api_subnet -c id -f value
openstack port show internal_api_virtual_ip -c id -f value
Save the values shown in the output of the above commands and add them to the
network definition for the `internal_api` network in the `network_data.yaml`
file for the separate stack.
In case the overcloud and the cell controller stack use the same internal
api network, there are two ports with the name `internal_api_virtual_ip`.
In this case it is required to identify the correct port and use its id
instead of the name in the `openstack port show` command.
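A minimal sketch for finding the right port in that situation (the grep filter
and selected columns are assumptions based on the names used above):
.. code-block:: bash
# list all ports together with their fixed IPs and pick the VIP that
# belongs to the wanted stack, then query its id
openstack port list -c ID -c Name -c "Fixed IP Addresses" | grep internal_api_virtual_ip
openstack port show <port id> -c id -f value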
An example network definition would look like:
.. code-block:: yaml
- name: InternalApi
external_resource_network_id: 93861871-7814-4dbc-9e6c-7f51496b43af
external_resource_subnet_id: c85c8670-51c1-4b17-a580-1cfb4344de27
external_resource_vip_id: 8bb9d96f-72bf-4964-a05c-5d3fed203eb7
name_lower: internal_api
vip: true
ip_subnet: '172.16.2.0/24'
allocation_pools: [{'start': '172.16.2.4', 'end': '172.16.2.250'}]
ipv6_subnet: 'fd00:fd00:fd00:2000::/64'
ipv6_allocation_pools: [{'start': 'fd00:fd00:fd00:2000::10', 'end': 'fd00:fd00:fd00:2000:ffff:ffff:ffff:fffe'}]
mtu: 1400
.. note::
When *not* sharing networks between stacks, each network defined in
`network_data.yaml` must have a unique name across all deployed stacks.
This requirement is necessary since regardless of the stack, all networks are
created in the same tenant in Neutron on the undercloud.
For example, the network name `internal_api` can't be reused between
stacks, unless the intent is to share the network between the stacks.
The network would need to be given a different `name` and `name_lower`
property such as `InternalApiCompute0` and `internal_api_compute_0`.
Configuring nova-metadata API per-cell
--------------------------------------
.. note::
Deploying nova-metadata API per-cell is only supported in Train
and later.
.. note::
NovaLocalMetadataPerCell is only tested with the ovn metadata agent, which
automatically forwards requests to the nova metadata api.
It is possible to configure the nova-metadata API service locally per cell.
In this situation the cell controllers also host the nova-metadata API
service. The `NovaLocalMetadataPerCell` parameter, which defaults to
`false`, needs to be set to `true`.
Running the nova-metadata API service per cell can provide better performance
and data isolation in a multi-cell deployment. Whether to use this
configuration depends on how neutron is set up. If networks span
cells, you might need to run the nova-metadata API service centrally.
If your networks are segmented along cell boundaries, then you can
run the nova-metadata API service per cell.
.. code-block:: yaml
parameter_defaults:
NovaLocalMetadataPerCell: True
See also information on running nova-metadata API per cell as explained
in the cells v2 layout section `Local per cell <https://docs.openstack.org/nova/latest/user/cellsv2-layout.html#nova-metadata-api-service>`_


@ -0,0 +1,241 @@
Example 2. - Split Cell controller/compute Architecture in Train release
========================================================================
.. warning::
Multi cell support is only supported in Stein or later versions.
This guide addresses Train release and later!
.. contents::
:depth: 3
:backlinks: none
This guide assumes that you are ready to deploy a new overcloud, or have
already installed an overcloud (min Train release).
.. note::
Starting with CentOS 8 and TripleO Stein release, podman is the CONTAINERCLI
to be used in the following steps.
.. _advanced_cell_arch:
In this scenario the cell computes get split off into their own stack, e.g. to
manage the computes of each edge site in a separate stack.
This section only explains the differences from :doc:`deploy_cellv2_basic`.
Like before, the following example uses six nodes and the split control plane method
to deploy a distributed cell deployment. The first Heat stack deploys the controller
cluster. The second Heat stack deploys the cell controller. The computes will then
again be split off into their own stack.
.. _cell_export_cell_controller_info:
Extract deployment information from the overcloud stack
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
As in :ref:`cell_export_overcloud_info`, information from the control
plane stack needs to be exported:
.. code-block:: bash
source stackrc
mkdir cell1
export DIR=cell1
openstack overcloud cell export cell1-ctrl -o cell1/cell1-ctrl-input.yaml
Create roles file for the cell stack
____________________________________
The same roles get exported as in :ref:`cell_create_roles_file`.
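For reference, this is the same command as used there, with the `$DIR` set above:
.. code-block:: bash
openstack overcloud roles generate --roles-path \
/usr/share/openstack-tripleo-heat-templates/roles \
-o $DIR/cell_roles_data.yaml Compute CellController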
Create cell parameter file for additional customization (e.g. cell1/cell1.yaml)
_______________________________________________________________________________
The cell parameter file remains the same as in :ref:`cell_parameter_file`, with
the only difference that `ComputeCount` gets set to 0. This is required as
the roles file contains both the `CellController` and `Compute` roles and the
default count for the `Compute` role is 1 (e.g. `cell1/cell1.yaml`):
.. code-block:: yaml
parameter_defaults:
...
# number of controllers/computes in the cell
CellControllerCount: 1
ComputeCount: 0
...
Create the network configuration for `cellcontroller` and add to environment file
_________________________________________________________________________________
Depending on the network configuration of the used hardware and network
architecture it is required to register a resource for the `CellController`
role.
.. code-block:: yaml
resource_registry:
OS::TripleO::CellController::Net::SoftwareConfig: single-nic-vlans/controller.yaml
.. note::
For details on network configuration consult :ref:`network_isolation` guide, chapter *Customizing the Interface Templates*.
Deploy the cell
^^^^^^^^^^^^^^^
Create new flavor used to tag the cell controller
_________________________________________________
Follow the instructions in :ref:`cell_create_flavor_and_tag` on how to create
a new flavor and tag the cell controller.
Run cell deployment
___________________
To deploy the cell controller stack we use the same `overcloud deploy`
command as was used to deploy the `overcloud` stack and add the created
export environment files:
.. code-block:: bash
openstack overcloud deploy \
--templates /usr/share/openstack-tripleo-heat-templates \
-e ... additional environment files used for overcloud stack, like container
prepare parameters, or other specific parameters for the cell
...
    --stack cell1-ctrl \
    -r $HOME/$DIR/cell_roles_data.yaml \
    -e $HOME/$DIR/cell1-ctrl-input.yaml \
    -e $HOME/$DIR/cell1.yaml
Wait for the deployment to finish:
.. code-block:: bash
openstack stack list
+--------------------------------------+--------------+----------------------------------+-----------------+----------------------+----------------------+
| ID | Stack Name | Project | Stack Status | Creation Time | Updated Time |
+--------------------------------------+--------------+----------------------------------+-----------------+----------------------+----------------------+
| 890e4764-1606-4dab-9c2f-6ed853e3fed8 | cell1-ctrl | 2b303a97f4664a69ba2dbcfd723e76a4 | CREATE_COMPLETE | 2019-02-12T08:35:32Z | None |
| 09531653-1074-4568-b50a-48a7b3cc15a6 | overcloud | 2b303a97f4664a69ba2dbcfd723e76a4 | UPDATE_COMPLETE | 2019-02-09T09:52:56Z | 2019-02-11T08:33:37Z |
+--------------------------------------+--------------+----------------------------------+-----------------+----------------------+----------------------+
Create the cell
^^^^^^^^^^^^^^^
As in :ref:`cell_create_cell`, create the cell, but we can skip the final host
discovery step as the computes are not yet deployed.
Extract deployment information from the cell controller stack
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The cell compute stack again requires input information from both the control
plane stack (`overcloud`) and the cell controller stack (`cell1-ctrl`):
.. code-block:: bash
source stackrc
export DIR=cell1
Export EndpointMap, HostsEntry, AllNodesConfig, GlobalConfig and passwords information
______________________________________________________________________________________
As before the `openstack overcloud cell export` functionality of the tripleo-client
is used to export the required data from the cell controller stack.
.. code-block:: bash
openstack overcloud cell export cell1-cmp -o cell1/cell1-cmp-input.yaml -e cell1-ctrl
`cell1-cmp` is the chosen name for the new compute stack. This parameter is used to
set the default export file name, which is then stored in the current directory.
In this case a dedicated export file was set via `-o`.
In addition it is required to use the `--cell-stack <cell stack>` or `-e <cell stack>`
parameter to point the export command to the cell controller stack and indicate
that this is a compute child stack. This is required as the input information for
the cell controller and cell compute stack is not the same.
.. note::
If the export file already exists it can be forced to be overwritten using
`--force-overwrite` or `-f`.
.. note::
The services from the cell stacks use the same passwords as the
control plane services.
Create cell compute parameter file for additional customization
_______________________________________________________________
A new parameter file is used to overwrite or customize settings which differ
from the cell controller stack. Add the following content into
a parameter file for the cell compute stack, e.g. `cell1/cell1-cmp.yaml`:
.. code-block:: yaml
parameter_defaults:
# number of controllers/computes in the cell
CellControllerCount: 0
ComputeCount: 1
The above file overwrites the values from `cell1/cell1.yaml` so that no
controller is deployed in the cell compute stack. Since the cell compute stack
uses the same roles file, the default `CellControllerCount` is 1.
If there are other differences, like network config, parameters, ... for
the computes, add them here.
Deploy the cell computes
^^^^^^^^^^^^^^^^^^^^^^^^
Run cell deployment
___________________
To deploy the cell compute stack we use the same `overcloud deploy` command as
was used to deploy the `cell1-ctrl` stack and add the created export
environment files:
.. code-block:: bash
openstack overcloud deploy \
--templates /usr/share/openstack-tripleo-heat-templates \
-e ... additional environment files used for overcloud stack, like container
prepare parameters, or other specific parameters for the cell
...
    --stack cell1-cmp \
-n $HOME/$DIR/cell1-cmp/network_data.yaml \
    -r $HOME/$DIR/cell_roles_data.yaml \
    -e $HOME/$DIR/cell1-ctrl-input.yaml \
    -e $HOME/$DIR/cell1-cmp-input.yaml \
    -e $HOME/$DIR/cell1.yaml \
    -e $HOME/$DIR/cell1-cmp.yaml
Wait for the deployment to finish:
.. code-block:: bash
openstack stack list
+--------------------------------------+--------------+----------------------------------+-----------------+----------------------+----------------------+
| ID | Stack Name | Project | Stack Status | Creation Time | Updated Time |
+--------------------------------------+--------------+----------------------------------+-----------------+----------------------+----------------------+
| 790e4764-2345-4dab-7c2f-7ed853e7e778 | cell1-cmp | 2b303a97f4664a69ba2dbcfd723e76a4 | CREATE_COMPLETE | 2019-02-12T08:35:32Z | None |
| 890e4764-1606-4dab-9c2f-6ed853e3fed8 | cell1-ctrl | 2b303a97f4664a69ba2dbcfd723e76a4 | CREATE_COMPLETE | 2019-02-12T08:35:32Z | None |
| 09531653-1074-4568-b50a-48a7b3cc15a6 | overcloud | 2b303a97f4664a69ba2dbcfd723e76a4 | UPDATE_COMPLETE | 2019-02-09T09:52:56Z | 2019-02-11T08:33:37Z |
+--------------------------------------+--------------+----------------------------------+-----------------+----------------------+----------------------+
Perform cell host discovery
___________________________
The final step is to discover the computes deployed in the cell. Run the host discovery
as explained in :ref:`cell_host_discovery`.
Create and add the node to an Availability Zone
_______________________________________________
After a cell has been provisioned, it is required to create an availability zone for the
cell to make sure an instance created in the cell stays in the cell when performing
a migration. Check :ref:`cell_availability_zone` for more information on how to create an
availability zone and add the node.
After that the cell is deployed and can be used.
.. note::
Migrating instances between cells is not supported. To move an instance to
a different cell it needs to be re-created in the new target cell.


@ -0,0 +1,363 @@
Example 1. - Basic Cell Architecture in Train release
=====================================================
.. warning::
Multi cell support is only supported in Stein or later versions.
This guide addresses Train release and later!
.. contents::
:depth: 3
:backlinks: none
This guide assumes that you are ready to deploy a new overcloud, or have
already installed an overcloud (min Train release).
.. note::
Starting with CentOS 8 and TripleO Stein release, podman is the CONTAINERCLI
to be used in the following steps.
.. _basic_cell_arch:
The following example uses six nodes and the split control plane method to
deploy a distributed cell deployment. The first Heat stack deploys a controller
cluster and a compute. The second Heat stack deploys a cell controller and a
compute node:
.. code-block:: bash
openstack overcloud status
+-----------+---------------------+---------------------+-------------------+
| Plan Name | Created | Updated | Deployment Status |
+-----------+---------------------+---------------------+-------------------+
| overcloud | 2019-02-12 09:00:27 | 2019-02-12 09:00:27 | DEPLOY_SUCCESS |
+-----------+---------------------+---------------------+-------------------+
openstack server list -c Name -c Status -c Networks
+----------------------------+--------+------------------------+
| Name | Status | Networks |
+----------------------------+--------+------------------------+
| overcloud-controller-1 | ACTIVE | ctlplane=192.168.24.19 |
| overcloud-controller-2 | ACTIVE | ctlplane=192.168.24.11 |
| overcloud-controller-0 | ACTIVE | ctlplane=192.168.24.29 |
| overcloud-novacompute-0 | ACTIVE | ctlplane=192.168.24.15 |
+----------------------------+--------+------------------------+
The above deployed overcloud shows the nodes from the first stack.
.. note::
In this example the default cell and the additional cell use the
same network. When configuring another network scenario keep in
mind that it will be necessary for the systems to be able to
communicate with each other.
Extract deployment information from the overcloud stack
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Any additional cell stack requires information from the overcloud Heat stack
where the central OpenStack services are located. The extracted parameters are
needed as input for additional cell stacks. To extract these parameters
into separate files in a directory (e.g. DIR=cell1) run the following:
.. code-block:: bash
source stackrc
mkdir cell1
export DIR=cell1
.. _cell_export_overcloud_info:
Export EndpointMap, HostsEntry, AllNodesConfig, GlobalConfig and passwords information
______________________________________________________________________________________
The tripleo-client in Train provides an `openstack overcloud cell export`
functionality to export the required data from the control plane stack, which
is then used as an environment file passed to the cell stack.
.. code-block:: bash
openstack overcloud cell export cell1 -o cell1/cell1-cell-input.yaml
`cell1` is the chosen name for the new cell. This parameter is used to
set the default export file name, which is then stored in the current
directory.
In this case a dedicated export file was set via `-o`.
.. note::
If the export file already exists it can be forced to be overwritten using
`--force-overwrite` or `-f`.
.. note::
The services from the cell stacks use the same passwords as the
control plane services.
.. _cell_create_roles_file:
Create roles file for the cell stack
____________________________________
Different roles are provided within tripleo-heat-templates, depending on
the configuration and desired services to be deployed.
The default compute role at roles/Compute.yaml can be used for cell computes
if that is sufficient for the use case.
A dedicated role, `roles/CellController.yaml`, is provided. This role includes
the necessary services for the cell controller; the main services are the
galera database, rabbitmq, nova-conductor, the nova novnc proxy and nova metadata
in case `NovaLocalMetadataPerCell` is enabled.
Create the roles file for the cell:
.. code-block:: bash
openstack overcloud roles generate --roles-path \
/usr/share/openstack-tripleo-heat-templates/roles \
-o $DIR/cell_roles_data.yaml Compute CellController
.. _cell_parameter_file:
Create cell parameter file for additional customization (e.g. cell1/cell1.yaml)
_______________________________________________________________________________
Each cell has some mandatory parameters which need to be set using an
environment file.
Add the following content into a parameter file for the cell, e.g. `cell1/cell1.yaml`:
.. code-block:: yaml
resource_registry:
# since the same networks are used in this example, the
# creation of the different networks is omitted
OS::TripleO::Network::External: OS::Heat::None
OS::TripleO::Network::InternalApi: OS::Heat::None
OS::TripleO::Network::Storage: OS::Heat::None
OS::TripleO::Network::StorageMgmt: OS::Heat::None
OS::TripleO::Network::Tenant: OS::Heat::None
OS::TripleO::Network::Management: OS::Heat::None
parameter_defaults:
# new CELL Parameter to reflect that this is an additional CELL
NovaAdditionalCell: True
# The DNS names for the VIPs for the cell
CloudName: cell1.ooo.test
CloudNameInternal: cell1.internalapi.ooo.test
CloudNameStorage: cell1.storage.ooo.test
CloudNameStorageManagement: cell1.storagemgmt.ooo.test
CloudNameCtlplane: cell1.ctlplane.ooo.test
# Flavors used for the cell controller and computes
OvercloudCellControllerFlavor: cellcontroller
OvercloudComputeFlavor: compute
# number of controllers/computes in the cell
CellControllerCount: 1
ComputeCount: 1
# default gateway
ControlPlaneStaticRoutes:
- ip_netmask: 0.0.0.0/0
next_hop: 192.168.24.1
default: true
DnsServers:
- x.x.x.x
The above file disables creating networks as the networks from the overcloud stack
are reused. It also specifies that this will be an additional cell using parameter
`NovaAdditionalCell`.
Create the network configuration for `cellcontroller` and add to environment file
_________________________________________________________________________________
Depending on the network configuration of the used hardware and network
architecture it is required to register a resource for the `CellController`
role.
.. code-block:: yaml
resource_registry:
OS::TripleO::CellController::Net::SoftwareConfig: single-nic-vlans/controller.yaml
OS::TripleO::Compute::Net::SoftwareConfig: single-nic-vlans/compute.yaml
.. note::
This example just reuses the existing network configs as it is a shared L2
network. For details on network configuration consult the :ref:`network_isolation` guide,
chapter *Customizing the Interface Templates*.
Deploy the cell
^^^^^^^^^^^^^^^
.. _cell_create_flavor_and_tag:
Create new flavor used to tag the cell controller
_________________________________________________
Depending on the hardware, create a flavor and tag the node to be used.
.. code-block:: bash
openstack flavor create --id auto --ram 4096 --disk 40 --vcpus 1 cellcontroller
openstack flavor set --property "cpu_arch"="x86_64" \
--property "capabilities:boot_option"="local" \
--property "capabilities:profile"="cellcontroller" \
--property "resources:CUSTOM_BAREMETAL=1" \
--property "resources:DISK_GB=0" \
--property "resources:MEMORY_MB=0" \
--property "resources:VCPU=0" \
cellcontroller
The properties need to be adjusted to the needs of the environment.
Tag the node into the new flavor using the following command:
.. code-block:: bash
openstack baremetal node set --property \
capabilities='profile:cellcontroller,boot_option:local' <node id>
Verify the tagged cellcontroller:
.. code-block:: bash
openstack overcloud profiles list
Run cell deployment
___________________
To deploy the cell we use the same `overcloud deploy` command as was used
to deploy the `overcloud` stack and add the created export
environment files:
.. code-block:: bash
openstack overcloud deploy \
--templates /usr/share/openstack-tripleo-heat-templates \
-e ... additional environment files used for overcloud stack, like container
prepare parameters, or other specific parameters for the cell
...
    --stack cell1 \
    -r $HOME/$DIR/cell_roles_data.yaml \
    -e $HOME/$DIR/cell1-cell-input.yaml \
    -e $HOME/$DIR/cell1.yaml
Wait for the deployment to finish:
.. code-block:: bash
openstack stack list
+--------------------------------------+--------------+----------------------------------+-----------------+----------------------+----------------------+
| ID | Stack Name | Project | Stack Status | Creation Time | Updated Time |
+--------------------------------------+--------------+----------------------------------+-----------------+----------------------+----------------------+
| 890e4764-1606-4dab-9c2f-6ed853e3fed8 | cell1 | 2b303a97f4664a69ba2dbcfd723e76a4 | CREATE_COMPLETE | 2019-02-12T08:35:32Z | None |
| 09531653-1074-4568-b50a-48a7b3cc15a6 | overcloud | 2b303a97f4664a69ba2dbcfd723e76a4 | UPDATE_COMPLETE | 2019-02-09T09:52:56Z | 2019-02-11T08:33:37Z |
+--------------------------------------+--------------+----------------------------------+-----------------+----------------------+----------------------+
.. _cell_create_cell:
Create the cell and discover compute nodes
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Get control plane and cell controller IPs:
.. code-block:: bash
CTRL_IP=$(openstack server list -f value -c Networks --name overcloud-controller-0 | sed 's/ctlplane=//')
CELL_CTRL_IP=$(openstack server list -f value -c Networks --name cellcontrol-0 | sed 's/ctlplane=//')
Add cell information to overcloud controllers
_____________________________________________
On all central controllers add information on how to reach the cell controller
endpoint (usually internalapi) to `/etc/hosts`, from the undercloud:
.. code-block:: bash
CELL_INTERNALAPI_INFO=$(ssh heat-admin@${CELL_CTRL_IP} egrep \
cellcontrol.*\.internalapi /etc/hosts)
ansible -i /usr/bin/tripleo-ansible-inventory Controller -b \
-m lineinfile -a "dest=/etc/hosts line=\"$CELL_INTERNALAPI_INFO\""
.. note::
Do this outside the `HEAT_HOSTS_START` .. `HEAT_HOSTS_END` block, or
add it to an `ExtraHostFileEntries` section of an environment file for the
central overcloud controller. Add the environment file to the next
`overcloud deploy` run.
Extract transport_url and database connection
_____________________________________________
Get the `transport_url` and database `connection` endpoint information
from the cell controller. This information is used to create the cell in the
next step:
.. code-block:: bash
CELL_TRANSPORT_URL=$(ssh heat-admin@${CELL_CTRL_IP} sudo \
crudini --get /var/lib/config-data/nova/etc/nova/nova.conf DEFAULT transport_url)
CELL_MYSQL_VIP=$(ssh heat-admin@${CELL_CTRL_IP} sudo \
crudini --get /var/lib/config-data/nova/etc/nova/nova.conf database connection \
| perl -nle'/(\d+\.\d+\.\d+\.\d+)/ && print $1')
Create the cell
_______________
Log in to one of the central controllers and create the cell, referencing the
IP of the cell controller in the `database_connection` and the
`transport_url` extracted in the previous step, like:
.. code-block:: bash
ssh heat-admin@${CTRL_IP} sudo ${CONTAINERCLI} exec -i -u root nova_api \
nova-manage cell_v2 create_cell --name computecell1 \
--database_connection "{scheme}://{username}:{password}@$CELL_MYSQL_VIP/nova?{query}" \
--transport-url "$CELL_TRANSPORT_URL"
.. note::
Templated transport URLs for cells can be used if the default cell and the
additional cell have the same number of controllers. For further information
about templated URLs for cell mappings check: `Template URLs in Cell Mappings
<https://docs.openstack.org/nova/stein/user/cells.html#template-urls-in-cell-mappings>`_
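As a hedged sketch, the cell could then be created with a templated transport
URL (the host name is the one used earlier in this example), so that the
username and password are filled in from each host's local configuration
instead of being stored in the cell mapping:
.. code-block:: bash
ssh heat-admin@${CTRL_IP} sudo ${CONTAINERCLI} exec -i -u root nova_api \
nova-manage cell_v2 create_cell --name computecell1 \
--database_connection "{scheme}://{username}:{password}@$CELL_MYSQL_VIP/nova?{query}" \
--transport-url "rabbit://{username}:{password}@computecell1-cellcontrol-0.internalapi.cell1.test:5672/?ssl=0"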
.. code-block:: bash
ssh heat-admin@${CTRL_IP} sudo ${CONTAINERCLI} exec -i -u root nova_api \
nova-manage cell_v2 list_cells --verbose
After the cell has been created, the nova services on all central controllers
need to be restarted.
Docker:
.. code-block:: bash
ansible -i /usr/bin/tripleo-ansible-inventory Controller -b -a \
"docker restart nova_api nova_scheduler nova_conductor"
Podman:
.. code-block:: bash
ansible -i /usr/bin/tripleo-ansible-inventory Controller -b -a \
"systemctl restart tripleo_nova_api tripleo_nova_conductor tripleo_nova_scheduler"
We now see the cell controller services registered:
.. code-block:: bash
(overcloud) [stack@undercloud ~]$ nova service-list
Perform cell host discovery
___________________________
The final step is to discover the computes deployed in the cell. Run the host discovery
as explained in :ref:`cell_host_discovery`.
Create and add the node to an Availability Zone
_______________________________________________
After a cell has been provisioned, it is required to create an availability zone for the
cell to make sure an instance created in the cell stays in the cell when performing
a migration. Check :ref:`cell_availability_zone` for more information on how to create an
availability zone and add the node.
After that the cell is deployed and can be used.
.. note::
Migrating instances between cells is not supported. To move an instance to
a different cell it needs to be re-created in the new target cell.


@ -0,0 +1,172 @@
Managing the cell
-----------------
.. _cell_host_discovery:
Add a compute to a cell
~~~~~~~~~~~~~~~~~~~~~~~
To increase resource capacity of a running cell, you can start more servers of
a selected role. For more details on how to add nodes see :doc:`../post_deployment/scale_roles`.
After the node has been deployed, log in to one of the overcloud controllers and run
the cell host discovery:
.. code-block:: bash
CTRL_IP=$(openstack server list -f value -c Networks --name overcloud-controller-0 | sed 's/ctlplane=//')
# CONTAINERCLI can be either docker or podman
export CONTAINERCLI='docker'
# run cell host discovery
ssh heat-admin@${CTRL_IP} sudo ${CONTAINERCLI} exec -i -u root nova_api \
nova-manage cell_v2 discover_hosts --by-service --verbose
# verify the cell hosts
ssh heat-admin@${CTRL_IP} sudo ${CONTAINERCLI} exec -i -u root nova_api \
nova-manage cell_v2 list_hosts
Delete a compute from a cell
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
As an initial step, migrate all instances off the compute.
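A quick way to check that nothing is left on the compute before removing it is
sketched below; the host name is the one shown by the `list_hosts` output in the
next step, and the `--host` filter is used here for illustration only.
.. code-block:: bash
source overcloudrc
# list any remaining instances on the compute that is about to be removed
openstack server list --all-projects --long --host computecell1-novacompute-0.site1.test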
#. From one of the overcloud controllers, delete the computes from the cell:
.. code-block:: bash
CTRL_IP=$(openstack server list -f value -c Networks --name overcloud-controller-0 | sed 's/ctlplane=//')
# CONTAINERCLI can be either docker or podman
export CONTAINERCLI='docker'
# list the cell hosts
ssh heat-admin@${CTRL_IP} sudo ${CONTAINERCLI} exec -i -u root nova_api \
nova-manage cell_v2 list_hosts
# delete a node from a cell
ssh heat-admin@${CTRL_IP} sudo ${CONTAINERCLI} exec -i -u root nova_api \
nova-manage cell_v2 delete_host --cell_uuid <uuid> --host <compute>
#. Delete the node from the cell stack
See :doc:`../post_deployment/delete_nodes`.
#. Delete the resource providers from placement
This step is required because otherwise adding a compute node with the same
hostname will fail to register and update its resources with the placement
service:
.. code-block:: bash
sudo yum install python2-osc-placement
openstack resource provider list
+--------------------------------------+---------------------------------------+------------+
| uuid | name | generation |
+--------------------------------------+---------------------------------------+------------+
| 9cd04a8b-5e6c-428e-a643-397c9bebcc16 | computecell1-novacompute-0.site1.test | 11 |
+--------------------------------------+---------------------------------------+------------+
openstack resource provider delete 9cd04a8b-5e6c-428e-a643-397c9bebcc16
Delete a cell
~~~~~~~~~~~~~
As an initial step, delete all instances from the cell.
#. From one of the overcloud controllers, delete all computes from the cell:
.. code-block:: bash
CTRL_IP=$(openstack server list -f value -c Networks --name overcloud-controller-0 | sed 's/ctlplane=//')
# CONTAINERCLI can be either docker or podman
export CONTAINERCLI='docker'
# list the cell hosts
ssh heat-admin@${CTRL_IP} sudo ${CONTAINERCLI} exec -i -u root nova_api \
nova-manage cell_v2 list_hosts
# delete a node from a cell
ssh heat-admin@${CTRL_IP} sudo ${CONTAINERCLI} exec -i -u root nova_api \
nova-manage cell_v2 delete_host --cell_uuid <uuid> --host <compute>
#. On the cell controller, archive all deleted instances from the database:
.. code-block:: bash
CELL_CTRL_IP=$(openstack server list -f value -c Networks --name cellcontrol-0 | sed 's/ctlplane=//')
# CONTAINERCLI can be either docker or podman
export CONTAINERCLI='docker'
ssh heat-admin@${CELL_CTRL_IP} sudo ${CONTAINERCLI} exec -i -u root nova_conductor \
nova-manage db archive_deleted_rows --verbose
#. From one of the overcloud controllers, delete the cell:
.. code-block:: bash
CTRL_IP=$(openstack server list -f value -c Networks --name overcloud-controller-0 | sed 's/ctlplane=//')
# CONTAINERCLI can be either docker or podman
export CONTAINERCLI='docker'
# list the cells
ssh heat-admin@${CTRL_IP} sudo ${CONTAINERCLI} exec -i -u root nova_api \
nova-manage cell_v2 list_cells
# delete the cell
ssh heat-admin@${CTRL_IP} sudo ${CONTAINERCLI} exec -i -u root nova_api \
nova-manage cell_v2 delete_cell --cell_uuid <uuid>
#. Delete the cell stack:
.. code-block:: bash
openstack stack delete <stack name> --wait --yes && openstack overcloud plan delete <stack name>
.. note::
If the cell consists of a controller and a compute stack, delete the
compute stack first and then the controller stack, as sketched after this procedure.
#. From a system which can reach the placement endpoint, delete the resource providers from placement
This step is required because otherwise adding a compute node with the same
hostname will fail to register and update its resources with the placement
service:
.. code-block:: bash
sudo yum install python2-osc-placement
openstack resource provider list
+--------------------------------------+---------------------------------------+------------+
| uuid | name | generation |
+--------------------------------------+---------------------------------------+------------+
| 9cd04a8b-5e6c-428e-a643-397c9bebcc16 | computecell1-novacompute-0.site1.test | 11 |
+--------------------------------------+---------------------------------------+------------+
openstack resource provider delete 9cd04a8b-5e6c-428e-a643-397c9bebcc16
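For the split controller/compute layout from the advanced example, the stack
deletion order mentioned in the note above would look like the following
sketch, assuming the `cell1-ctrl` and `cell1-cmp` stack names:
.. code-block:: bash
# delete the compute stack first, then the cell controller stack
openstack stack delete cell1-cmp --wait --yes && openstack overcloud plan delete cell1-cmp
openstack stack delete cell1-ctrl --wait --yes && openstack overcloud plan delete cell1-ctrl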
Updating a cell
~~~~~~~~~~~~~~~
Each stack in a multi-stack cell deployment must be updated to perform a full minor
update across the entire deployment.
Cells can be updated just like the overcloud nodes following update procedure described
in :ref:`package_update` and using appropriate stack name for update commands.
The control plane and cell controller stack should be updated first by completing all
the steps from the minor update procedure.
Once the control plane stack is updated, re-run the export command to recreate the
required input files for each separate cell stack.
.. note::
Before re-running the export command, backup the previously used input file so that
the previous versions are not overwritten. In the event that a separate cell stack
needs a stack update operation performed prior to the minor update procedure, the
previous versions of the exported files should be used.
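A minimal sketch of re-running the export for the cell controller stack used in
the advanced example, after backing up the previously used input file:
.. code-block:: bash
cp cell1/cell1-ctrl-input.yaml cell1/cell1-ctrl-input.yaml.bak
openstack overcloud cell export cell1-ctrl -o cell1/cell1-ctrl-input.yaml --force-overwrite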


@ -0,0 +1,705 @@
Example 3. - Advanced example using split cell controller/compute architecture and routed networks in Train release
===================================================================================================================
.. warning::
Multi cell support is only supported in Stein or later versions.
This guide addresses Train release and later!
.. contents::
:depth: 3
:backlinks: none
This guide assumes that you are ready to deploy a new overcloud, or have
already installed an overcloud (min Train release).
.. note::
Starting with CentOS 8 and TripleO Stein release, podman is the CONTAINERCLI
to be used in the following steps.
In this example we build on :doc:`deploy_cellv2_advanced`, using a routed spine and
leaf networking layout to deploy an additional cell. Not all nodes need
to be co-located at the same physical location or datacenter. See
:ref:`routed_spine_leaf_network` for more details.
The nodes deployed to the control plane, which are part of the overcloud stack,
use different networks than the cell stacks, which are separated into a cell
controller stack and a cell compute stack. The cell controller and cell compute
stacks use the same networks.
.. note::
In this example the routing for the different VLAN subnets is done by
the undercloud, which must **not** be done in a production environment
as it is a single point of failure!
Used networks
^^^^^^^^^^^^^
The following provides an overview of the used networks and subnet
details for this example:
.. code-block:: yaml
InternalApi
internal_api_subnet
vlan: 20
net: 172.16.2.0/24
route: 172.17.2.0/24 gw: 172.16.2.254
internal_api_cell1
vlan: 21
net: 172.17.2.0/24
gateway: 172.17.2.254
Storage
storage_subnet
vlan: 30
net: 172.16.1.0/24
route: 172.17.1.0/24 gw: 172.16.1.254
storage_cell1
vlan: 31
net: 172.17.1.0/24
gateway: 172.17.1.254
StorageMgmt
storage_mgmt_subnet
vlan: 40
net: 172.16.3.0/24
route: 172.17.3.0/24 gw: 172.16.3.254
storage_mgmt_cell1
vlan: 41
net: 172.17.3.0/24
gateway: 172.17.3.254
Tenant
tenant_subnet
vlan: 50
net: 172.16.0.0/24
External
external_subnet
vlan: 10
net: 10.0.0.0/24
external_cell1
vlan: 11
net: 10.0.1.0/24
gateway: 10.0.1.254
Prepare control plane for cell network routing
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. code-block:: bash
openstack overcloud status
+-----------+-------------------+
| Plan Name | Deployment Status |
+-----------+-------------------+
| overcloud | DEPLOY_SUCCESS |
+-----------+-------------------+
openstack server list -c Name -c Status -c Networks
+-------------------------+--------+------------------------+
| Name | Status | Networks |
+-------------------------+--------+------------------------+
| overcloud-controller-2 | ACTIVE | ctlplane=192.168.24.29 |
| overcloud-controller-0 | ACTIVE | ctlplane=192.168.24.18 |
| overcloud-controller-1 | ACTIVE | ctlplane=192.168.24.20 |
| overcloud-novacompute-0 | ACTIVE | ctlplane=192.168.24.16 |
+-------------------------+--------+------------------------+
The overcloud stack for the control plane was deployed using a `routes.yaml`
environment file to add the routing information for the new cell
subnets; a deploy sketch follows the parameter example below.
.. code-block:: yaml
parameter_defaults:
  InternalApiInterfaceRoutes:
    - destination: 172.17.2.0/24
      nexthop: 172.16.2.254
  StorageInterfaceRoutes:
    - destination: 172.17.1.0/24
      nexthop: 172.16.1.254
  StorageMgmtInterfaceRoutes:
    - destination: 172.17.3.0/24
      nexthop: 172.16.3.254
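The `routes.yaml` environment file is passed to the control plane deployment with
``-e`` like any other environment file; a minimal sketch (the full set of environment
files depends on your deployment):
.. code-block:: bash
openstack overcloud deploy \
  --templates /usr/share/openstack-tripleo-heat-templates \
  --stack overcloud \
  -e ... other environment files used for the overcloud stack ... \
  -e routes.yaml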
Reusing networks and adding cell subnets
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
To prepare the `network_data` parameter file for the cell controller stack,
the file from the control plane is used as a base:
.. code-block:: bash
cp /usr/share/openstack-tripleo-heat-templates/network_data.yaml cell1/network_data-ctrl.yaml
When deploying a cell in separate stacks it may be necessary to reuse networks,
subnets, segments, and VIP resources between stacks. Only a single Heat stack
can own a resource and be responsible for its creation and deletion, however
the resources can be reused in other stacks.
To reuse network related resources between stacks, the following parameters have
been added to the network definitions in the network_data.yaml file format:
.. code-block:: yaml
external_resource_network_id: Existing Network UUID
external_resource_subnet_id: Existing Subnet UUID
external_resource_segment_id: Existing Segment UUID
external_resource_vip_id: Existing VIP UUID
These parameters can be set on each network definition in the `network_data-ctrl.yaml`
file used for the deployment of the separate stack.
Not all networks need to be reused or shared across stacks. The `external_resource_*`
parameters can be set for only the networks that are meant to be shared, while
the other networks can be newly created and managed.
In this example we reuse all networks except the management network, as it
is not used at all.
The resulting storage network here looks like this:
.. code-block:: yaml
- name: Storage
  external_resource_network_id: 30e9d52d-1929-47ed-884b-7c6d65fa2e00
  external_resource_subnet_id: 11a3777a-8c42-4314-a47f-72c86e9e6ad4
  vip: true
  vlan: 30
  name_lower: storage
  ip_subnet: '172.16.1.0/24'
  allocation_pools: [{'start': '172.16.1.4', 'end': '172.16.1.250'}]
  ipv6_subnet: 'fd00:fd00:fd00:3000::/64'
  ipv6_allocation_pools: [{'start': 'fd00:fd00:fd00:3000::10', 'end': 'fd00:fd00:fd00:3000:ffff:ffff:ffff:fffe'}]
  mtu: 1500
  subnets:
    storage_cell1:
      vlan: 31
      ip_subnet: '172.17.1.0/24'
      allocation_pools: [{'start': '172.17.1.10', 'end': '172.17.1.250'}]
      gateway_ip: '172.17.1.254'
We added the `external_resource_network_id` and `external_resource_subnet_id` of
the control plane stack as we want to reuse those resources:
.. code-block:: bash
openstack network show storage -c id -f value
openstack subnet show storage_subnet -c id -f value
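To collect the UUIDs for all reused networks at once, a small loop can help; a sketch
assuming the default network and subnet names created by the control plane stack:
.. code-block:: bash
for net in storage storage_mgmt internal_api tenant external; do
  echo "${net}:"
  echo "  external_resource_network_id: $(openstack network show ${net} -c id -f value)"
  echo "  external_resource_subnet_id: $(openstack subnet show ${net}_subnet -c id -f value)"
done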
In addition a new `storage_cell1` subnet is now added to the `subnets` section
to get it created in the cell controller stack for cell1:
.. code-block:: yaml
subnets:
  storage_cell1:
    vlan: 31
    ip_subnet: '172.17.1.0/24'
    allocation_pools: [{'start': '172.17.1.10', 'end': '172.17.1.250'}]
    gateway_ip: '172.17.1.254'
.. note::
In this example no Management network is used, therefore it was removed.
Full networks data example:
.. code-block:: yaml
- name: Storage
  external_resource_network_id: 30e9d52d-1929-47ed-884b-7c6d65fa2e00
  external_resource_subnet_id: 11a3777a-8c42-4314-a47f-72c86e9e6ad4
  vip: true
  vlan: 30
  name_lower: storage
  ip_subnet: '172.16.1.0/24'
  allocation_pools: [{'start': '172.16.1.4', 'end': '172.16.1.250'}]
  ipv6_subnet: 'fd00:fd00:fd00:3000::/64'
  ipv6_allocation_pools: [{'start': 'fd00:fd00:fd00:3000::10', 'end': 'fd00:fd00:fd00:3000:ffff:ffff:ffff:fffe'}]
  mtu: 1500
  subnets:
    storage_cell1:
      vlan: 31
      ip_subnet: '172.17.1.0/24'
      allocation_pools: [{'start': '172.17.1.10', 'end': '172.17.1.250'}]
      gateway_ip: '172.17.1.254'
- name: StorageMgmt
  name_lower: storage_mgmt
  external_resource_network_id: 29e85314-2177-4cbd-aac8-6faf2a3f7031
  external_resource_subnet_id: 01c0a75e-e62f-445d-97ad-b98a141d6082
  vip: true
  vlan: 40
  ip_subnet: '172.16.3.0/24'
  allocation_pools: [{'start': '172.16.3.4', 'end': '172.16.3.250'}]
  ipv6_subnet: 'fd00:fd00:fd00:4000::/64'
  ipv6_allocation_pools: [{'start': 'fd00:fd00:fd00:4000::10', 'end': 'fd00:fd00:fd00:4000:ffff:ffff:ffff:fffe'}]
  mtu: 1500
  subnets:
    storage_mgmt_cell1:
      vlan: 41
      ip_subnet: '172.17.3.0/24'
      allocation_pools: [{'start': '172.17.3.10', 'end': '172.17.3.250'}]
      gateway_ip: '172.17.3.254'
- name: InternalApi
  name_lower: internal_api
  external_resource_network_id: 5eb79743-7ff4-4f68-9904-6e9c36fbaaa6
  external_resource_subnet_id: dbc24086-0aa7-421d-857d-4e3956adec10
  vip: true
  vlan: 20
  ip_subnet: '172.16.2.0/24'
  allocation_pools: [{'start': '172.16.2.4', 'end': '172.16.2.250'}]
  ipv6_subnet: 'fd00:fd00:fd00:2000::/64'
  ipv6_allocation_pools: [{'start': 'fd00:fd00:fd00:2000::10', 'end': 'fd00:fd00:fd00:2000:ffff:ffff:ffff:fffe'}]
  mtu: 1500
  subnets:
    internal_api_cell1:
      vlan: 21
      ip_subnet: '172.17.2.0/24'
      allocation_pools: [{'start': '172.17.2.10', 'end': '172.17.2.250'}]
      gateway_ip: '172.17.2.254'
- name: Tenant
  external_resource_network_id: ee83d0fb-3bf1-47f2-a02b-ef5dc277afae
  external_resource_subnet_id: 0b6030ae-8445-4480-ab17-dd4c7c8fa64b
  vip: false  # Tenant network does not use VIPs
  name_lower: tenant
  vlan: 50
  ip_subnet: '172.16.0.0/24'
  allocation_pools: [{'start': '172.16.0.4', 'end': '172.16.0.250'}]
  ipv6_subnet: 'fd00:fd00:fd00:5000::/64'
  ipv6_allocation_pools: [{'start': 'fd00:fd00:fd00:5000::10', 'end': 'fd00:fd00:fd00:5000:ffff:ffff:ffff:fffe'}]
  mtu: 1500
- name: External
  external_resource_network_id: 89b7b481-f609-45e7-ad5e-e006553c1d3a
  external_resource_subnet_id: dd84112d-2129-430c-a8c2-77d2dee05af2
  vip: true
  name_lower: external
  vlan: 10
  ip_subnet: '10.0.0.0/24'
  allocation_pools: [{'start': '10.0.0.4', 'end': '10.0.0.250'}]
  gateway_ip: '10.0.0.1'
  ipv6_subnet: '2001:db8:fd00:1000::/64'
  ipv6_allocation_pools: [{'start': '2001:db8:fd00:1000::10', 'end': '2001:db8:fd00:1000:ffff:ffff:ffff:fffe'}]
  gateway_ipv6: '2001:db8:fd00:1000::1'
  mtu: 1500
  subnets:
    external_cell1:
      vlan: 11
      ip_subnet: '10.0.1.0/24'
      allocation_pools: [{'start': '10.0.1.10', 'end': '10.0.1.250'}]
      gateway_ip: '10.0.1.254'
.. note::
When not sharing networks between stacks, each network defined in `network_data*.yaml`
must have a unique name across all deployed stacks. This requirement is necessary
since regardless of the stack, all networks are created in the same tenant in
Neutron on the undercloud.
Export EndpointMap, HostsEntry, AllNodesConfig, GlobalConfig and passwords information
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Follow the steps as explained in :ref:`cell_export_overcloud_info` on how to
export the required data from the overcloud stack.
Cell roles
^^^^^^^^^^
Modify the cell roles file to use new subnets for `InternalApi`, `Storage`,
`StorageMgmt` and `External` for cell controller and compute:
.. code-block:: bash
openstack overcloud roles generate --roles-path \
/usr/share/openstack-tripleo-heat-templates/roles \
-o $DIR/cell_roles_data.yaml Compute CellController
For each role modify the subnets to match what was defined in the previous step
in `cell1/network_data-ctrl.yaml`:
.. code-block:: yaml
- name: Compute
  description: |
    Basic Compute Node role
  CountDefault: 1
  # Create external Neutron bridge (unset if using ML2/OVS without DVR)
  tags:
    - external_bridge
  networks:
    InternalApi:
      subnet: internal_api_cell1
    Tenant:
      subnet: tenant_subnet
    Storage:
      subnet: storage_cell1
...
- name: CellController
  description: |
    CellController role for the nova cell_v2 controller services
  CountDefault: 1
  tags:
    - primary
    - controller
  networks:
    External:
      subnet: external_cell1
    InternalApi:
      subnet: internal_api_cell1
    Storage:
      subnet: storage_cell1
    StorageMgmt:
      subnet: storage_mgmt_cell1
    Tenant:
      subnet: tenant_subnet
Create the cell parameter file
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Each cell has some mandatory parameters which need to be set using an
environment file.
Add the following content into a parameter file for the cell, e.g. `cell1/cell1.yaml`:
.. code-block:: yaml
parameter_defaults:
  # new CELL Parameter to reflect that this is an additional CELL
  NovaAdditionalCell: True
  # The DNS names for the VIPs for the cell
  CloudName: cell1.ooo.test
  CloudNameInternal: cell1.internalapi.ooo.test
  CloudNameStorage: cell1.storage.ooo.test
  CloudNameStorageManagement: cell1.storagemgmt.ooo.test
  CloudNameCtlplane: cell1.ctlplane.ooo.test
  # Flavors used for the cell controller and computes
  OvercloudCellControllerFlavor: cellcontroller
  OvercloudComputeFlavor: compute
  # number of controllers/computes in the cell
  CellControllerCount: 3
  ComputeCount: 0
  # default gateway
  ControlPlaneStaticRoutes:
    - ip_netmask: 0.0.0.0/0
      next_hop: 192.168.24.1
      default: true
  DnsServers:
    - x.x.x.x
Virtual IP addresses
^^^^^^^^^^^^^^^^^^^^
The cell controller is hosting VIPs (Virtual IP addresses) and is not using
the base subnet of one or more networks, therefore additional overrides to the
`VipSubnetMap` are required to ensure VIPs are created on the subnet associated
with the L2 network segment the controller nodes are connected to.
Add a `VipSubnetMap` to the `cell1/cell1.yaml` or a new parameter file to
point the VIPs to the correct subnet:
.. code-block:: yaml
parameter_defaults:
  VipSubnetMap:
    InternalApi: internal_api_cell1
    Storage: storage_cell1
    StorageMgmt: storage_mgmt_cell1
    External: external_cell1
Create the network configuration for `cellcontroller` and add to environment file
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Depending on the network architecture and the hardware used, it is required to
register a network configuration resource for the `CellController` role in
`cell1/cell1.yaml`.
.. code-block:: yaml
resource_registry:
  OS::TripleO::CellController::Net::SoftwareConfig: cell1/single-nic-vlans/controller.yaml
  OS::TripleO::Compute::Net::SoftwareConfig: cell1/single-nic-vlans/compute.yaml
.. note::
For details on network configuration consult :ref:`network_isolation` guide, chapter *Customizing the Interface Templates*.
Deploy the cell controllers
^^^^^^^^^^^^^^^^^^^^^^^^^^^
Create new flavor used to tag the cell controller
_________________________________________________
Follow the instructions in :ref:`cell_create_flavor_and_tag` on how to create
a new flavor and tag the cell controller.
Run cell deployment
___________________
To deploy the cell controllers we use the same `overcloud deploy` command as
was used to deploy the `overcloud` stack and add the created export
environment files:
.. code-block:: bash
openstack overcloud deploy \
--templates /usr/share/openstack-tripleo-heat-templates \
-e ... additional environment files used for overcloud stack, like container
prepare parameters, or other specific parameters for the cell
...
--stack cell1-ctrl \
-n $HOME/$DIR/network_data-ctrl.yaml \
-r $HOME/$DIR/cell_roles_data.yaml \
-e $HOME/$DIR/cell1-ctrl-input.yaml \
-e $HOME/$DIR/cell1.yaml
Wait for the deployment to finish:
.. code-block:: bash
openstack stack list
+--------------------------------------+------------+----------------------------------+-----------------+----------------------+----------------------+
| ID | Stack Name | Project | Stack Status | Creation Time | Updated Time |
+--------------------------------------+------------+----------------------------------+-----------------+----------------------+----------------------+
| 6403ed94-7c8f-47eb-bdb8-388a5ac7cb20 | cell1-ctrl | f7736589861c47d8bbf1ecd29f02823d | CREATE_COMPLETE | 2019-08-15T14:46:32Z | None |
| 925a2875-fbbb-41fd-bb06-bf19cded2510 | overcloud | f7736589861c47d8bbf1ecd29f02823d | UPDATE_COMPLETE | 2019-08-13T10:43:20Z | 2019-08-15T10:13:41Z |
+--------------------------------------+------------+----------------------------------+-----------------+----------------------+----------------------+
Create the cell
^^^^^^^^^^^^^^^
Create the cell as described in :ref:`cell_create_cell`, but skip the final host
discovery step as the computes are not yet deployed.
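A condensed sketch of the commands from the referenced section; the connection
details are placeholders and have to be taken from the cell controller stack:
.. code-block:: bash
ssh heat-admin@<ctlplane ip overcloud-controller-0>
# CONTAINERCLI can be either docker or podman
export CONTAINERCLI='podman'
sudo $CONTAINERCLI exec -it -u root nova_api /bin/bash
nova-manage cell_v2 create_cell --name cell1 \
  --database_connection '{scheme}://{username}:{password}@<cell database vip>/nova?{query}' \
  --transport-url 'rabbit://<user>:<password>@<cell controller internalapi fqdn>:5672/?ssl=0'
nova-manage cell_v2 list_cells --verbose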
Extract deployment information from the cell controller stack
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Follow the steps explained in :ref:`cell_export_cell_controller_info` on
how to export the required input data from the cell controller stack.
Create cell compute parameter file for additional customization
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Create the `cell1/cell1-cmp.yaml` parameter file to overwrite settings
which are different from the cell controller stack.
.. code-block:: yaml
parameter_defaults:
  # number of controllers/computes in the cell
  CellControllerCount: 0
  ComputeCount: 1
The above file overrides the values from `cell1/cell1.yaml` so that no
controller is deployed in the cell compute stack. Since the cell compute stack
uses the same roles file, the default `CellControllerCount` is 1.
Reusing networks from control plane and cell controller stack
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
For the cell compute stack we reuse the networks from the control plane
stack and the subnet from the cell controller stack. Therefore references
to the external resources for network, subnet, segment and vip are required:
.. code-block:: bash
cp cell1/network_data-ctrl.yaml cell1/network_data-cmp.yaml
The storage network definition in `cell1/network_data-cmp.yaml` looks
like this:
.. code-block:: yaml
- name: Storage
  external_resource_network_id: 30e9d52d-1929-47ed-884b-7c6d65fa2e00
  external_resource_subnet_id: 11a3777a-8c42-4314-a47f-72c86e9e6ad4
  external_resource_vip_id: 4ed73ea9-4cf6-42c1-96a5-e32b415c738f
  vip: true
  vlan: 30
  name_lower: storage
  ip_subnet: '172.16.1.0/24'
  allocation_pools: [{'start': '172.16.1.4', 'end': '172.16.1.250'}]
  ipv6_subnet: 'fd00:fd00:fd00:3000::/64'
  ipv6_allocation_pools: [{'start': 'fd00:fd00:fd00:3000::10', 'end': 'fd00:fd00:fd00:3000:ffff:ffff:ffff:fffe'}]
  mtu: 1500
  subnets:
    storage_cell1:
      vlan: 31
      ip_subnet: '172.17.1.0/24'
      allocation_pools: [{'start': '172.17.1.10', 'end': '172.17.1.250'}]
      gateway_ip: '172.17.1.254'
      external_resource_subnet_id: 7930635d-d1d5-4699-b318-00233c73ed6b
      external_resource_segment_id: 730769f8-e78f-42a3-9dd4-367a212e49ff
Previously we already added the `external_resource_network_id` and `external_resource_subnet_id`
for the network at the upper level of the hierarchy.
In addition we add the `external_resource_vip_id` of the VIP of the stack which
should be reused for this network (Storage).
It is important that the `external_resource_vip_id` for the InternalApi network points
to the VIP of the cell controller stack!
.. code-block:: bash
openstack port show <id storage_virtual_ip overcloud stack> -c id -f value
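If the VIP port IDs are not known, they can usually be looked up by name on the
undercloud; the ``*_virtual_ip`` port names are an assumption based on the default
TripleO VIP port naming:
.. code-block:: bash
openstack port list -c ID -c Name -f value | grep virtual_ip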
In the `storage_cell1` subnet section we add the `external_resource_subnet_id`
and `external_resource_segment_id` of the cell controller stack:
.. code-block:: yaml
storage_cell1:
  vlan: 31
  ip_subnet: '172.17.1.0/24'
  allocation_pools: [{'start': '172.17.1.10', 'end': '172.17.1.250'}]
  gateway_ip: '172.17.1.254'
  external_resource_subnet_id: 7930635d-d1d5-4699-b318-00233c73ed6b
  external_resource_segment_id: 730769f8-e78f-42a3-9dd4-367a212e49ff
.. code-block:: bash
openstack subnet show storage_cell1 -c id -f value
openstack network segment show storage_storage_cell1 -c id -f value
Full networks data example for the compute stack:
.. code-block:: yaml
- name: Storage
  external_resource_network_id: 30e9d52d-1929-47ed-884b-7c6d65fa2e00
  external_resource_subnet_id: 11a3777a-8c42-4314-a47f-72c86e9e6ad4
  external_resource_vip_id: 4ed73ea9-4cf6-42c1-96a5-e32b415c738f
  vip: true
  vlan: 30
  name_lower: storage
  ip_subnet: '172.16.1.0/24'
  allocation_pools: [{'start': '172.16.1.4', 'end': '172.16.1.250'}]
  ipv6_subnet: 'fd00:fd00:fd00:3000::/64'
  ipv6_allocation_pools: [{'start': 'fd00:fd00:fd00:3000::10', 'end': 'fd00:fd00:fd00:3000:ffff:ffff:ffff:fffe'}]
  mtu: 1500
  subnets:
    storage_cell1:
      vlan: 31
      ip_subnet: '172.17.1.0/24'
      allocation_pools: [{'start': '172.17.1.10', 'end': '172.17.1.250'}]
      gateway_ip: '172.17.1.254'
      external_resource_subnet_id: 7930635d-d1d5-4699-b318-00233c73ed6b
      external_resource_segment_id: 730769f8-e78f-42a3-9dd4-367a212e49ff
- name: StorageMgmt
  name_lower: storage_mgmt
  external_resource_network_id: 29e85314-2177-4cbd-aac8-6faf2a3f7031
  external_resource_subnet_id: 01c0a75e-e62f-445d-97ad-b98a141d6082
  external_resource_segment_id: 4b4f6f83-f031-4495-84c5-7422db1729d5
  vip: true
  vlan: 40
  ip_subnet: '172.16.3.0/24'
  allocation_pools: [{'start': '172.16.3.4', 'end': '172.16.3.250'}]
  ipv6_subnet: 'fd00:fd00:fd00:4000::/64'
  ipv6_allocation_pools: [{'start': 'fd00:fd00:fd00:4000::10', 'end': 'fd00:fd00:fd00:4000:ffff:ffff:ffff:fffe'}]
  mtu: 1500
  subnets:
    storage_mgmt_cell1:
      vlan: 41
      ip_subnet: '172.17.3.0/24'
      allocation_pools: [{'start': '172.17.3.10', 'end': '172.17.3.250'}]
      gateway_ip: '172.17.3.254'
      external_resource_subnet_id: de9233d4-53a3-485d-8433-995a9057383f
      external_resource_segment_id: 2400718d-7fbd-4227-8318-245747495241
- name: InternalApi
  name_lower: internal_api
  external_resource_network_id: 5eb79743-7ff4-4f68-9904-6e9c36fbaaa6
  external_resource_subnet_id: dbc24086-0aa7-421d-857d-4e3956adec10
  external_resource_vip_id: 1a287ad7-e574-483a-8288-e7c385ee88a0
  vip: true
  vlan: 20
  ip_subnet: '172.16.2.0/24'
  allocation_pools: [{'start': '172.16.2.4', 'end': '172.16.2.250'}]
  ipv6_subnet: 'fd00:fd00:fd00:2000::/64'
  ipv6_allocation_pools: [{'start': 'fd00:fd00:fd00:2000::10', 'end': 'fd00:fd00:fd00:2000:ffff:ffff:ffff:fffe'}]
  mtu: 1500
  subnets:
    internal_api_cell1:
      external_resource_subnet_id: 16b8cf48-6ca1-4117-ad90-3273396cb41d
      external_resource_segment_id: b310daec-7811-46be-a958-a05a5b0569ef
      vlan: 21
      ip_subnet: '172.17.2.0/24'
      allocation_pools: [{'start': '172.17.2.10', 'end': '172.17.2.250'}]
      gateway_ip: '172.17.2.254'
- name: Tenant
  external_resource_network_id: ee83d0fb-3bf1-47f2-a02b-ef5dc277afae
  external_resource_subnet_id: 0b6030ae-8445-4480-ab17-dd4c7c8fa64b
  vip: false  # Tenant network does not use VIPs
  name_lower: tenant
  vlan: 50
  ip_subnet: '172.16.0.0/24'
  allocation_pools: [{'start': '172.16.0.4', 'end': '172.16.0.250'}]
  ipv6_subnet: 'fd00:fd00:fd00:5000::/64'
  ipv6_allocation_pools: [{'start': 'fd00:fd00:fd00:5000::10', 'end': 'fd00:fd00:fd00:5000:ffff:ffff:ffff:fffe'}]
  mtu: 1500
- name: External
  external_resource_network_id: 89b7b481-f609-45e7-ad5e-e006553c1d3a
  external_resource_subnet_id: dd84112d-2129-430c-a8c2-77d2dee05af2
  external_resource_vip_id: b7a0606d-f598-4dc6-9e85-e023c64fd20b
  vip: true
  name_lower: external
  vlan: 10
  ip_subnet: '10.0.0.0/24'
  allocation_pools: [{'start': '10.0.0.4', 'end': '10.0.0.250'}]
  gateway_ip: '10.0.0.1'
  ipv6_subnet: '2001:db8:fd00:1000::/64'
  ipv6_allocation_pools: [{'start': '2001:db8:fd00:1000::10', 'end': '2001:db8:fd00:1000:ffff:ffff:ffff:fffe'}]
  gateway_ipv6: '2001:db8:fd00:1000::1'
  mtu: 1500
  subnets:
    external_cell1:
      vlan: 11
      ip_subnet: '10.0.1.0/24'
      allocation_pools: [{'start': '10.0.1.10', 'end': '10.0.1.250'}]
      gateway_ip: '10.0.1.254'
      external_resource_subnet_id: 81ac9bc2-4fbe-40be-ac0e-9aa425799626
      external_resource_segment_id: 8a877c1f-cb47-40dd-a906-6731f042e544
Deploy the cell computes
^^^^^^^^^^^^^^^^^^^^^^^^
Run cell deployment
___________________
To deploy the cell computes we use the same `overcloud deploy` command as
was used to deploy the `cell1-ctrl` stack and add the created export
environment files:
.. code-block:: bash
openstack overcloud deploy \
--templates /usr/share/openstack-tripleo-heat-templates \
-e ... additional environment files used for overcloud stack, like container
prepare parameters, or other specific parameters for the cell
...
--stack cell1-cmp \
-r $HOME/$DIR/cell_roles_data.yaml \
-n $HOME/$DIR/network_data-cmp.yaml \
-e $HOME/$DIR/cell1-ctrl-input.yaml \
-e $HOME/$DIR/cell1-cmp-input.yaml \
-e $HOME/$DIR/cell1.yaml \
-e $HOME/$DIR/cell1-cmp.yaml
Wait for the deployment to finish:
.. code-block:: bash
openstack stack list
+--------------------------------------+------------+----------------------------------+--------------------+----------------------+----------------------+
| ID | Stack Name | Project | Stack Status | Creation Time | Updated Time |
+--------------------------------------+------------+----------------------------------+--------------------+----------------------+----------------------+
| 12e86ea6-3725-482a-9b05-b283378dcf30 | cell1-cmp | f7736589861c47d8bbf1ecd29f02823d | CREATE_COMPLETE | 2019-08-15T15:57:19Z | None |
| 6403ed94-7c8f-47eb-bdb8-388a5ac7cb20 | cell1-ctrl | f7736589861c47d8bbf1ecd29f02823d | CREATE_COMPLETE | 2019-08-15T14:46:32Z | None |
| 925a2875-fbbb-41fd-bb06-bf19cded2510 | overcloud | f7736589861c47d8bbf1ecd29f02823d | UPDATE_COMPLETE | 2019-08-13T10:43:20Z | 2019-08-15T10:13:41Z |
+--------------------------------------+------------+----------------------------------+--------------------+----------------------+----------------------+
Perform cell host discovery
___________________________
The final step is to discover the computes deployed in the cell. Run the host discovery
as explained in :ref:`cell_host_discovery`.
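A short sketch of what the discovery looks like, run from one of the central
controllers (see the referenced section for details):
.. code-block:: bash
ssh heat-admin@<ctlplane ip overcloud-controller-0>
# CONTAINERCLI can be either docker or podman
export CONTAINERCLI='podman'
sudo $CONTAINERCLI exec -it -u root nova_api /bin/bash
nova-manage cell_v2 discover_hosts --by-service --verbose
nova-manage cell_v2 list_hosts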
Create and add the node to an Availability Zone
_______________________________________________
After a cell has been provisioned, it is required to create an availability zone for the
cell to make sure an instance created in the cell stays in the cell when performing
a migration. Check :ref:`cell_availability_zone` for more information on how to create an
availability zone and add the node.
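A minimal sketch; the aggregate name and the compute host name are examples and
need to match your environment:
.. code-block:: bash
source overcloudrc
openstack aggregate create --zone cell1 cell1
openstack aggregate add host cell1 <cell1 compute hostname>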
After that the cell is deployed and can be used.
.. note::
Migrating instances between cells is not supported. To move an instance to
a different cell it needs to be re-created in the new target cell.

Deploy an additional nova cell v2 in Stein release
==================================================
.. warning::
Multi cell deployments are only supported in Stein and later versions.
This guide addresses only the Stein release!
.. contents::
:depth: 3
:backlinks: none
This guide assumes that you are ready to deploy a new overcloud, or have
already installed an overcloud (min Stein release).
.. note::
Starting with CentOS 8 and TripleO Stein release, podman is the CONTAINERCLI
to be used in the following steps.
Initial Deploy
--------------
.. note::
Right now the current implementation does not support running nova metadata
API per cell as explained in the cells v2 layout section `Local per cell
<https://docs.openstack.org/nova/latest/user/cellsv2-layout.html#nova-metadata-api-service>`_
The following example uses six nodes and the split control plane method to
simulate a distributed cell deployment. The first Heat stack deploys a controller
cluster and a compute. The second Heat stack deploys a cell controller and a
compute node::
openstack overcloud status
+-----------+---------------------+---------------------+-------------------+
| Plan Name | Created | Updated | Deployment Status |
+-----------+---------------------+---------------------+-------------------+
| overcloud | 2019-02-12 09:00:27 | 2019-02-12 09:00:27 | DEPLOY_SUCCESS |
+-----------+---------------------+---------------------+-------------------+
openstack server list -c Name -c Status -c Networks
+----------------------------+--------+------------------------+
| Name | Status | Networks |
+----------------------------+--------+------------------------+
| overcloud-controller-1 | ACTIVE | ctlplane=192.168.24.19 |
| overcloud-controller-2 | ACTIVE | ctlplane=192.168.24.11 |
| overcloud-controller-0 | ACTIVE | ctlplane=192.168.24.29 |
| overcloud-novacompute-0 | ACTIVE | ctlplane=192.168.24.15 |
+----------------------------+--------+------------------------+
.. note::
In this example the default cell and the additional cell use the
same network. When configuring another network scenario keep in
mind that it will be necessary for the systems to be able to
communicate with each other.
Extract deployment information from the overcloud stack
-------------------------------------------------------
Any additional cell stack requires information from the overcloud Heat stack
where the central OpenStack services are located. The extracted parameters are
needed as input for additional cell stacks. To extract these parameters
into separate files in a directory (e.g. DIR=cell1) run the following::
source stackrc
mkdir cell1
export DIR=cell1
#. Export the default cell EndpointMap
.. code::
openstack stack output show overcloud EndpointMap --format json \
| jq '{"parameter_defaults": {"EndpointMapOverride": .output_value}}' \
> $DIR/endpoint-map.json
#. Export the default cell HostsEntry
.. code::
openstack stack output show overcloud HostsEntry -f json \
| jq -r '{"parameter_defaults":{"ExtraHostFileEntries": .output_value}}' \
> $DIR/extra-host-file-entries.json
#. Export AllNodesConfig and GlobalConfig information
In addition to the ``GlobalConfig``, which contains the RPC information (port,
ssl, scheme, user and password), additional information from the ``AllNodesConfig``
is required to point components to the default cell service instead of the
service served by the cell controller. These are
* ``oslo_messaging_notify_short_bootstrap_node_name`` - default cell overcloud
messaging notify bootstrap node information
* ``oslo_messaging_notify_node_names`` - default cell overcloud messaging notify
node information
* ``oslo_messaging_rpc_node_names`` - default cell overcloud messaging rpc node
information as e.g. neutron agent needs to point to the overcloud messaging
cluster
* ``memcached_node_ips`` - memcached node information used by the cell services.
.. code::
ALLNODESCFG=$(openstack stack output show overcloud AllNodesConfig --format json)
GLOBALCFG=$(openstack stack output show overcloud GlobalConfig --format json)
(echo $ALLNODESCFG | jq '.output_value |
{oslo_messaging_notify_short_bootstrap_node_name:
.oslo_messaging_notify_short_bootstrap_node_name,
oslo_messaging_notify_node_names: .oslo_messaging_notify_node_names,
oslo_messaging_rpc_node_names: .oslo_messaging_rpc_node_names,
memcached_node_ips: .memcached_node_ips}';\
echo $GLOBALCFG | jq '.output_value') |\
jq -s '.[0] * .[1]| {"parameter_defaults":
{"AllNodesExtraMapData": .}}' > $DIR/all-nodes-extra-map-data.json
An example of an ``all-nodes-extra-map-data.json`` file::
{
  "parameter_defaults": {
    "AllNodesExtraMapData": {
      "oslo_messaging_notify_short_bootstrap_node_name": "overcloud-controller-0",
      "oslo_messaging_notify_node_names": [
        "overcloud-controller-0.internalapi.site1.test",
        "overcloud-controller-1.internalapi.site1.test",
        "overcloud-controller-2.internalapi.site1.test"
      ],
      "oslo_messaging_rpc_node_names": [
        "overcloud-controller-0.internalapi.site1.test",
        "overcloud-controller-1.internalapi.site1.test",
        "overcloud-controller-2.internalapi.site1.test"
      ],
      "memcached_node_ips": [
        "172.16.2.232",
        "172.16.2.29",
        "172.16.2.49"
      ],
      "oslo_messaging_rpc_port": 5672,
      "oslo_messaging_rpc_use_ssl": "False",
      "oslo_messaging_notify_scheme": "rabbit",
      "oslo_messaging_notify_use_ssl": "False",
      "oslo_messaging_rpc_scheme": "rabbit",
      "oslo_messaging_rpc_password": "7l4lfamjPp6nqJgBMqb1YyM2I",
      "oslo_messaging_notify_password": "7l4lfamjPp6nqJgBMqb1YyM2I",
      "oslo_messaging_rpc_user_name": "guest",
      "oslo_messaging_notify_port": 5672,
      "oslo_messaging_notify_user_name": "guest"
    }
  }
}
#. Export passwords
.. code::
openstack object save --file - overcloud plan-environment.yaml \
| python -c 'import yaml as y, sys as s; \
s.stdout.write(y.dump({"parameter_defaults": \
y.load(s.stdin.read())["passwords"]}));' > $DIR/passwords.yaml
The same passwords are used for the cell services.
#. Create roles file for cell stack
.. code::
openstack overcloud roles generate --roles-path \
/usr/share/openstack-tripleo-heat-templates/roles \
-o $DIR/cell_roles_data.yaml Compute CellController
.. note::
In case a different default heat stack name or compute role name is used,
modify the above commands.
#. Create cell parameter file for additional customization (e.g. cell1/cell1.yaml)
Add the following content into a parameter file for the cell, e.g. ``cell1/cell1.yaml``::
resource_registry:
  # since the same network is used, the creation of the
  # different kind of networks is omitted for additional
  # cells
  OS::TripleO::Network::External: OS::Heat::None
  OS::TripleO::Network::InternalApi: OS::Heat::None
  OS::TripleO::Network::Storage: OS::Heat::None
  OS::TripleO::Network::StorageMgmt: OS::Heat::None
  OS::TripleO::Network::Tenant: OS::Heat::None
  OS::TripleO::Network::Management: OS::Heat::None
parameter_defaults:
  # new CELL Parameter to reflect that this is an additional CELL
  NovaAdditionalCell: True
  # The DNS names for the VIPs for the cell
  CloudName: computecell1.ooo.test
  CloudNameInternal: computecell1.internalapi.ooo.test
  CloudNameStorage: computecell1.storage.ooo.test
  CloudNameStorageManagement: computecell1.storagemgmt.ooo.test
  CloudNameCtlplane: computecell1.ctlplane.ooo.test
  # Flavors used for the cell controller and computes
  OvercloudCellControllerFlavor: cellcontroller
  OvercloudComputeFlavor: compute
  # number of controllers/computes in the cell
  CellControllerCount: 1
  ComputeCount: 1
  # default gateway
  ControlPlaneStaticRoutes:
    - ip_netmask: 0.0.0.0/0
      next_hop: 192.168.24.1
      default: true
  DnsServers:
    - x.x.x.x
The above file disables creating networks, as the same networks created by the
overcloud stack are used. It also specifies that this will be an additional cell
using the parameter `NovaAdditionalCell`.
#. Create the network configuration for `cellcontroller` and add to environment file.
.. code::
resource_registry:
  OS::TripleO::BlockStorage::Net::SoftwareConfig: three-nics-vlans/cinder-storage.yaml
  OS::TripleO::CephStorage::Net::SoftwareConfig: three-nics-vlans/ceph-storage.yaml
  OS::TripleO::Compute::Net::SoftwareConfig: three-nics-vlans/compute.yaml
  OS::TripleO::Controller::Net::SoftwareConfig: three-nics-vlans/controller.yaml
  OS::TripleO::CellController::Net::SoftwareConfig: three-nics-vlans/cellcontroller.yaml
  OS::TripleO::ObjectStorage::Net::SoftwareConfig: three-nics-vlans/swift-storage.yaml
.. note::
For details on network configuration consult :ref:`network_isolation` guide, chapter *Customizing the Interface Templates*.
Deploy the cell
---------------
#. Create new flavor used to tag the cell controller
.. code::
openstack flavor create --id auto --ram 4096 --disk 40 --vcpus 1 cellcontroller
openstack flavor set --property "cpu_arch"="x86_64" \
--property "capabilities:boot_option"="local" \
--property "capabilities:profile"="cellcontroller" \
--property "resources:CUSTOM_BAREMETAL=1" \
--property "resources:DISK_GB=0" \
--property "resources:MEMORY_MB=0" \
--property "resources:VCPU=0" \
cellcontroller
The properties need to be adjusted to the needs of the environment.
#. Tag node into the new flavor using the following command
.. code::
openstack baremetal node set --property \
capabilities='profile:cellcontroller,boot_option:local' <node id>
Verify the tagged cellcontroller::
openstack overcloud profiles list
#. Deploy the cell
To deploy the overcloud we use the same ``overcloud deploy`` command as
was used to deploy the ``overcloud`` stack and add the created export
environment files::
openstack overcloud deploy --override-ansible-cfg \
/home/stack/custom_ansible.cfg \
--stack computecell1 \
--templates /usr/share/openstack-tripleo-heat-templates \
-e ... additional environment files used for overcloud stack, like container
prepare parameters, or other specific parameters for the cell
...
-r $HOME/$DIR/cell_roles_data.yaml \
-e $HOME/$DIR/passwords.yaml \
-e $HOME/$DIR/endpoint-map.json \
-e $HOME/$DIR/all-nodes-extra-map-data.json \
-e $HOME/$DIR/extra-host-file-entries.json \
-e $HOME/$DIR/cell1.yaml
Wait for the deployment to finish::
openstack stack list
+--------------------------------------+--------------+----------------------------------+-----------------+----------------------+----------------------+
| ID | Stack Name | Project | Stack Status | Creation Time | Updated Time |
+--------------------------------------+--------------+----------------------------------+-----------------+----------------------+----------------------+
| 890e4764-1606-4dab-9c2f-6ed853e3fed8 | computecell1 | 2b303a97f4664a69ba2dbcfd723e76a4 | CREATE_COMPLETE | 2019-02-12T08:35:32Z | None |
| 09531653-1074-4568-b50a-48a7b3cc15a6 | overcloud | 2b303a97f4664a69ba2dbcfd723e76a4 | UPDATE_COMPLETE | 2019-02-09T09:52:56Z | 2019-02-11T08:33:37Z |
+--------------------------------------+--------------+----------------------------------+-----------------+----------------------+----------------------+
Create the cell and discover compute nodes
------------------------------------------
#. Add cell information to overcloud controllers
On all central controllers add information on how to reach the messaging endpoint
of the cell controller (usually internalapi) to ``/etc/hosts``. Run from the undercloud::
API_INFO=$(ssh heat-admin@<cell controller ip> grep cellcontrol-0.internalapi /etc/hosts)
ansible -i /usr/bin/tripleo-ansible-inventory Controller -b \
-m lineinfile -a "dest=/etc/hosts line=\"$API_INFO\""
.. note::
Do this outside the ``HEAT_HOSTS_START`` .. ``HEAT_HOSTS_END`` block, or
add it to an `ExtraHostFileEntries` section of an environment file for the
central overcloud controller. Add the environment file to the next
`overcloud deploy` run.
#. Extract transport_url and database connection
Get the ``transport_url`` and database ``connection`` endpoint information
from the cell controller. This information is used to create the cell in the
next step::
ssh heat-admin@<cell controller ip> sudo crudini --get \
/var/lib/config-data/nova/etc/nova/nova.conf DEFAULT transport_url
ssh heat-admin@<cell controller ip> sudo crudini --get \
/var/lib/config-data/nova/etc/nova/nova.conf database connection
#. Create the cell
Login to one of the central controllers and create the cell with reference to
the IP of the cell controller in the ``database_connection`` and the
``transport_url`` extracted from the previous step, like::
ssh heat-admin@<ctlplane ip overcloud-controller-0>
# CONTAINERCLI can be either docker or podman
export CONTAINERCLI='docker'
sudo $CONTAINERCLI exec -it -u root nova_api /bin/bash
nova-manage cell_v2 create_cell --name computecell1 \
--database_connection \
'{scheme}://{username}:{password}@172.16.2.102/nova?{query}' \
--transport-url \
'rabbit://guest:7l4lfamjPp6nqJgBMqb1YyM2I@computecell1-cellcontrol-0.internalapi.cell1.test:5672/?ssl=0'
.. note::
Templated transport URLs could be used if the same number of controllers
are in the default and the additional cell.
Verify that the cell got created:
.. code::
nova-manage cell_v2 list_cells --verbose
After the cell has been created, the nova services on all central controllers need
to be restarted.
Docker::
ansible -i /usr/bin/tripleo-ansible-inventory Controller -b -a \
"docker restart nova_api nova_scheduler nova_conductor"
Podman::
ansible -i /usr/bin/tripleo-ansible-inventory Controller -b -a \
"systemctl restart tripleo_nova_api tripleo_nova_conductor tripleo_nova_scheduler"
#. Perform cell host discovery
Login to one of the overcloud controllers and run the cell host discovery::
ssh heat-admin@<ctlplane ip overcloud-controller-0>
# CONTAINERCLI can be either docker or podman
export CONTAINERCLI='docker'
sudo $CONTAINERCLI exec -it -u root nova_api /bin/bash
nova-manage cell_v2 discover_hosts --by-service --verbose
nova-manage cell_v2 list_hosts
+--------------+--------------------------------------+---------------------------------------+
|  Cell Name   |              Cell UUID               |                Hostname               |
+--------------+--------------------------------------+---------------------------------------+
| computecell1 | 97bb4ee9-7fe9-4ec7-af0d-72b8ef843e3e | computecell1-novacompute-0.site1.test |
| default      | f012b67d-de96-471d-a44f-74e4a6783bca | overcloud-novacompute-0.site1.test    |
+--------------+--------------------------------------+---------------------------------------+
The cell is now deployed and can be used.
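As a quick sanity check the compute services of both cells should show up in the
service list; a sketch (output differs per environment)::
    source overcloudrc
    openstack compute service list --service nova-compute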