Merge "EOL Queens and Stein. Added repos for Wallaby"

commit ca32d6130d
@@ -12,9 +12,8 @@ configuration with Ansible in TripleO.

Summary
-------
-Starting with the Queens release, it is possible to use Ansible to apply the
-overcloud configuration. Since the Rocky release, this method is the new default
-behavior.
+Since the Queens release, it has been possible to use Ansible to apply the
+overcloud configuration and with the Rocky release it became the default.

Ansible is used to replace the communication and transport of the software
configuration deployment data between Heat and the Heat agent
@@ -2,9 +2,9 @@

Ansible config-download differences
===================================
-Starting with the Queens release, it is possible to use Ansible to apply the
-overcloud configuration. Since the Rocky release, this method is the new default
-behavior.
+With the Queens release, it became possible to use Ansible to apply the
+overcloud configuration and this method became the default behavior with
+the Rocky release.

The feature is fully documented at
:doc:`ansible_config_download`, while this page details
@@ -17,7 +17,7 @@ Containers runtime deployment and configuration notes
TripleO has transitioned to the `podman`_ container runtime. Podman does not
use a persistent daemon to manage containers. TripleO wraps the container
service execution in systemd managed services. These services are named
-tripleo_<container name>. Prior to Stein TripleO deployed the containers
+tripleo_<container name>. Prior to Stein, TripleO deployed the containers
runtime and image components from the docker packages. The installed components
include the docker daemon system service and `OCI`_ compliant `Moby`_ and
`Containerd`_ - the building blocks for the container system.
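Since each container service is wrapped in a systemd unit named ``tripleo_<container name>``, the unit name can be derived directly from the container name. A minimal sketch (the ``keystone`` container name here is a hypothetical example, not taken from this commit):

```shell
# Derive the systemd unit name from a container name, per the
# tripleo_<container name> convention described above.
# "keystone" is a hypothetical container name used for illustration.
container="keystone"
unit="tripleo_${container}.service"
echo "${unit}"
# → tripleo_keystone.service
# On a deployed node one would then inspect the wrapper unit with, e.g.:
#   sudo systemctl status tripleo_keystone.service
```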
@@ -115,7 +115,7 @@ refer to the images in `push_destination` instead of `namespace`.

Prior to Stein, Docker Registry v2 (provided by "Docker
Distribution" package), was the service running on tcp 8787.
-In Stein it has been replaced with an Apache vhost called
+Since Stein it has been replaced with an Apache vhost called
"image-serve", which serves the containers on tcp 8787 and
supports podman or buildah pull commands. Though podman or buildah
tag, push, and commit commands are not supported, they are not
@@ -180,13 +180,7 @@
.. note::
   It's possible to enable verbose logging with the ``--verbose`` option.

-.. note::
-   To install a deprecated instack undercloud, you'll need to deploy
-   with the ``--use-heat=False`` option. It only works in Rocky,
-   as instack-undercloud was retired in Stein.
-
-In Rocky, we will run all the OpenStack services in a moby container runtime
+Since Rocky, we run all the OpenStack services in a moby container runtime
unless the default settings are overwritten.
This command requires 2 services to be running at all times. The first one is a
basic keystone service, which is currently executed by `tripleoclient` itself, the
@@ -165,21 +165,6 @@ created on the undercloud, one should use a non-root user.

         export STABLE_RELEASE="train"

-   .. admonition:: Stein
-      :class: stein
-
-      ::
-
-         export STABLE_RELEASE="stein"
-
-   .. admonition:: Queens
-      :class: queens
-
-      ::
-
-         export STABLE_RELEASE="queens"
-
-
#. Build the required images:
@ -1,8 +1,8 @@
|
|||
Containers based Overcloud Deployment
|
||||
======================================
|
||||
|
||||
This documentation explains how to deploy a fully containerized overcloud on
|
||||
Docker. This feature is now the default in Queens.
|
||||
This documentation explains how to deploy a fully containerized overcloud
|
||||
utilizing Podman which is the default since the Stein releasee.
|
||||
|
||||
The requirements for a containerized overcloud are the same as for any other
|
||||
overcloud deployment. The real difference is in where the overcloud services
|
||||
|
@ -11,7 +11,7 @@ will be deployed (containers vs base OS).
|
|||
Architecture
|
||||
------------
|
||||
|
||||
The docker-based overcloud architecture is not very different from the
|
||||
The container-based overcloud architecture is not very different from the
|
||||
baremetal/VM based one. The services deployed in the traditional baremetal
|
||||
overcloud are also deployed in the docker-based one.
|
||||
|
||||
|
|
|
@@ -39,7 +39,7 @@ monitor the output of the command below::

   $ watch -n 0.5 sudo podman ps -a --filter label=managed_by=tripleo_ansible

-.. admonition:: Stein and Train
+.. admonition:: Train
   :class: stable

   ::

@@ -55,7 +55,7 @@ You can view the output of the main process running in a container by running::

   $ sudo podman logs $CONTAINER_ID_OR_NAME

-From Stein release, standard out and standard error from containers are
+Since the Stein release, standard out and standard error from containers are
captured in `/var/log/containers/stdouts`.

We export traditional logs from containers into the `/var/log/containers`
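As a sketch of where those captured streams end up, the per-container log path can be built from the container name (the ``nova_api`` container name is a hypothetical example):

```shell
# Build the path of a container's captured stdout/stderr log under
# /var/log/containers/stdouts, as described above.
# "nova_api" is a hypothetical container name used for illustration.
container="nova_api"
logfile="/var/log/containers/stdouts/${container}.log"
echo "${logfile}"
# → /var/log/containers/stdouts/nova_api.log
# On a deployed node one could then follow it with:
#   sudo tail -f /var/log/containers/stdouts/nova_api.log
```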
@@ -292,8 +292,8 @@ explanation of similarities and differences between the two types.

Hardware types are enabled in the ``undercloud.conf`` using the
``enabled_hardware_types`` configuration option. Classic drivers are enabled
-using the ``enabled_drivers`` option. It is deprecated in the Queens release
-cycle and should no longer be used. See the `hardware types migration guide`_
+using the ``enabled_drivers`` option. It has been deprecated since the Queens
+release and should no longer be used. See the `hardware types migration guide`_
for information on how to migrate existing nodes.

Both hardware types and classic drivers can be equally used in the
@@ -121,8 +121,8 @@ in an environment file:
.. admonition:: Stable Branches
   :class: stable

-   The ``IronicEnabledDrivers`` option can also be used before the Queens
-   release. It sets the list of enabled classic drivers. The most often used
+   The ``IronicEnabledDrivers`` option can also be used for releases prior
+   to Queens. It sets the list of enabled classic drivers. The most often used
   bare metal driver is ``pxe_ipmitool``. Also enabled by default are
   ``pxe_ilo`` and ``pxe_drac`` drivers.
@@ -141,25 +141,11 @@ in an environment file:
   :class: stable

   ``NovaSchedulerDefaultFilters`` configures available scheduler filters.
-   Before the Stein release the ``AggregateInstanceExtraSpecsFilter`` could be
+   Before the Stein release, the ``AggregateInstanceExtraSpecsFilter`` could be
   used to separate flavors targeting virtual and bare metal instances.
-   Starting with the Stein release a flavor can only target one of them, so
+   Starting with the Stein release, a flavor can only target one of them, so
   no additional actions are needed.

-   * In the Pike, Queens and Rocky releases you can use the following filters::
-
-        parameter_defaults:
-          NovaSchedulerDefaultFilters:
-            - RetryFilter
-            - AggregateInstanceExtraSpecsFilter
-            - AvailabilityZoneFilter
-            - ComputeFilter
-            - ComputeCapabilitiesFilter
-            - ImagePropertiesFilter
-
   Alternatively, you can skip adding ``cpus`` and ``memory_mb`` to your bare
   metal nodes. This will make the virtual flavors skip bare metal nodes.

Additional configuration
~~~~~~~~~~~~~~~~~~~~~~~~
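A sketch of that alternative: clearing the ``cpus`` and ``memory_mb`` properties from a bare metal node so that virtual flavors no longer match it. The node name is hypothetical, and the command is only composed and echoed here; on a real cloud it would be run directly:

```shell
# Compose the command that removes the scheduling properties from a node,
# per the "skip adding cpus and memory_mb" alternative described above.
# "node-0" is a hypothetical node name used for illustration.
node="node-0"
cmd="openstack baremetal node unset --property cpus --property memory_mb ${node}"
echo "${cmd}"
```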
@@ -336,7 +336,7 @@ setting custom values for this parameter::
.. admonition:: ceph-ansible 4.0 and newer
   :class: ceph

-   Stein's default Ceph is Nautilus, which introduced the Messenger v2 protocol.
+   Stein's default Ceph was Nautilus, which introduced the Messenger v2 protocol.
   ceph-ansible 4.0 and newer added a parameter in order to:

   * enable or disable the v1 protocol
@@ -910,7 +910,7 @@ Run a Ceph validation with command like the following::

   ansible-playbook -i inventory $BASE/validation-playbooks/ceph-ansible-installed.yaml

-For Stein and newer it is possible to run validations using the
+For Stein and newer, it is possible to run validations using the
`openstack tripleo validator run` command with a syntax like the
following::
@@ -187,8 +187,8 @@ The steps to define your custom networks are:

   .. admonition:: Ussuri and prior releases

-      Prior to Queens the nic config templates are not dynamically generated,
-      so it is necessary to copy those that are in use, and add parameters for
+      Prior to Queens, the nic config templates were not dynamically generated,
+      so it was necessary to copy those that were in use, and add parameters for
      any additional networks, for example::

         cp -r /usr/share/openstack-tripleo-heat-templates/network/config/single-nic-vlans custom-single-nic-vlans
@@ -199,7 +199,7 @@ The steps to define your custom networks are:
   ``custom_network_data.yaml``.

.. note::
-   In Queens and later the NIC config templates are dynamically
+   Since Queens, the NIC config templates are dynamically
   generated so this step is only necessary when creating custom NIC
   config templates, not when just adding a custom network.
@@ -9,7 +9,7 @@ overcloud, or already have installed an overcloud (min Stein release).

.. note::

-   Starting with CentOS 8 and TripleO Stein release, podman is the CONTAINERCLI
+   Starting with CentOS 8 and the TripleO Stein release, podman is the CONTAINERCLI
   to be used in the following steps.

The minimum requirement for having multiple cells is to have a central OpenStack
@@ -23,7 +23,6 @@ For more details on the cells v2 layout check `Cells Layout (v2)

.. toctree::

-   deploy_cellv2_stein.rst
   deploy_cellv2_basic.rst
   deploy_cellv2_advanced.rst
   deploy_cellv2_routed.rst
@@ -14,7 +14,7 @@ already installed an overcloud (min Train release).

.. note::

-   Starting with CentOS 8 and TripleO Stein release, podman is the CONTAINERCLI
+   Starting with CentOS 8 and the TripleO Stein release, podman is the CONTAINERCLI
   to be used in the following steps.

.. _advanced_cell_arch:
@@ -14,7 +14,7 @@ already installed an overcloud (min Train release).

.. note::

-   Starting with CentOS 8 and TripleO Stein release, podman is the CONTAINERCLI
+   Starting with CentOS 8 and the TripleO Stein release, podman is the CONTAINERCLI
   to be used in the following steps.

.. _basic_cell_arch:
@@ -14,7 +14,7 @@ already installed an overcloud (min Train release).

.. note::

-   Starting with CentOS 8 and TripleO Stein release, podman is the CONTAINERCLI
+   Starting with CentOS 8 and the TripleO Stein release, podman is the CONTAINERCLI
   to be used in the following steps.

In this example we use the :doc:`deploy_cellv2_advanced` using a routed spine and
@@ -1,388 +0,0 @@
Deploy an additional nova cell v2 in Stein release
==================================================

.. warning::
   Multi cell support is only supported in Stein or later versions.
   This guide addresses only the Stein release!

.. contents::
   :depth: 3
   :backlinks: none

This guide assumes that you are ready to deploy a new overcloud, or have
already installed an overcloud (min Stein release).

.. note::

   Starting with CentOS 8 and the TripleO Stein release, podman is the CONTAINERCLI
   to be used in the following steps.

Initial Deploy
--------------

.. note::

   Right now the current implementation does not support running nova metadata
   API per cell as explained in the cells v2 layout section `Local per cell
   <https://docs.openstack.org/nova/latest/user/cellsv2-layout.html#nova-metadata-api-service>`_

The following example uses six nodes and the split control plane method to
simulate a distributed cell deployment. The first Heat stack deploys a controller
cluster and a compute. The second Heat stack deploys a cell controller and a
compute node::
    openstack overcloud status
    +-----------+---------------------+---------------------+-------------------+
    | Plan Name | Created             | Updated             | Deployment Status |
    +-----------+---------------------+---------------------+-------------------+
    | overcloud | 2019-02-12 09:00:27 | 2019-02-12 09:00:27 | DEPLOY_SUCCESS    |
    +-----------+---------------------+---------------------+-------------------+

    openstack server list -c Name -c Status -c Networks
    +----------------------------+--------+------------------------+
    | Name                       | Status | Networks               |
    +----------------------------+--------+------------------------+
    | overcloud-controller-1     | ACTIVE | ctlplane=192.168.24.19 |
    | overcloud-controller-2     | ACTIVE | ctlplane=192.168.24.11 |
    | overcloud-controller-0     | ACTIVE | ctlplane=192.168.24.29 |
    | overcloud-novacompute-0    | ACTIVE | ctlplane=192.168.24.15 |
    +----------------------------+--------+------------------------+

.. note::

   In this example the default cell and the additional cell use the
   same network. When configuring another network scenario, keep in
   mind that it will be necessary for the systems to be able to
   communicate with each other.
Extract deployment information from the overcloud stack
-------------------------------------------------------

Any additional cell stack requires information from the overcloud Heat stack
where the central OpenStack services are located. The extracted parameters are
needed as input for additional cell stacks. To extract these parameters
into separate files in a directory (e.g. DIR=cell1) run the following::

    source stackrc
    mkdir cell1
    export DIR=cell1

#. Export the default cell EndpointMap

   .. code::

      openstack stack output show overcloud EndpointMap --format json \
      | jq '{"parameter_defaults": {"EndpointMapOverride": .output_value}}' \
      > $DIR/endpoint-map.json

#. Export the default cell HostsEntry

   .. code::

      openstack stack output show overcloud HostsEntry -f json \
      | jq -r '{"parameter_defaults":{"ExtraHostFileEntries": .output_value}}' \
      > $DIR/extra-host-file-entries.json
#. Export AllNodesConfig and GlobalConfig information

   In addition to the ``GlobalConfig``, which contains the RPC information (port,
   ssl, scheme, user and password), additional information from the ``AllNodesConfig``
   is required to point components to the default cell service instead of the
   service served by the cell controller. These are

   * ``oslo_messaging_notify_short_bootstrap_node_name`` - default cell overcloud
     messaging notify bootstrap node information
   * ``oslo_messaging_notify_node_names`` - default cell overcloud messaging notify
     node information
   * ``oslo_messaging_rpc_node_names`` - default cell overcloud messaging rpc node
     information as e.g. neutron agent needs to point to the overcloud messaging
     cluster
   * ``memcached_node_ips`` - memcached node information used by the cell services.

   .. code::

      ALLNODESCFG=$(openstack stack output show overcloud AllNodesConfig --format json)
      GLOBALCFG=$(openstack stack output show overcloud GlobalConfig --format json)
      (echo $ALLNODESCFG | jq '.output_value |
      {oslo_messaging_notify_short_bootstrap_node_name:
      .oslo_messaging_notify_short_bootstrap_node_name,
      oslo_messaging_notify_node_names: .oslo_messaging_notify_node_names,
      oslo_messaging_rpc_node_names: .oslo_messaging_rpc_node_names,
      memcached_node_ips: .memcached_node_ips}';\
      echo $GLOBALCFG | jq '.output_value') |\
      jq -s '.[0] * .[1]| {"parameter_defaults":
      {"AllNodesExtraMapData": .}}' > $DIR/all-nodes-extra-map-data.json
An example of an ``all-nodes-extra-map-data.json`` file::

    {
      "parameter_defaults": {
        "AllNodesExtraMapData": {
          "oslo_messaging_notify_short_bootstrap_node_name": "overcloud-controller-0",
          "oslo_messaging_notify_node_names": [
            "overcloud-controller-0.internalapi.site1.test",
            "overcloud-controller-1.internalapi.site1.test",
            "overcloud-controller-2.internalapi.site1.test"
          ],
          "oslo_messaging_rpc_node_names": [
            "overcloud-controller-0.internalapi.site1.test",
            "overcloud-controller-1.internalapi.site1.test",
            "overcloud-controller-2.internalapi.site1.test"
          ],
          "memcached_node_ips": [
            "172.16.2.232",
            "172.16.2.29",
            "172.16.2.49"
          ],
          "oslo_messaging_rpc_port": 5672,
          "oslo_messaging_rpc_use_ssl": "False",
          "oslo_messaging_notify_scheme": "rabbit",
          "oslo_messaging_notify_use_ssl": "False",
          "oslo_messaging_rpc_scheme": "rabbit",
          "oslo_messaging_rpc_password": "7l4lfamjPp6nqJgBMqb1YyM2I",
          "oslo_messaging_notify_password": "7l4lfamjPp6nqJgBMqb1YyM2I",
          "oslo_messaging_rpc_user_name": "guest",
          "oslo_messaging_notify_port": 5672,
          "oslo_messaging_notify_user_name": "guest"
        }
      }
    }
#. Export passwords

   .. code::

      openstack object save --file - overcloud plan-environment.yaml \
      | python -c 'import yaml as y, sys as s; \
      s.stdout.write(y.dump({"parameter_defaults": \
      y.load(s.stdin.read())["passwords"]}));' > $DIR/passwords.yaml

   The same passwords are used for the cell services.

#. Create roles file for cell stack

   .. code::

      openstack overcloud roles generate --roles-path \
      /usr/share/openstack-tripleo-heat-templates/roles \
      -o $DIR/cell_roles_data.yaml Compute CellController

   .. note::

      In case a different default heat stack name or compute role name is used,
      modify the above commands.

#. Create cell parameter file for additional customization (e.g. cell1/cell1.yaml)

   Add the following content into a parameter file for the cell, e.g. ``cell1/cell1.yaml``::
      resource_registry:
        # since the same network is used, the creation of the
        # different kind of networks is omitted for additional
        # cells
        OS::TripleO::Network::External: OS::Heat::None
        OS::TripleO::Network::InternalApi: OS::Heat::None
        OS::TripleO::Network::Storage: OS::Heat::None
        OS::TripleO::Network::StorageMgmt: OS::Heat::None
        OS::TripleO::Network::Tenant: OS::Heat::None
        OS::TripleO::Network::Management: OS::Heat::None

      parameter_defaults:
        # new CELL Parameter to reflect that this is an additional CELL
        NovaAdditionalCell: True

        # The DNS names for the VIPs for the cell
        CloudName: computecell1.ooo.test
        CloudNameInternal: computecell1.internalapi.ooo.test
        CloudNameStorage: computecell1.storage.ooo.test
        CloudNameStorageManagement: computecell1.storagemgmt.ooo.test
        CloudNameCtlplane: computecell1.ctlplane.ooo.test

        # Flavors used for the cell controller and computes
        OvercloudCellControllerFlavor: cellcontroller
        OvercloudComputeFlavor: compute

        # number of controllers/computes in the cell
        CellControllerCount: 1
        ComputeCount: 1

        # default gateway
        ControlPlaneStaticRoutes:
          - ip_netmask: 0.0.0.0/0
            next_hop: 192.168.24.1
            default: true
        DnsServers:
          - x.x.x.x

   The above file disables creating networks, as the same networks created by
   the overcloud stack are used. It also specifies that this will be an
   additional cell using the parameter `NovaAdditionalCell`.
#. Create the network configuration for `cellcontroller` and add to environment file.

   .. code::

      resource_registry:
        OS::TripleO::BlockStorage::Net::SoftwareConfig: three-nics-vlans/cinder-storage.yaml
        OS::TripleO::CephStorage::Net::SoftwareConfig: three-nics-vlans/ceph-storage.yaml
        OS::TripleO::Compute::Net::SoftwareConfig: three-nics-vlans/compute.yaml
        OS::TripleO::Controller::Net::SoftwareConfig: three-nics-vlans/controller.yaml
        OS::TripleO::CellController::Net::SoftwareConfig: three-nics-vlans/cellcontroller.yaml
        OS::TripleO::ObjectStorage::Net::SoftwareConfig: three-nics-vlans/swift-storage.yaml

   .. note::

      For details on network configuration consult the :ref:`network_isolation` guide, chapter *Customizing the Interface Templates*.

Deploy the cell
---------------
#. Create new flavor used to tag the cell controller

   .. code::

      openstack flavor create --id auto --ram 4096 --disk 40 --vcpus 1 cellcontroller
      openstack flavor set --property "cpu_arch"="x86_64" \
          --property "capabilities:boot_option"="local" \
          --property "capabilities:profile"="cellcontroller" \
          --property "resources:CUSTOM_BAREMETAL=1" \
          --property "resources:DISK_GB=0" \
          --property "resources:MEMORY_MB=0" \
          --property "resources:VCPU=0" \
          cellcontroller

   The properties need to be modified to the needs of the environment.

#. Tag node into the new flavor using the following command

   .. code::

      openstack baremetal node set --property \
          capabilities='profile:cellcontroller,boot_option:local' <node id>

   Verify the tagged cellcontroller::

      openstack overcloud profiles list

#. Deploy the cell

   To deploy the overcloud we can use the same ``overcloud deploy`` command as
   was used to deploy the ``overcloud`` stack and add the created export
   environment files::
      openstack overcloud deploy --override-ansible-cfg \
          /home/stack/custom_ansible.cfg \
          --stack computecell1 \
          --templates /usr/share/openstack-tripleo-heat-templates \
          -e ... additional environment files used for overcloud stack, like container
            prepare parameters, or other specific parameters for the cell
          ...
          -r $HOME/$DIR/cell_roles_data.yaml \
          -e $HOME/$DIR/passwords.yaml \
          -e $HOME/$DIR/endpoint-map.json \
          -e $HOME/$DIR/all-nodes-extra-map-data.json \
          -e $HOME/$DIR/extra-host-file-entries.json \
          -e $HOME/$DIR/cell1.yaml

   Wait for the deployment to finish::

      openstack stack list
      +--------------------------------------+--------------+----------------------------------+-----------------+----------------------+----------------------+
      | ID                                   | Stack Name   | Project                          | Stack Status    | Creation Time        | Updated Time         |
      +--------------------------------------+--------------+----------------------------------+-----------------+----------------------+----------------------+
      | 890e4764-1606-4dab-9c2f-6ed853e3fed8 | computecell1 | 2b303a97f4664a69ba2dbcfd723e76a4 | CREATE_COMPLETE | 2019-02-12T08:35:32Z | None                 |
      | 09531653-1074-4568-b50a-48a7b3cc15a6 | overcloud    | 2b303a97f4664a69ba2dbcfd723e76a4 | UPDATE_COMPLETE | 2019-02-09T09:52:56Z | 2019-02-11T08:33:37Z |
      +--------------------------------------+--------------+----------------------------------+-----------------+----------------------+----------------------+
Create the cell and discover compute nodes
------------------------------------------

#. Add cell information to overcloud controllers

   On all central controllers add information on how to reach the messaging cell
   controller endpoint (usually internalapi) to ``/etc/hosts``, from the undercloud::

      API_INFO=$(ssh heat-admin@<cell controller ip> grep cellcontrol-0.internalapi /etc/hosts)
      ansible -i /usr/bin/tripleo-ansible-inventory Controller -b \
          -m lineinfile -a "dest=/etc/hosts line=\"$API_INFO\""

   .. note::

      Do this outside the ``HEAT_HOSTS_START`` .. ``HEAT_HOSTS_END`` block, or
      add it to an `ExtraHostFileEntries` section of an environment file for the
      central overcloud controller. Add the environment file to the next
      `overcloud deploy` run.

#. Extract transport_url and database connection

   Get the ``transport_url`` and database ``connection`` endpoint information
   from the cell controller. This information is used to create the cell in the
   next step::

      ssh heat-admin@<cell controller ip> sudo crudini --get \
          /var/lib/config-data/nova/etc/nova/nova.conf DEFAULT transport_url
      ssh heat-admin@<cell controller ip> sudo crudini --get \
          /var/lib/config-data/nova/etc/nova/nova.conf database connection
#. Create the cell

   Log in to one of the central controllers and create the cell with reference to
   the IP of the cell controller in the ``database_connection`` and the
   ``transport_url`` extracted from the previous step, like::

      ssh heat-admin@<ctlplane ip overcloud-controller-0>

      # CONTAINERCLI can be either docker or podman
      export CONTAINERCLI='docker'

      sudo $CONTAINERCLI exec -it -u root nova_api /bin/bash
      nova-manage cell_v2 create_cell --name computecell1 \
          --database_connection \
          '{scheme}://{username}:{password}@172.16.2.102/nova?{query}' \
          --transport-url \
          'rabbit://guest:7l4lfamjPp6nqJgBMqb1YyM2I@computecell1-cellcontrol-0.internalapi.cell1.test:5672/?ssl=0'

   .. note::

      Templated transport cell URLs can be used if the same number of controllers
      are in the default and the added cell.

   .. code::

      nova-manage cell_v2 list_cells --verbose

   After the cell is created, the nova services on all central controllers need to
   be restarted.

   Docker::

      ansible -i /usr/bin/tripleo-ansible-inventory Controller -b -a \
          "docker restart nova_api nova_scheduler nova_conductor"

   Podman::

      ansible -i /usr/bin/tripleo-ansible-inventory Controller -b -a \
          "systemctl restart tripleo_nova_api tripleo_nova_conductor tripleo_nova_scheduler"
#. Perform cell host discovery

   Log in to one of the overcloud controllers and run the cell host discovery::

      ssh heat-admin@<ctlplane ip overcloud-controller-0>

      # CONTAINERCLI can be either docker or podman
      export CONTAINERCLI='docker'

      sudo $CONTAINERCLI exec -it -u root nova_api /bin/bash
      nova-manage cell_v2 discover_hosts --by-service --verbose
      nova-manage cell_v2 list_hosts

      +--------------+--------------------------------------+---------------------------------------+
      | Cell Name    | Cell UUID                            | Hostname                              |
      +--------------+--------------------------------------+---------------------------------------+
      | computecell1 | 97bb4ee9-7fe9-4ec7-af0d-72b8ef843e3e | computecell1-novacompute-0.site1.test |
      | default      | f012b67d-de96-471d-a44f-74e4a6783bca | overcloud-novacompute-0.site1.test    |
      +--------------+--------------------------------------+---------------------------------------+

   The cell is now deployed and can be used.
@@ -3,7 +3,7 @@
Deploying with IPSec
====================

-As of the Queens release, it is possible to encrypt communications within the
+Since the Queens release, it is possible to encrypt communications within the
internal network by setting up IPSec tunnels configured by TripleO.

There are several options that TripleO provides deployers whose requirements call
|
@ -88,15 +88,6 @@ With this, your deployment command will be similar to this::
|
|||
-e /home/stack/templates/network-environment.yaml \
|
||||
-e /usr/share/openstack-tripleo-heat-templates/environments/ipsec.yaml
|
||||
|
||||
.. note:: For the Queens release, you need to specify the config-download
|
||||
related parameters yourself::
|
||||
|
||||
openstack overcloud deploy \
|
||||
...
|
||||
-e /usr/share/openstack-tripleo-heat-templates/environments/config-download-environment.yaml \
|
||||
--config-download \
|
||||
...
|
||||
|
||||
To change the default encryption algorithm, you can use an environment file
|
||||
that looks as follows::
|
||||
|
||||
|
|
|
@@ -253,7 +253,7 @@ deploying the overcloud.
Generate Templates from Jinja2
------------------------------

-With Queens cycle, the network configuration templates have been converted to
+With the Queens cycle, the network configuration templates have been converted to
Jinja2 templates, so that templates can be generated for each role with
customized network data. A utility script is available to generate the
templates based on the provided ``roles_data.yaml`` and ``network_data.yaml``
@@ -1090,6 +1090,6 @@ to a provider network if Neutron is to provide DHCP services to tenant VMs::


.. _tripleo-heat-templates: https://opendev.org/openstack/tripleo-heat-templates
-.. _default-network-isolation: https://opendev.org/openstack/tripleo-heat-templates/network-data-samples/default-network-isolation.yaml
-.. _network-data-samples: https://opendev.org/openstack/tripleo-heat-templates/network-data-samples
-.. _vip-data-samples: https://opendev.org/openstack/tripleo-heat-templates/network-data-samples
+.. _default-network-isolation: https://opendev.org/openstack/tripleo-heat-templates/src/branch/master/network-data-samples/default-network-isolation.yaml
+.. _network-data-samples: https://opendev.org/openstack/tripleo-heat-templates/src/branch/master/network-data-samples/
+.. _vip-data-samples: https://opendev.org/openstack/tripleo-heat-templates/src/branch/master/network-data-samples/
@@ -26,20 +26,10 @@ to the deployment command::
   <other cli args> \
   -e ~/rhsm.yaml

-.. note::
-   This feature requires config-download to be enabled, which wasn't the
-   case in Queens.
-   If you're deploying on this release, make sure you deploy with
-   config-download enabled::
-
-      -e /usr/share/openstack-tripleo-heat-templates/environments/config-download-environment.yaml \
-      --config-download
-
The ``rhsm.yaml`` environment enables mapping the OS::TripleO::Services::Rhsm to
the extraconfig service::

   resource_registry:
     # Before Train cycle, the file is in /usr/share/openstack-tripleo-heat-templates/extraconfig/services/rhsm.yaml
     OS::TripleO::Services::Rhsm: /usr/share/openstack-tripleo-heat-templates/deployment/rhsm/rhsm-baremetal-ansible.yaml
   parameter_defaults:
     RhsmVars:
@ -92,7 +92,7 @@ Database backups

The operator needs to back up all databases in the Undercloud node.

.. admonition:: Stein and Train
.. admonition:: Train
   :class: stable

   ::

@ -100,13 +100,6 @@ The operator needs to backup all databases in the Undercloud node

      /bin/hiera -c /etc/puppet/hiera.yaml mysql::server::root_password
      podman exec mysql bash -c "mysqldump -uroot -pPASSWORD --opt --all-databases" > /root/undercloud-all-databases.sql

.. admonition:: Queens
   :class: stable

   ::

      mysqldump --opt --single-transaction --all-databases > /root/undercloud-all-databases.sql
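The release-specific backup commands above differ only in whether the dump goes through the ``mysql`` container. A small sketch makes the split explicit; the ``backup_command`` helper is hypothetical (it only assembles the command strings shown above, nothing is executed), and ``PASSWORD`` is the placeholder from the documentation, not a real credential.

```python
# Sketch: assemble the undercloud database backup command per release
# stream. Purely illustrative -- the helper name is hypothetical and
# nothing is executed; it just reproduces the documented command lines.

def backup_command(release: str, password: str = "PASSWORD") -> str:
    """Return the mysqldump command line for the given release stream."""
    if release in ("stein", "train"):
        # Containerized database: dump through the mysql container.
        return ('podman exec mysql bash -c '
                f'"mysqldump -uroot -p{password} --opt --all-databases"'
                ' > /root/undercloud-all-databases.sql')
    # Queens: the database runs on the host, no container wrapper needed.
    return ('mysqldump --opt --single-transaction --all-databases'
            ' > /root/undercloud-all-databases.sql')

print(backup_command("train"))
print(backup_command("queens"))
```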

Filesystem backups
~~~~~~~~~~~~~~~~~~

@ -6,13 +6,13 @@ Updating Content on Overcloud Nodes

The update of overcloud packages and containers to the latest version
of the current release is referred to as the 'minor update' in TripleO
(distinguishing it from the 'major upgrade' to the next release). In
the Queens cycle the minor update workflow has changed compared to
the Queens cycle the minor update workflow was changed compared to
previous cycles. There are thus version specific sections below.

Updating your Overcloud - Queens and beyond
-------------------------------------------

The Queens release brings common CLI and workflow conventions to the
The Queens release brought common CLI and workflow conventions to the
main deployment lifecycle operations (minor updates, major upgrades,
and fast forward upgrades). This means that the minor update workflow
has changed compared to previous releases, and it should now be easier
@ -28,16 +28,6 @@ the OpenStack release that you currently operate, perform these steps:

   RPMs. If you use stable RDO repositories, you don't need to change
   anything.

   Update container image parameter files:

   .. admonition:: Queens
      :class: queens

      Fetch latest container images to your undercloud registry and
      generate a Heat environment file pointing to new container
      images. This is done via the workflow described in
      :doc:`containerized deployment documentation<../../deployment/overcloud>`.

#. **Update preparation**

   To prepare the overcloud for the update, run:
@ -52,7 +42,7 @@ the OpenStack release that you currently operate, perform these steps:

   used with previous `openstack overcloud deploy` command.

   The last argument `containers-prepare-parameter.yaml` differs in
   content depending on release. In Queens and before, it has a list
   content depending on release. In Queens and before, it was a list
   of individual container image parameters, pointing to images you've
   already uploaded to local registry in previous step. In Rocky and
   beyond, this file contains the ``ContainerImagePrepare`` parameter.
@ -109,45 +99,7 @@ the OpenStack release that you currently operate, perform these steps:

   If your environment includes Ceph managed by TripleO (i.e. *not*
   what TripleO calls "external Ceph"), you'll want to update Ceph at
   this point too. The procedure differs between Queens and newer
   releases:

   .. admonition:: Queens
      :class: queens

      Run:

      .. code-block:: bash

         openstack overcloud ceph-upgrade run <OPTIONS>

      In place of the `<OPTIONS>` token should go all parameters that you
      used with previous `openstack overcloud update prepare` command
      (including the new `-e container-params.yaml`).

      .. note::

         The `ceph-upgrade run` command performs a Heat stack update, and
         as such it should be passed all parameters currently used by the
         Heat stack (most notably environment files, role counts, roles
         data, and network data). This is crucial in order to keep
         correct state of the stack.

         The `ceph-upgrade run` command re-enables config management
         operations previously disabled by `update prepare`, and triggers
         the rolling update playbook of the Ceph installer (`ceph-ansible`).

   .. admonition:: Rocky
      :class: rocky

      Run:

      .. code-block:: bash

         openstack overcloud external-update run --tags ceph

      This will update Ceph by running the ceph-ansible installer with
      the update playbook.
   this point too.

#. **Update convergence**
@ -222,8 +174,8 @@ parameter::

.. admonition:: Stable Branch
   :class: stable

   The `--limit` was introduced in the Stein release. In previous versions,
   use `--nodes` or `--roles` parameters.
   The `--limit` was introduced in the Stein release; previous versions used
   the `--nodes` or `--roles` parameters.

You can specify a role name, e.g. 'Compute', to execute the minor update on
all nodes of that role in a rolling fashion (serial:1 is used on the playbooks).
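The rolling fashion mentioned above comes from Ansible's ``serial: 1`` setting: the update playbook runs to completion on one node before the next one starts. A small sketch of that batching logic (the ``rolling_batches`` helper is hypothetical, shown only to illustrate the semantics):

```python
# Sketch of the rolling-update semantics: with serial: 1 the update
# playbook touches one node at a time. The helper is illustrative, not
# part of TripleO; it mimics how Ansible's 'serial' keyword batches hosts.

def rolling_batches(nodes, serial=1):
    """Yield node batches the way Ansible's 'serial' keyword would."""
    for i in range(0, len(nodes), serial):
        yield nodes[i:i + serial]

compute_nodes = ["compute-0", "compute-1", "compute-2"]
for batch in rolling_batches(compute_nodes, serial=1):
    # Each batch must finish updating before the next batch starts,
    # so at most one Compute node is disrupted at a time.
    print(batch)
```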
@ -1,18 +1,13 @@

Extending overcloud nodes provisioning
======================================

Starting with the Queens release, the new *ansible* deploy interface is
Starting with the Queens release, the *ansible* deploy interface became
available in Ironic. Unlike the default `iSCSI deploy interface`_, it is
highly customizable through operator-provided Ansible playbooks. These
playbooks will run on the target image when Ironic boots the deploy ramdisk.

.. TODO(dtantsur): link to ansible interface docs when they merge

.. warning::
   The ansible deploy interface support in TripleO is a technical preview in
   the Queens release. This guide may change substantially as the feature
   is stabilizing.

.. note::
   This feature is not related to the ongoing work of switching overcloud
   configuration to Ansible.
@ -2,26 +2,12 @@

.. note::
   Python3 is required for Ussuri and newer releases of OpenStack, which are supported on RHEL 8
   and CentOS 8. Train is also recommended to be installed on RHEL 8 or CentOS 8. Earlier versions
   should be installed on RHEL 7 or CentOS 7 which support Python2.7.
   and CentOS 8. Train is also recommended to be installed on RHEL 8 or CentOS 8.

#. Download and install the python-tripleo-repos RPM from
   the appropriate RDO repository

   .. note::
      At this time, the Queens release is still active. However, Pike and Rocky are EOL and have
      been removed.

   .. admonition:: CentOS 7
      :class: centos7

      Current `Centos 7 RDO repository <https://trunk.rdoproject.org/centos7/current/>`_

      .. code-block:: bash

         sudo yum install -y https://trunk.rdoproject.org/centos7/current/python2-tripleo-repos-<version>.el7.centos.noarch.rpm

   .. admonition:: CentOS 8
   .. admonition:: CentOS 8 and CentOS Stream 8
      :class: centos8

      Current `Centos 8 RDO repository <https://trunk.rdoproject.org/centos8/component/tripleo/current/>`_.
@ -43,6 +29,24 @@

   Enable the appropriate repos for the desired release, as indicated below.
   Do not enable any other repos not explicitly marked for that release.

   .. admonition:: Wallaby
      :class: wallaby vtow

      Enable the current Wallaby repositories

      .. code-block:: bash

         sudo -E tripleo-repos -b wallaby current

      .. admonition:: Ceph
         :class: ceph

         Include the Ceph repo in the tripleo-repos call

         .. code-block:: bash

            sudo -E tripleo-repos -b wallaby current ceph
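The `tripleo-repos` invocations above follow one pattern per release: `-b <branch> current`, with `ceph` appended when the Ceph repo is needed. A small sketch that assembles that argv (the `tripleo_repos_cmd` helper is hypothetical and only builds the command shown in the admonitions, it does not run anything):

```python
# Sketch: compose the tripleo-repos command line for a given release
# branch, optionally including the ceph repo. Illustrative only -- the
# helper just mirrors the documented invocations, nothing is executed.

def tripleo_repos_cmd(branch: str, ceph: bool = False) -> list:
    """Build argv for 'sudo -E tripleo-repos -b <branch> current [ceph]'."""
    argv = ["sudo", "-E", "tripleo-repos", "-b", branch, "current"]
    if ceph:
        argv.append("ceph")  # also enable the matching Ceph repo
    return argv

print(" ".join(tripleo_repos_cmd("wallaby")))
print(" ".join(tripleo_repos_cmd("wallaby", ceph=True)))
```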

   .. admonition:: Victoria
      :class: victoria utov

@ -97,42 +101,6 @@

      sudo -E tripleo-repos -b train current ceph

   .. admonition:: Stein
      :class: stein rtos

      Enable the current Stein repositories

      .. code-block:: bash

         sudo -E tripleo-repos -b stein current

      .. admonition:: Ceph
         :class: ceph

         Include the Ceph repo in the tripleo-repos call

         .. code-block:: bash

            sudo -E tripleo-repos -b stein current ceph

   .. admonition:: Queens
      :class: queens ptoq

      Enable the current Queens repositories

      .. code-block:: bash

         sudo -E tripleo-repos -b queens current

      .. admonition:: Ceph
         :class: ceph

         Include the Ceph repo in the tripleo-repos call

         .. code-block:: bash

            sudo -E tripleo-repos -b queens current ceph

.. warning::

   The remaining repositories configuration steps below should not be done for