Merge "Fix typos in doc pages"

This commit is contained in:
Zuul 2022-11-14 03:43:22 +00:00 committed by Gerrit Code Review
commit a79631c6f4
42 changed files with 75 additions and 75 deletions

@@ -27,7 +27,7 @@
 color: rgba(0, 0, 0, 0.9) !important;
 }
-/* NOTES, ADMONITTIONS AND TAGS */
+/* NOTES, ADMONITIONS AND TAGS */
 .admonition {
 font-size: 85%; /* match code size */
 background: rgb(240, 240, 240);

@@ -216,7 +216,7 @@ Here are some of them:
 Tips and Tricks with tripleo_container_image_build
 ..................................................
-Here's a non-exaustive list of tips and tricks that might make things faster,
+Here's a non-exhaustive list of tips and tricks that might make things faster,
 especially on a dev env where you need to build multiple times the containers.
 Inject a caching proxy

@@ -144,7 +144,7 @@ with ``--override-ansible-cfg`` on the deployment command.
 For example the following command will use the configuration options from
 ``/home/stack/ansible.cfg``. Any options specified in the override file will
-take precendence over the defaults::
+take precedence over the defaults::
 openstack overcloud deploy \
 ...

@@ -87,7 +87,7 @@ that it can run all the steps:
 The environment files with the parameters and resource registry overrides
 required is automatically included when the ``overcloud deploy`` command is
-run with the arguments: ``--vip-file``, ``--baremetal-deployement`` and
+run with the arguments: ``--vip-file``, ``--baremetal-deployment`` and
 ``--network-config``.
 #. Run Config-Download and the deploy-steps playbook
@@ -369,7 +369,7 @@ Migrating existing deployments
 To facilitate the migration for deployed overclouds tripleoclient commands to
 extract information from deployed overcloud stacks has been added. During the
-upgrade to Wallaby these tools will be executed as part of the underlcoud
+upgrade to Wallaby these tools will be executed as part of the undercloud
 upgrade, placing the generated YAML definition files in the working directory
 (Defaults to: ``~/overcloud-deploy/$STACK_NAME/``). Below each export command
 is described, and examples to use them manually with the intent for developers
@@ -379,7 +379,7 @@ during the undercloud upgrade.
 There is also a tool ``convert_heat_nic_config_to_ansible_j2.py`` that can be
 used to convert heat template NIC config to Ansible j2 templates.
-.. warning:: If migrating to use Networking v2 while using the non-Epemeral
+.. warning:: If migrating to use Networking v2 while using the non-Ephemeral
 heat i.e ``--heat-type installed``, the existing overcloud stack
 must **first** be updated to set the ``deletion_policy`` for
 ``OS::Nova`` and ``OS::Neutron`` resources. This can be done
@@ -474,7 +474,7 @@ generate a Baremetal Provision definition YAML file from a deployed overcloud
 stack. The YAML definition file can then be used with ``openstack overcloud
 node provision`` command and/or the ``openstack overcloud deploy`` command.
-**Example**: Export deployed overcloud nodes to Baremtal Deployment YAML
+**Example**: Export deployed overcloud nodes to Baremetal Deployment YAML
 definition
 .. code-block:: bash

@@ -2,7 +2,7 @@ Containers based Overcloud Deployment
 ======================================
 This documentation explains how to deploy a fully containerized overcloud
-utilizing Podman which is the default since the Stein releasee.
+utilizing Podman which is the default since the Stein release.
 The requirements for a containerized overcloud are the same as for any other
 overcloud deployment. The real difference is in where the overcloud services

@@ -127,7 +127,7 @@ Deploying a Standalone OpenStack node
 be bound to by haproxy while the other by backend services on the same.
 .. Note: The following example utilizes 2 interfaces. NIC1 which will serve as
-the management inteface. It can have any address and will be left untouched.
+the management interface. It can have any address and will be left untouched.
 NIC2 will serve as the OpenStack & Provider network NIC. The following
 exports should be configured for your network and interface.
@@ -179,7 +179,7 @@ Deploying a Standalone OpenStack node
 floating IP.
 .. Note: NIC1 will serve as the management, OpenStack and Provider network
-inteface. The exports should be configured for your network and interface.
+interface. The exports should be configured for your network and interface.
 .. code-block:: bash

@@ -53,7 +53,7 @@ Preparing the Baremetal Environment
 Networking
 ^^^^^^^^^^
-The overcloud nodes will be deployed from the undercloud machine and therefore the machines need to have their network settings modified to allow for the overcloud nodes to be PXE boot'ed using the undercloud machine. As such, the setup requires that:
+The overcloud nodes will be deployed from the undercloud machine and therefore the machines need to have their network settings modified to allow for the overcloud nodes to be PXE booted using the undercloud machine. As such, the setup requires that:
 * All overcloud machines in the setup must support IPMI
 * A management provisioning network is setup for all of the overcloud machines.

@@ -217,7 +217,7 @@ overcloud to connect to an external ceph cluster:
 Deploying Manila with an External CephFS Service
 ------------------------------------------------
-If chosing to configure Manila with Ganesha as NFS gateway for CephFS,
+If choosing to configure Manila with Ganesha as NFS gateway for CephFS,
 with an external Ceph cluster, then add `environments/manila-cephfsganesha-config.yaml`
 to the list of environment files used to deploy the overcloud and also
 configure the following parameters::

@@ -72,12 +72,12 @@ Examples
 NovaPMEMNamespaces: "6G:ns1,6G:ns1,6G:ns2,100G:ns3"
-The following example will peform following steps:
+The following example will perform following steps:
 * ensure **ndctl** tool is installed on hosts with role **ComputePMEM**
 * create PMEM namespaces as specified in the **NovaPMEMNamespaces** parameter.
 - ns0, ns1, ns2 with size 6GiB
 - ns3 with size 100GiB
-* set Nova prameter **pmem_namespaces** in nova.conf to map create namespaces to vPMEM as specified in **NovaPMEMMappings**.
+* set Nova parameter **pmem_namespaces** in nova.conf to map create namespaces to vPMEM as specified in **NovaPMEMMappings**.
 In this example the label '6GB' will map to one of ns0, ns1 or ns2 namespace and the label 'LARGE' will map to ns3 namespace.
 After deployment you need to configure flavors as described in documentation `Nova: Configure a flavor <https://docs.openstack.org/nova/latest/admin/virtual-persistent-memory.html#configure-a-flavor>`_

@@ -384,7 +384,7 @@ Network data YAML options
 See `Options for network data YAML subnet definitions`_ for a list of all
 documented sub-options for the subnet definitions.
-type: *dictonary*
+type: *dictionary*
 Options for network data YAML subnet definitions
@@ -419,7 +419,7 @@ Options for network data YAML subnet definitions
 type: *list*
-elements: *dictonary*
+elements: *dictionary*
 :suboptions:
@@ -447,7 +447,7 @@ Options for network data YAML subnet definitions
 type: *list*
-elements: *dictonary*
+elements: *dictionary*
 :suboptions:
@@ -475,7 +475,7 @@ Options for network data YAML subnet definitions
 type: *list*
-elements: *dictonary*
+elements: *dictionary*
 :suboptions:
@@ -504,7 +504,7 @@ Options for network data YAML subnet definitions
 type: *list*
-elements: *dictonary*
+elements: *dictionary*
 :suboptions:

@@ -23,7 +23,7 @@ These roles can be listed using the `tripleoclient` by running::
 With these provided roles, the user deploying the overcloud can generate a
 `roles_data.yaml` file that contains the roles they would like to use for the
 overcloud nodes. Additionally the user can manage their personal custom roles
-in a similar manor by storing the individual files in a directory and using
+in a similar manner by storing the individual files in a directory and using
 the `tripleoclient` to generate their `roles_data.yaml`. For example, a user
 can execute the following to create a `roles_data.yaml` containing only the
 `Controller` and `Compute` roles::

@@ -283,7 +283,7 @@ by this ansible way.
 .. code-block:: bash
-export CONTAINERCLI=podman #choose appropriate contaier cli here
+export CONTAINERCLI=podman #choose appropriate container cli here
 source stackrc
 mkdir inventories
 for i in overcloud cell1; do \

@@ -20,7 +20,7 @@ the cell host discovery:
 # CONTAINERCLI can be either docker or podman
 export CONTAINERCLI='docker'
-# run cell host dicovery
+# run cell host discovery
 ssh heat-admin@${CTRL_IP} sudo ${CONTAINERCLI} exec -i -u root nova_api \
 nova-manage cell_v2 discover_hosts --by-service --verbose
@@ -34,7 +34,7 @@ the cell host discovery:
 .. note::
-Optionally the cell uuid cal be specificed to the `discover_hosts` and
+Optionally the cell uuid cal be specified to the `discover_hosts` and
 `list_hosts` command to only target against a specific cell.
 Delete a compute from a cell

@@ -147,7 +147,7 @@ been added to the network definitions in the network_data.yaml file format:
 The cell controllers use virtual IPs, therefore the existing VIPs from the
 central overcloud stack should not be referenced. In case cell controllers
-and cell computes get split into sepearate stacks, the cell compute stack
+and cell computes get split into separate stacks, the cell compute stack
 network_data file need an external_resource_vip_id reference to the cell
 controllers VIP resource.
@@ -382,7 +382,7 @@ Add the following content into a parameter file for the cell, e.g. `cell1/cell1.
 CellControllerCount: 3
 ComputeCount: 0
-# Compute names need to be uniq, make sure to have a uniq
+# Compute names need to be unique, make sure to have a unique
 # hostname format for cell nodes
 ComputeHostnameFormat: 'cell1-compute-%index%'

@@ -354,7 +354,7 @@ The command line interface supports the following options::
 addition to registry authentication via
 ContainerImageRegistryCredentials.
 --cephadm-default-container
-Use the default continer defined in cephadm instead of
+Use the default container defined in cephadm instead of
 container_image_prepare_defaults.yaml. If this is
 used, 'cephadm bootstrap' is not passed the --image
 parameter.
@@ -445,7 +445,7 @@ initial Ceph configuration file which can be passed with the --config
 option. These settings may also be modified after `openstack overcloud
 ceph deploy`.
-The deprecated Heat paramters `CephPoolDefaultSize` and
+The deprecated Heat parameters `CephPoolDefaultSize` and
 `CephPoolDefaultPgNum` no longer have any effect as these
 configurations are not made during overcloud deployment.
 However, during overcloud deployment pools are created and
@@ -956,7 +956,7 @@ Then the Ceph cluster will have the following parameters set::
 ms_bind_ipv6 = True
 Because the storage networks in network_data.yaml contain `ipv6:
-true`, the ipv6_subet values are extracted and the Ceph globals
+true`, the ipv6_subset values are extracted and the Ceph globals
 `ms_bind_ipv4` is set `false` and `ms_bind_ipv6` is set `true`.
 It is not supported to have the ``public_network`` use IPv4 and
 the ``cluster_network`` use IPv6 or vice versa.

@@ -102,7 +102,7 @@ Deploying the Overcloud
 Provision networks and ports if using Neutron
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-If using Neutron for resource managment, Network resources for the deployment
+If using Neutron for resource management, Network resources for the deployment
 still must be provisioned with the ``openstack overcloud network provision``
 command as documented in :ref:`custom_networks`.
@@ -559,7 +559,7 @@ The following is a sample environment file that shows setting these values
 Use ``DeployedServerPortMap`` to assign an ControlPlane Virtual IP address from
 any CIDR, and the ``RedisVirtualFixedIPs`` and ``OVNDBsVirtualFixedIPs``
-parameters to assing the ``RedisVip`` and ``OVNDBsVip``::
+parameters to assign the ``RedisVip`` and ``OVNDBsVip``::
 resource_registry:
 OS::TripleO::DeployedServer::ControlPlanePort: /usr/share/openstack-tripleo-heat-templates/deployed-server/deployed-neutron-port.yaml

@@ -972,7 +972,7 @@ the command from example_export_dcn_ (Victoria or prior releases).
 ``network_data_v2.yaml``, ``vip_data.yaml``, and ``baremetal_deployment.yaml``
 are the same files used with the ``control-plane`` stack. However, the files
 will need to be modified if the ``edge-0`` or ``edge-1`` stacks require
-addditional provisioning of network resources for new subnets. Update the files
+additional provisioning of network resources for new subnets. Update the files
 as needed and continue to use the same files for all ``overcloud deploy``
 commands across all stacks.

@@ -391,7 +391,7 @@ you need to disable image conversion you may override the
 parameter_defaults:
 GlanceImageImportPlugin: []
-The ``glance.yaml`` file sets the following to configue the local Glance backend::
+The ``glance.yaml`` file sets the following to configure the local Glance backend::
 parameter_defaults:
 GlanceShowMultipleLocations: true
@@ -456,7 +456,7 @@ config-download. If the ``--cephx-key-client-name`` is not passed,
 then the default cephx client key called `openstack` will be
 extracted.
-The genereated ``~/central_ceph_external.yaml`` should look something
+The generated ``~/central_ceph_external.yaml`` should look something
 like the following::
 parameter_defaults:
@@ -484,14 +484,14 @@ cluster.
 The ``openstack overcloud export ceph`` command will obtain all of the
 values from the config-download directory of the stack specified by
 `--stack` option. All values are extracted from the
-``cephadm/ceph_client.yml`` file. This file is genereated when
+``cephadm/ceph_client.yml`` file. This file is generated when
 config-download executes the export tasks from the tripleo-ansible
 role `tripleo_cephadm`. It should not be necessary to extract these
 values manually as the ``openstack overcloud export ceph`` command
-will genereate a valid YAML file with `CephExternalMultiConfig`
+will generate a valid YAML file with `CephExternalMultiConfig`
 populated for all stacks passed with the `--stack` option.
-The `ceph_conf_overrides` section of the file genereated by ``openstack
+The `ceph_conf_overrides` section of the file generated by ``openstack
 overcloud export ceph`` should look like the following::
 ceph_conf_overrides:
@@ -879,7 +879,7 @@ Use the `openstack overcloud export ceph` command to create
 In the above example a coma-delimited list of Heat stack names is
 provided to the ``--stack`` option. Pass as many stacks as necessary
 for all deployed DCN sites so that the configuration data to connect
-to every DCN Ceph cluster is extracted into the single genereated
+to every DCN Ceph cluster is extracted into the single generated
 ``dcn_ceph_external.yaml`` file.
 If you created a separate cephx key called external on each DCN ceph
@@ -1017,7 +1017,7 @@ described above.
 Thus, for a three stack deployment the following will be the case.
-- The initial deployment of the cental stack is configured with one
+- The initial deployment of the central stack is configured with one
 external Ceph cluster called central, which is the default store for
 Cinder, Glance, and Nova. We will refer to this as the central
 site's "primary external Ceph cluster".
@@ -1082,7 +1082,7 @@ Ensure you have Glance 3.0.0 or newer as provided by the
 Authenticate to the control-plane using the RC file generated
 by the stack from the first deployment which contains Keystone.
 In this example the stack was called "control-plane" so the file
-to source beofre running Glance commands will be called
+to source before running Glance commands will be called
 "control-planerc".
 Confirm the expected stores are available:
@@ -1324,7 +1324,7 @@ site, which is the default backend for Glance.
 After the above is run the output of `openstack image show
 $IMAGE_ID -f value -c properties` should contain a JSON data structure
-whose key called `stores` should looke like "dcn0,central" as
+whose key called `stores` should look like "dcn0,central" as
 the image will also exist in the "central" backend which stores its
 data on the central Ceph cluster. The same image at the Central site
 may then be copied to other DCN sites, booted in the vms or volumes

@@ -50,7 +50,7 @@ Architecture
 Deploying the Operational Tool Server
 -------------------------------------
-There is an ansible project called opstools-ansible (OpsTools_) on github that helps to install the Operator Server, further documentation of the operational tool server instalation can be founded at (OpsToolsDoc_).
+There is an ansible project called opstools-ansible (OpsTools_) on github that helps to install the Operator Server, further documentation of the operational tool server installation can be founded at (OpsToolsDoc_).
 .. _OpsTools: https://github.com/centos-opstools/opstools-ansible
 .. _OpsToolsDoc: https://github.com/centos-opstools/opstools-doc
@@ -135,7 +135,7 @@ Before deploying the Overcloud
 LoggingSharedKey: secret # The key
 LoggingSSLCertificate: | # The content of the SSL Certificate
 -----BEGIN CERTIFICATE-----
-...contens of server.pem here...
+...contents of server.pem here...
 -----END CERTIFICATE-----
 - Performance Monitoring::

@@ -91,7 +91,7 @@ Example::
 type: ovs_dpdk_port
 name: dpdk0
 mtu: 2000
-rx_queu: 2
+rx_queue: 2
 members:
 -
 type: interface

@@ -8,14 +8,14 @@ OpenvSwitch before network deployment (os-net-config), but after the
 hugepages are created (hugepages are created using kernel args). This
 requirement is also valid for some 3rd party SDN integration. This kind of
 configuration requires additional TripleO service definitions. This document
-explains how to acheive such deployments on and after `train` release.
+explains how to achieve such deployments on and after `train` release.
 .. note::
-In `queens` release, the resource `PreNetworkConfig` can be overriden to
+In `queens` release, the resource `PreNetworkConfig` can be overridden to
 achieve the required behavior, which has been deprecated from `train`
 onwards. The implementations based on `PreNetworkConfig` should be
-moved to other available aternates.
+moved to other available alternates.
 The TripleO service `OS::TripleO::BootParams` configures the parameter
 `KernelArgs` and reboots the node using the `tripleo-ansible` role

@@ -400,7 +400,7 @@ Example ``roles_data`` below. (The list of default services has been left out.)
 #############################################################################
 - name: Controller
 description: |
-Controller role that has all the controler services loaded and handles
+Controller role that has all the controller services loaded and handles
 Database, Messaging and Network functions.
 CountDefault: 1
 tags:
@@ -557,7 +557,7 @@ network configuration templates as needed.
 be used for both compute roles (``computeleaf0`` and
 ``computeleaf1``).
-Create a environement file (``network-environment-overrides.yaml``) with
+Create a environment file (``network-environment-overrides.yaml``) with
 ``resource_registry`` overrides to specify the network configuration templates
 to use. For example::

@@ -26,7 +26,7 @@ baremetal on which SR-IOV needs to be enabled.
 Also, SR-IOV requires mandatory kernel parameters to be set, like
 ``intel_iommu=on iommu=pt`` on Intel machines. In order to enable the
-configuration of kernel parametres to the host, host-config-pre-network
+configuration of kernel parameters to the host, host-config-pre-network
 environment file has to be added for the deploy command.
 Adding the following arguments to the ``openstack overcloud deploy`` command

@@ -323,7 +323,7 @@ For reference, the Novajoin based composable service is located at
 tripleo-ipa Composable Service
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-If you're deploying *TLS-everwhere* with tripleo-ipa prior to Victoria, you need to
+If you're deploying *TLS-everywhere* with tripleo-ipa prior to Victoria, you need to
 override the default Novajoin composable service. Add the following composable service to
 the ``resource_registry`` in ``tls-parameters.yaml``::

@@ -2,7 +2,7 @@ Tolerate deployment failures
 ============================
 When proceeding to large scale deployments, it happens very often to have
-infrastucture problems such as network outages, wrong configurations applied
+infrastructure problems such as network outages, wrong configurations applied
 on hardware, hard drive issues, etc.
 It is unpleasant to deploy hundred of nodes and only have a few of them which

@ -34,7 +34,7 @@ baremetal on which vDPA needs to be enabled.
Also, vDPA requires mandatory kernel parameters to be set, like Also, vDPA requires mandatory kernel parameters to be set, like
``intel_iommu=on iommu=pt`` on Intel machines. In order to enable the ``intel_iommu=on iommu=pt`` on Intel machines. In order to enable the
configuration of kernel parametres to the host, The ``KernelArgs`` role configuration of kernel parameters to the host, The ``KernelArgs`` role
parameter has to be defined accordingly. parameter has to be defined accordingly.
Adding the following arguments to the ``openstack overcloud deploy`` command Adding the following arguments to the ``openstack overcloud deploy`` command
@@ -146,7 +146,7 @@ Scheduling instances
 Normally, the ``PciPassthroughFilter`` is sufficient to ensure that a vDPA instance will
 land on a vDPA host. If we want to prevent other instances from using a vDPA host, we need
-to setup the `isolate-aggreate feature
+to setup the `isolate-aggregate feature
 <https://docs.openstack.org/nova/latest/reference/isolate-aggregates.html>`_.
 Example::
@@ -431,7 +431,7 @@ Validating the OVN agents::
 +--------------------------------------+----------------------+-------------------------+-------------------+-------+-------+----------------------------+
-Other usefull commands for troubleshooting::
+Other useful commands for troubleshooting::
 [root@computevdpa-0 ~]# ovs-appctl dpctl/dump-flows -m type=offloaded
 [root@computevdpa-0 ~]# ovs-appctl dpctl/dump-flows -m


@@ -11,7 +11,7 @@ by following the process outlined in on this page.
 If you're just scaling down nodes and plan to re-use them. Use :ref:`scale_roles`
 instead. For temporary issues with nodes, they can be blacklisted temporarily
 using ``DeploymentServerBlacklist``.
-This guide is specifcally for removing nodes from the environment.
+This guide is specifically for removing nodes from the environment.
 .. note::
 If your Compute node is still hosting VM's, ensure they are migrated to


@@ -17,7 +17,7 @@ where instance creation times must be minimized.
 Since Ussuri Nova also provides an API to pre-cache images on Compute nodes.
 See the `Nova Image pre-caching documentation <https://docs.openstack.org/nova/ussuri/admin/image-caching.html#image-pre-caching>`_.
-.. note:: The Nova Image Cache is not used when using Ceph RBD for Glange images and Nova ephemeral disk. See `Nova Image Caching documentation <https://docs.openstack.org/nova/ussuri/admin/image-caching.html>`_.
+.. note:: The Nova Image Cache is not used when using Ceph RBD for Glance images and Nova ephemeral disk. See `Nova Image Caching documentation <https://docs.openstack.org/nova/ussuri/admin/image-caching.html>`_.
 Image Cache Cleanup
 -------------------
@@ -167,7 +167,7 @@ Pre-caching on one node and distributing to remaining nodes
 In the case of a :doc:`../features/distributed_compute_node` it may be desirable to transfer an image to a single compute node at a remote site and then redistribute it from that node to the remaining compute nodes.
 The SSH/SCP configuration that exists between the compute nodes to support cold migration/resize is reused for this purpose.
-.. warning:: SSH/SCP is inefficient over high latency networks. The method should only be used when the compute nodes targetted by the playbook are all within the same site. To ensure this is the case set tripleo_nova_image_cache_plan to the stack name of the site. Multiple runs of ansible-playbook are then required, targetting a different site each time.
+.. warning:: SSH/SCP is inefficient over high latency networks. The method should only be used when the compute nodes targeted by the playbook are all within the same site. To ensure this is the case set tripleo_nova_image_cache_plan to the stack name of the site. Multiple runs of ansible-playbook are then required, targeting a different site each time.
 To enable this simply set `tripleo_nova_image_cache_use_proxy: true` in the arguments file.
 The image is distributed from the first compute node by default. To use a specific compute node also set `tripleo_nova_image_cache_proxy_hostname`.


@@ -81,7 +81,7 @@ What are these above vars:
 * `tempest_tempest_conf_overrides`: In order to pass additional tempest configuration to python-tempestconf tool, we can pass a dictionary of values.
 * `tempest_test_whitelist`: We need to pass a list of tests which we wish to run on the target host as a list.
 * `tempest_test_blacklist`: In order to skip tempest tests, we can pass the list here.
-* `gather_facts`: We need to set gather_facts to true as os_tempest rely on targetted environment facts for installing stuff.
+* `gather_facts`: We need to set gather_facts to true as os_tempest rely on targeted environment facts for installing stuff.
 Here are the `defaults vars of os_tempest role <https://docs.openstack.org/openstack-ansible-os_tempest/latest/user/default.html>`_.


@@ -678,7 +678,7 @@ Generate tempest.conf and run tempest tests within the container
 with a command which copies your `tempest.conf` from `container_tempest`
 directory to `tempest_workspace/etc` directory::
-$ cp /home/stack/container_tempest/tempets.conf /home/stack/tempest_workspace/etc/tempest.conf
+$ cp /home/stack/container_tempest/tempest.conf /home/stack/tempest_workspace/etc/tempest.conf
 * Set executable privileges to the `tempest_script.sh` script::


@@ -810,7 +810,7 @@ Following there is a list of all the changes needed:
 10. If using a modified version of the core Heat template collection from
 Newton, you need to re-apply your customizations to a copy of the Queens
-version. To do this, use a git version control system or similar toolings
+version. To do this, use a git version control system or similar tooling
 to compare.


@@ -58,7 +58,7 @@ run of a group or specific validations.
 ``--extra-vars-file``: This
 option allows to add a valid ``JSON`` or ``YAML``
-file contaning extra variables to a run of a group or specific validations.
+file containing extra variables to a run of a group or specific validations.
 .. code-block:: bash


@@ -8,7 +8,7 @@ Bare Metal service to provision baremetal before the overcloud is deployed.
 This adds a new provision step before the overcloud deploy, and the output of
 the provision is a valid :doc:`../features/deployed_server` configuration.
-In the Wallaby release the baremetal provisining was extended to also manage
+In the Wallaby release the baremetal provisioning was extended to also manage
 the neutron API resources for :doc:`../features/network_isolation` and
 :doc:`../features/custom_networks`, and apply network configuration on the
 provisioned nodes using os-net-config.


@@ -18,7 +18,7 @@ and only see the ``enroll`` state if their power management fails to validate::
 openstack overcloud import instackenv.json
 Nodes can optionally be introspected in this step by passing the --provide flag
-which will progress them through through the manageable_ state and eventually to
+which will progress them through the manageable_ state and eventually to
 the available_ state ready for deployment.
 manageable


@@ -7,7 +7,7 @@
 #. Download and install the python-tripleo-repos RPM from
 the appropriate RDO repository
-.. admonition:: CentOS Strem 8
+.. admonition:: CentOS Stream 8
 :class: centos8
 Current `Centos 8 RDO repository <https://trunk.rdoproject.org/centos8/component/tripleo/current/>`_.


@@ -140,7 +140,7 @@ Hack the promotion with testproject
 Finally testproject_ and the ability to run individual periodic jobs on
 demand is an important part of the ruck|rover toolbox. In some cases you may
 want to run a job for verification of a given launchpad bug that affects
-perioric jobs.
+periodic jobs.
 However another important use is when the ruck|rover notice that one of the
 jobs in criteria failed on something they (now) know how to fix, or on some
@@ -191,7 +191,7 @@ at time of writing we see::
 priority=1
 So the centos7 master tripleo-ci-testing *hash* is
-*544864ccc03b053317f5408b0c0349a42723ce73_ebb98bd9a*. The corrresponding repo
+*544864ccc03b053317f5408b0c0349a42723ce73_ebb98bd9a*. The corresponding repo
 is given by the baseurl above and if you navigate to that URL with your
 browser you can see the list of packages used in the jobs. Thus, the job
 specified in the example above for testproject


@@ -115,7 +115,7 @@ Troubleshooting a failed job
 When your newly added job fails, you may want to download its logs for a local
 inspection and root cause analysis. Use the
-`tripleo-ci gethelogs script
+`tripleo-ci getthelogs script
 <https://github.com/openstack-infra/tripleo-ci/blob/master/scripts/getthelogs>`_
 for that.
@@ -157,7 +157,7 @@ Some of the overridable settings are:
 - `tempest_format`: To run tempest using different format (packages, containers, venv).
 - `tempest_extra_config`: A dict of additional tempest config to be overridden.
 - `tempest_plugins`: A list of tempest plugins needs to be installed.
-- `standalone_environment_files`: List of environment files to be overriden
+- `standalone_environment_files`: List of environment files to be overridden
 by the featureset configuration on standalone deployment. The environment
 file should exist in tripleo-heat-templates repo.
 - `test_white_regex`: Regex to be used by tempest


@@ -10,7 +10,7 @@ briefly remind folks of their options.
 ---------------------------------------------------------------
 RDO's zuul is setup to directly inherit from upstream zuul. Any TripleO
-job that executes upstream should be rerunable in RDO's zuul. A distinct
+job that executes upstream should be re-runnable in RDO's zuul. A distinct
 advantage here is that you can ask RDO admins to hold the job for you,
 get your ssh keys on the box and debug the live environment. It's good
 stuff. To hold a node, ask your friends in #rhos-ops


@@ -92,13 +92,13 @@ In some cases bugs are fixed once a new version of some service is released
 service/project). In this case the rover is expected to know what that fix is
 and do everything they can to make it available in the jobs. This will range
 from posting gerrit reviews to bump some service version in requirements.txt
-through to simply harrassing the 'right folks' ;) in the relevant `TripleO Squad`_.
+through to simply harassing the 'right folks' ;) in the relevant `TripleO Squad`_.
 In other cases bugs may be deprioritized - for example if the job is non voting
 or is not in the promotion criteria then any related bugs are less likely to
 be getting the rover's attention. If you are interested in such jobs or bugs
 then you should go to #OFTC oooq channel and find the folks with "ruck" or
-"rover" in their nick and harrass them about it!
+"rover" in their nick and harass them about it!
 Of course for other cases there are bona fide bugs with the `TripleO CI code`_
 that the rover is expected to fix. To avoid being overwhelmed time management
@@ -177,7 +177,7 @@ Grafana is also useful for tracking promotions across branches.
 represent promotions and height shows the number of promotions on that day.
-Finally grafana tracks a list of all running jobs hilighting the failures in
+Finally grafana tracks a list of all running jobs highlighting the failures in
 red.
 .. image:: ./_images/grafana3.png


@@ -89,7 +89,7 @@ Here are a non-exhaustive list of things that are expected:
 which doesn't only mean +1/-1, but also comments the code that confirm
 that the patch is ready (or not) to be merged into the repository.
 This capacity to provide these kind of reviews is strongly evaluated when
-recruiting new core reviewers. It is prefered to provide quality reviews
+recruiting new core reviewers. It is preferred to provide quality reviews
 over quantity. A negative review needs productive feedback and harmful
 comments won't help to build credibility within the team.


@@ -142,7 +142,7 @@ Prepare version tags
 Based on the versions.csv data, an openstack/releases patch needs to be created
 to tag the release with the provided hashes. You can determine which TripleO
-projects are needed by finding the projects taged with "team: tripleo_".
+projects are needed by finding the projects tagged with "team: tripleo_".
 `An example review`_. Please be aware of changes between versions and create
 the appropriate version number as necessary (e.g. major, feature, or bugfix).


@@ -26,7 +26,7 @@ how to use this command as an operator.
 .. _building_containers_deploy_guide: https://docs.openstack.org/project-deploy-guide/tripleo-docs/latest/deployment/3rd_party.html
-One of the TipleO CI jobs that executes this command is the
+One of the TripleO CI jobs that executes this command is the
 tripleo-build-containers-centos-7_ job. This job invokes the overcloud container
 image build command in the build.sh.j2_ template::