Fix typos

This patch fixes as many typos in the docs as possible.

Change-Id: Ic406e3a4423cdd7c46c8a4497d43d63c95b2a9f5
Rajesh Tailor 2020-07-15 15:12:08 +05:30
parent 78374417bb
commit 6447c381b1
41 changed files with 65 additions and 65 deletions

View File

@@ -153,8 +153,8 @@ Here are some of them:
   the default path for <config-file> will be modified.
 * `--exclude` to skip some containers during the build.
 * `--registry` to specify a Container Registry where the images will be pushed.
-* `--authfile` to specify an authentification file if the Container Registry
-  requires authentification.
+* `--authfile` to specify an authentication file if the Container Registry
+  requires authentication.
 * `--skip-build` if we don't want to build and push images. It will only
   generate the configuration files.
 * `--push` to push the container images into the Container Registry.
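
For context, these flags combine on a single command line. A minimal sketch, assuming the ``openstack tripleo container image build`` subcommand these flags appear to belong to (the registry address and file names are placeholders)::

    openstack tripleo container image build \
      --config-file containers.yaml \
      --registry 192.168.24.1:8787 \
      --authfile ~/containers-auth.json \
      --push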
@@ -305,7 +305,7 @@ Either way, if you need to run a privileged container, make sure to set this par
     privileged: true
-If privilege mode isn't required, it is suggested to set it to false for security reaons.
+If privilege mode isn't required, it is suggested to set it to false for security reasons.
 Kernel modules will need to be loaded when the container will be started by Docker. To do so, it is
 suggested to configure the composable service which deploys the module in the container this way::

View File

@@ -392,7 +392,7 @@ deploy_steps_playbook.yaml
 __________________________
 ``deploy_steps_playbook.yaml`` is the playbook used for deployment and template
 update. It applies all the software configuration necessary to deploy a full
-overcluod based on the templates provided as input to the deployment command.
+overcloud based on the templates provided as input to the deployment command.
 This section will summarize at high level the different ansible plays used
 within this playbook. The play names shown here are the same names used within
@@ -410,7 +410,7 @@ Gather facts from overcloud
     tags: facts
 Load global variables
-    Loads all varaibles from ``global_vars.yaml``
+    Loads all variables from ``global_vars.yaml``
     tags: always
 Common roles for TripleO servers
@@ -441,13 +441,13 @@ External deployment step [1,2,3,4,5]
 Overcloud deploy step tasks for [1,2,3,4,5]
     Applies tasks from the ``deploy_steps_tasks`` template interface
-    tags: overcloud, deploy_setps
+    tags: overcloud, deploy_steps
 Overcloud common deploy step tasks [1,2,3,4,5]
     Applies the common tasks done at each step to include puppet host
     configuration, ``container-puppet.py``, and ``paunch`` or
     ``tripleo_container_manage`` Ansible role (container configuration).
-    tags: overcloud, deploy_setps
+    tags: overcloud, deploy_steps
 Server Post Deployments
     Applies server specific Heat deployments for configuration done after the 5
     step deployment process.
@@ -722,7 +722,7 @@ previewed is dependent on many factors such as the underlying tools in use
 (puppet, docker, etc) and the support for ansible check mode in the given
 ansible module.
-The ``--diff`` option can aloso be used with ``--check`` to show the
+The ``--diff`` option can also be used with ``--check`` to show the
 differences that would result from changes.
 See `Ansible Check Mode ("Dry Run")
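
As a sketch of how the plays and flags above fit together (the inventory and playbook paths are placeholders; config-download writes them to a working directory that varies by release)::

    # Preview changes without applying them, showing per-task diffs
    ansible-playbook -i inventory.yaml deploy_steps_playbook.yaml --check --diff
    # Run only the plays tagged deploy_steps
    ansible-playbook -i inventory.yaml deploy_steps_playbook.yaml --tags deploy_steps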

View File

@@ -301,7 +301,7 @@ include:
 - As part of a development workflow where local changes need to be deployed for
   testing and development
 - When changes need to be deployed but are not available through an image
-  build pipeline (proprietry addons, emergency fixes)
+  build pipeline (proprietary addons, emergency fixes)
 The modification is done by invoking an ansible role on each image which needs
 to be modified. The role takes a source image, makes the requested changes,

View File

@@ -32,7 +32,7 @@
 .. note::
    The undercloud is intended to work correctly with SELinux enforcing.
-   Installatoins with the permissive/disabled SELinux are not recommended.
+   Installations with the permissive/disabled SELinux are not recommended.
    The ``undercloud_enable_selinux`` config option controls that setting.
 .. note::
@@ -152,7 +152,7 @@
    actually deployed is completely changed and what is more, for the first
    time aligns with the overcloud deployment. See the command
    ``openstack tripleo deploy --standalone`` help for details.
-   That interface extention for standalone clouds is experimental for Rocky.
+   That interface extension for standalone clouds is experimental for Rocky.
    It is normally should not be used directly for undercloud installations.
 #. Run the command to install the undercloud:
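
The install command introduced by that final step is elided in this hunk; a sketch, tying in the SELinux option from the earlier note (the value shown is an assumption, matching the documented default)::

    # In undercloud.conf, keep SELinux enforcing (the default):
    #   undercloud_enable_selinux = true
    openstack undercloud install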

View File

@@ -281,7 +281,7 @@ Load the images into the containerized undercloud Glance::
 To upload a single image, see :doc:`upload_single_image`.
-If working with multiple architectures and/or plaforms with an architecure these
+If working with multiple architectures and/or platforms with an architecture these
 attributes can be specified at upload time as in::
     openstack overcloud image upload
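
The attribute flags are cut off above; for illustration, a multi-arch upload might look like the following (the ``--arch`` flag name is an assumption to verify against your tripleoclient release)::

    openstack overcloud image upload --image-path /home/stack/images \
      --arch ppc64le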

View File

@@ -43,7 +43,7 @@ Installing the Undercloud
 .. note::
    The undercloud is intended to work correctly with SELinux enforcing.
-   Installatoins with the permissive/disabled SELinux are not recommended.
+   Installations with the permissive/disabled SELinux are not recommended.
    The ``undercloud_enable_selinux`` config option controls that setting.
 .. note::
@@ -185,7 +185,7 @@ Installing the Undercloud
    actually deployed is completely changed and what is more, for the first
    time aligns with the overcloud deployment. See the command
    ``openstack tripleo deploy --standalone`` help for details.
-   That interface extention for standalone clouds is experimental for Rocky.
+   That interface extension for standalone clouds is experimental for Rocky.
    It is normally should not be used directly for undercloud installations.
 #. Run the command to install the undercloud:

View File

@@ -5,7 +5,7 @@ Standalone Containers based Deployment
 This currently is only supported in Rocky or newer versions.
 This documentation explains how the underlying framework used by the
-Containterized Undercloud deployment mechanism can be reused to deploy a single
+Containerized Undercloud deployment mechanism can be reused to deploy a single
 node capable of running OpenStack services for development.
@@ -219,7 +219,7 @@ Deploying a Standalone OpenStack node
    :class: ceph
    Create an additional environment file which directs ceph-ansible
-   to use the block device with logical volumes and fecth directory
+   to use the block device with logical volumes and fetch directory
    backup created earlier. In the same file pass additional Ceph
    parameters for the OSD scenario and Ceph networks. Set the
    placement group and replica count to values which fit the number
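
A hedged sketch of such an environment file; the parameter names and values below are assumptions drawn from typical tripleo-heat-templates usage, not from this diff::

    cat > $HOME/ceph_parameters.yaml <<'EOF'
    parameter_defaults:
      CephAnsibleDisksConfig:
        osd_scenario: lvm
        osd_objectstore: bluestore
        devices:
          - /dev/ceph_vg/ceph_lv_data
      CephPoolDefaultPgNum: 8
      CephPoolDefaultSize: 1
    EOF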

View File

@@ -86,7 +86,7 @@ system, which is often the preferred way to operate.
 Restarting nova_scheduler for example::
-    $ sudo systemctl restart triplo_nova_scheduler
+    $ sudo systemctl restart tripleo_nova_scheduler
 Stopping a container with systemd::
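
The same systemd pattern covers other lifecycle operations; container services follow the ``tripleo_<service>`` unit naming shown above, for example::

    $ sudo systemctl stop tripleo_nova_scheduler
    $ sudo systemctl status tripleo_nova_scheduler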

View File

@@ -296,7 +296,7 @@ using the ``enabled_drivers`` option. It is deprecated in the Queens release
 cycle and should no longer be used. See the `hardware types migration guide`_
 for information on how to migrate existing nodes.
-Both hardware types and classic drivers can be equially used in the
+Both hardware types and classic drivers can be equally used in the
 ``pm_addr`` field of the ``instackenv.json``.
 See https://docs.openstack.org/ironic/latest/admin/drivers.html for the most
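
For illustration, a minimal ``instackenv.json`` entry using the ``ipmi`` hardware type (written via a heredoc; addresses and credentials are placeholders)::

    cat > instackenv.json <<'EOF'
    {
      "nodes": [
        {
          "name": "node-0",
          "pm_type": "ipmi",
          "pm_addr": "10.0.0.8",
          "pm_user": "admin",
          "pm_password": "secret"
        }
      ]
    }
    EOF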

View File

@@ -1108,7 +1108,7 @@ switch with ironic/neutron controlling the vlan id for the switch::
   +---------------+   +-----------------+
 Switch config for xe-0/0/7 should be removed before deployment, and
-xe-0/0/1 shoud be a member of the vlan range 1200-1299::
+xe-0/0/1 should be a member of the vlan range 1200-1299::
   xe-0/0/1 {
       native-vlan-id XXX;

View File

@@ -226,8 +226,8 @@ using the following defaults:
 * CephClusterName: 'ceph'
 * CephClientUserName: 'openstack'
-* CephClientKey: This value is randomly genereated per Heat stack. If
-  it is overridden the recomendation is to set it to the output of
+* CephClientKey: This value is randomly generated per Heat stack. If
+  it is overridden the recommendation is to set it to the output of
   `ceph-authtool --gen-print-key`.
 If the above values are overridden, the keyring file will have a
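
For example, a key generated with the command mentioned above can be fed back in through an environment file (a sketch; the file name is arbitrary)::

    KEY=$(ceph-authtool --gen-print-key)
    cat > ceph_client_key.yaml <<EOF
    parameter_defaults:
      CephClientKey: $KEY
    EOF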

View File

@@ -45,7 +45,7 @@ You can then pass the environment file on deployment as follows::
 The same approach can be used for any role.
 .. warning::
-   While considerable flexibilty is available regarding service placement with
+   While considerable flexibility is available regarding service placement with
    these interfaces, the flexible placement of pacemaker managed services is only
    available since the Ocata release.

View File

@@ -2,7 +2,7 @@ Deploy an additional nova cell v2
 =================================
 .. warning::
-   Multi cell cupport is only supported in Stein and later versions.
+   Multi cell support is only supported in Stein and later versions.
 The different sections in this guide assume that you are ready to deploy a new
 overcloud, or already have installed an overcloud (min Stein release).

View File

@@ -524,7 +524,7 @@ like this:
       external_resource_segment_id: 730769f8-e78f-42a3-9dd4-367a212e49ff
 Previously we already added the `external_resource_network_id` and `external_resource_subnet_id`
-for the network in the upper level hirarchy.
+for the network in the upper level hierarchy.
 In addition we add the `external_resource_vip_id` of the VIP of the stack which
 should be reused for this network (Storage).

View File

@@ -128,7 +128,7 @@ network to the instance.
 .. note::
    Cloud-init by default configures only first network interface to use DHCP
-   which means that user intances will not have network interface for storage
+   which means that user instances will not have network interface for storage
    network autoconfigured. You can configure it manually or use
    `dhcp-all-interfaces <https://docs.openstack.org/diskimage-builder/elements/dhcp-all-interfaces/README.html>`_.
@@ -170,7 +170,7 @@ Deploying the Overcloud with an External Backend
 - Fill in or override the values of parameters for your back end.
-- Since you have copied the file out of its original locaation,
+- Since you have copied the file out of its original location,
   replace relative paths in the resource_registry with absolute paths
   based on ``/usr/share/openstack-tripleo-heat-templates``.

View File

@@ -324,7 +324,7 @@ from any CIDR::
     OS::TripleO::Network::Ports::ControlPlaneVipPort: /usr/share/openstack-tripleo-heat-templates/deployed-server/deployed-neutron-port.yaml
     # Set VIP's for redis and OVN to noop to default to the ctlplane VIP
-    # The cltplane VIP is set with control_virtual_ip in
+    # The ctlplane VIP is set with control_virtual_ip in
     # DeployedServerPortMap below.
     #
     # Alternatively, these can be mapped to deployed-neutron-port.yaml as

View File

@@ -16,7 +16,7 @@ management between the control plane stack and the stacks for additional compute
 nodes. The stacks can be managed, scaled, and updated separately.
 Using separate stacks also creates smaller failure domains as there are less
-baremetal nodes in each invidiual stack. A failure in one baremetal node only
+baremetal nodes in each individual stack. A failure in one baremetal node only
 requires that management operations to address that failure need only affect
 the single stack that contains the failed node.
@@ -339,7 +339,7 @@ Extract the needed data from the control plane stack:
    in the overcloud deployment. Some passwords in the file may be removed if
    they are not needed by DCN. For example, the passwords for RabbitMQ, MySQL,
    Keystone, Nova and Neutron should be sufficient to launch an instance. When
-   the export comman is run, the Ceph passwords are excluded so that DCN
+   the export command is run, the Ceph passwords are excluded so that DCN
    deployments which include Ceph do not reuse the same Ceph password and
    instead new ones are generated per DCN deployment.
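
The export command referred to here is ``openstack overcloud export``; a sketch of its use against the control plane stack (flag spellings should be verified against your release)::

    openstack overcloud export --stack central \
      --output-file central-export.yaml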
@@ -1114,7 +1114,7 @@ were to run the following::
     tripleo-ansible-inventory --static-yaml-inventory inventory.yaml --stack central,edge0
-then you could use the genereated inventory.yaml as follows::
+then you could use the generated inventory.yaml as follows::
     (undercloud) [stack@undercloud ~]$ ansible -i inventory.yaml -m ping central
     central-controller-0 | SUCCESS => {

View File

@@ -11,7 +11,7 @@ Features
 This Distributed Multibackend Storage design extends the architecture
 described in :doc:`distributed_compute_node` to support the following
-worklow.
+workflow.
 - Upload an image to the Central site using `glance image-create`
   command with `--file` and `--store default_backend` parameters.

View File

@@ -281,7 +281,7 @@ following template as a configuration::
   :doc:`ssl` document.
 * $SUFFIX will be the domain for your users. Given a domain, the suffix DN can
-  be created withwith the following snippet::
+  be created with the following snippet::
     suffix=`echo $DOMAIN | sed -e 's/^/dc=/' -e 's/\./,dc=/g'`
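
Worked example: for ``DOMAIN=example.org`` the snippet yields ``dc=example,dc=org``::

    $ DOMAIN=example.org
    $ suffix=`echo $DOMAIN | sed -e 's/^/dc=/' -e 's/\./,dc=/g'`
    $ echo $suffix
    dc=example,dc=org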

View File

@@ -107,7 +107,7 @@ that looks as follows::
 The ``IpsecVars`` option is able to change any parameter in the tripleo-ipsec
 ansible role.
-.. note:: For more information on the algorithms that Libreswan suppports,
+.. note:: For more information on the algorithms that Libreswan supports,
           please check the `Libreswan documentation`_
 .. note:: For more information on the available parameters, check the README

View File

@@ -93,8 +93,8 @@ may not always point to the same device on reboot. Thus, `by_path` is
 recommended and is the default if `-k` is not specified.
 Ironic will have one of the available disks on the system reserved as
-the root disk. The utility will always exlude the root disk from the
-list of devices genereated.
+the root disk. The utility will always exclude the root disk from the
+list of devices generated.
 Use `./make_ceph_disk_list.py --help` to see other available options.

View File

@@ -108,7 +108,7 @@ Before deploying the Overcloud
 3. Configure the environment
-The easiest way to configure our environment will be to create a parameter file, let's called paramters.yaml with all the paramteres defined.
+The easiest way to configure our environment will be to create a parameter file, let's called parameters.yaml with all the parameters defined.
 - Availability Monitoring::

View File

@@ -113,7 +113,7 @@ Audit
 -----
 Having a system capable of recording all audit events is key for troubleshooting
-and peforming analysis of events that led to a certain outcome. The audit system
+and performing analysis of events that led to a certain outcome. The audit system
 is capable of logging many events such as someone changing the system time,
 changes to Mandatory / Discretionary Access Control, creating / destroying users
 or groups.
@@ -251,7 +251,7 @@ example structure::
 .. note::
    Operators should select their own required AIDE values, as the example list
-   above is not activley maintained or benchmarked. It only seeks to provide
+   above is not actively maintained or benchmarked. It only seeks to provide
    an document the YAML structure required.
 If above environment file were saved as `aide.yaml` it could then be passed to
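
The sentence above is cut off in this hunk; the usual pattern for passing such a file (a sketch, not taken from this diff) is::

    openstack overcloud deploy --templates \
      -e aide.yaml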

View File

@@ -88,7 +88,7 @@ node, the user is able to log in as the stack user, and
 have the Backup restored in the folder
 `/var/tmp/test_bk_down`, follow the next steps.
-Syncronize the stack home directory, haproxy configuration,
+Synchronize the stack home directory, haproxy configuration,
 certificates and hieradata with the backup content:
 ::
@@ -171,7 +171,7 @@ the DB password to be able to reinstall the Undercloud:
     oldpassword=$(sudo cat /var/tmp/test_bk_down/root/.my.cnf | grep -m1 password | cut -d'=' -f2 | tr -d "'")
     mysqladmin -u root -p$oldpassword password ''
-Remove old user permisology if it exists, replace <node> with the host related to each user.
+Remove old user permissions if it exists, replace <node> with the host related to each user.
 ::

View File

@@ -96,7 +96,7 @@ verify we can restore it from the Hypervisor.
 3. Prepare the hypervisor.
-We will run in the Hypervison some pre backup steps in
+We will run in the Hypervisor some pre backup steps in
 order to have the correct configuration to mount the
 backup bucket from the Undercloud node::

View File

@@ -4,7 +4,7 @@ Rotation Keystone Fernet Keys from the Overcloud
 ================================================
 Like most passwords in your overcloud deployment, keystone fernet keys are also
-stored as part of the deployment plan in mistral. The overcloud deplotment's
+stored as part of the deployment plan in mistral. The overcloud deployment's
 fernet keys can be rotated with the following command::
     openstack workflow execution create \
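
The invocation is truncated in this view. For illustration only, the rotation workflow in contemporary TripleO docs is named ``tripleo.fernet_keys.v1.rotate_fernet_keys`` (an assumption worth verifying against your release)::

    openstack workflow execution create \
      tripleo.fernet_keys.v1.rotate_fernet_keys \
      '{"container": "overcloud"}'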

View File

@@ -76,10 +76,10 @@ What are these above vars:
 * `tempest_public_subnet_cidr`: Based on the standalone deployment IP, we need to pass a required cidr.
 * `tempest_public_subnet_gateway_ip and tempest_public_subnet_allocation_pools`:
   Subnet Gateway IP and allocation pool can be calculated based on the value of `tempest_public_subnet_cidr` nthhost value.
-* `tempest_use_tempestconf`: For generating tempest.conf, we use python-tempestconf tool. By default It is setted to false. Set to `true` for using it
-* `tempest_run_stackviz`: Stackviz is very useful in CI for analizing tempest results, for local use, we set it to false. By default it is setted to true.
+* `tempest_use_tempestconf`: For generating tempest.conf, we use python-tempestconf tool. By default It is set to false. Set it to `true` for using it
+* `tempest_run_stackviz`: Stackviz is very useful in CI for analyzing tempest results, for local use, we set it to false. By default it is set to true.
 * `tempest_tempest_conf_overrides`: In order to pass additional tempest configuration to python-tempestconf tool, we can pass a dictionary of values.
-* `tempest_test_whitelist`: We need to pass a list of tests which we wish to run on the targest host as a list.
+* `tempest_test_whitelist`: We need to pass a list of tests which we wish to run on the target host as a list.
 * `tempest_test_blacklist`: In order to skip tempest tests, we can pass the list here.
 * `gather_facts`: We need to set gather_facts to true as os_tempest rely on targetted environment facts for installing stuff.
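
A hedged sketch of how these vars might look in a vars file for the os_tempest role (all values are illustrative, not from this diff)::

    cat > tempest_vars.yaml <<'EOF'
    tempest_public_subnet_cidr: 192.168.24.0/24
    tempest_use_tempestconf: true
    tempest_run_stackviz: false
    tempest_test_whitelist:
      - tempest.scenario.test_network_basic_ops
    EOF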

View File

@@ -117,7 +117,7 @@ configuration, the following is the default::
 * Running tempest against overcloud::
-    $ cd <path to triplo-quickstart repo>
+    $ cd <path to tripleo-quickstart repo>
     $ bash quickstart.sh \
       --bootstrap \

View File

@@ -1,4 +1,4 @@
-.. _update_network_configuration_post_deploymenet:
+.. _update_network_configuration_post_deployment:
 Updating network configuration on the Overcloud after a deployment
 ==================================================================

View File

@@ -382,7 +382,7 @@ At a minimum an operator should check the health of the pacemaker cluster
    :class: stable
    The ``--limit`` was introduced in the Stein release. In previous versions,
-   use ``--nodes`` or ``--roles`` paremeters.
+   use ``--nodes`` or ``--roles`` parameters.
 For control plane nodes, you are expected to upgrade all nodes within a role at
 the same time: pass a role name to ``--limit``. For non-control-plane nodes,
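
A sketch of the invocations this note compares (role names are placeholders)::

    # Stein and later
    openstack overcloud upgrade run --limit Controller
    # Earlier releases
    openstack overcloud upgrade run --roles Controller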

View File

@@ -65,7 +65,7 @@ The upgrade workflow essentially consists of the following steps:
 Finally run a heat stack update, unsetting any upgrade specific variables
 and leaving the heat stack in a healthy state for future updates.
-Detailed infromation and pointers can be found in the relevant the
+Detailed information and pointers can be found in the relevant the
 queens-upgrade-dev-docs_.
 .. _queens-upgrade-dev-docs: https://docs.openstack.org/tripleo-docs/latest/install/developer/upgrades/major_upgrade.html # WIP @ https://review.opendev.org/#/c/569443/
@@ -266,7 +266,7 @@ don't want to start them all.
    :class: stable
    The --limit was introduced in the Stein release. In previous versions, use
-   --nodes or --roles parmeters.
+   --nodes or --roles parameters.
 openstack overcloud external-upgrade run (for services)
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -612,7 +612,7 @@ major-upgrade-composable-steps that come first, as described above.
 are 'fully' upgraded after each step completes, rather than having to wait
 for the final converge step as has previously been the case. In the case of
 Ocata to Pike the full puppet/docker config is applied to bring up the
-overclod services in containers.
+overcloud services in containers.
 The tripleo_upgrade_node.sh_ script and puppet configuration are delivered to
 the nodes with ``disable_upgrade_deployment`` set ``True`` during the initial

View File

@@ -202,7 +202,7 @@ Updating your Overcloud - Pike
 For the Pike cycle the minor update workflow is significantly different to
 previous cycles. In particular, rather than using a static yum_update.sh_
 we now use service specific ansible update_tasks_ (similar to the upgrade_tasks
-used for the major upgrade worklow since Ocata). Furthermore, these are not
+used for the major upgrade workflow since Ocata). Furthermore, these are not
 executed directly via a Heat stack update, but rather, together with the
 docker/puppet config, collected and written to ansible playbooks. The operator
 then invokes these to deliver the minor update to particular nodes.
@@ -233,7 +233,7 @@ parameter::
    :class: stable
    The `--limit` was introduced in the Stein release. In previous versions,
-   use `--nodes` or `--roles` parmeters.
+   use `--nodes` or `--roles` parameters.
 You can specify a role name, e.g. 'Compute', to execute the minor update on
 all nodes of that role in a rolling fashion (serial:1 is used on the playbooks).
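
A sketch of that rolling minor update (the role name is a placeholder)::

    # Update all Compute nodes, one at a time (serial: 1)
    openstack overcloud update run --limit Compute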

View File

@@ -53,7 +53,7 @@ run of a group or specific validations.
 ``--extra-vars-file``: This
 option allows to add a valid ``JSON`` or ``YAML``
-file containg extra variables to a run of a group or specific validations.
+file containing extra variables to a run of a group or specific validations.
 .. code-block:: bash

View File

@@ -16,7 +16,7 @@ The validations are assigned into various groups that indicate when in
 the deployment workflow they are expected to run:
 * **no-op** validations will run a no-op operation to verify that
-  the workflow is working as it supossed to, it will run in both
+  the workflow is working as it supposed to, it will run in both
   the Undercloud and Overcloud nodes.
 * **openshift-on-openstack** validations will check that the

View File

@@ -13,9 +13,9 @@ logs to the host that the command is executed from.
 Example: Download logs from all controllers
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-The required `server_name` option for the command can be a paritial name
+The required `server_name` option for the command can be a partial name
 match for the overcloud nodes. This means `openstack overcloud support report
-colect controller` will match all the overcloud nodes that contain the word
+collect controller` will match all the overcloud nodes that contain the word
 `controller`. To download the run the command and download them to a local
 directory, run the following command::
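
Based on the partial-match behaviour described above, the collection command takes this shape (download options are elided here; see ``--help``)::

    openstack overcloud support report collect controller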

View File

@@ -98,7 +98,7 @@ power management, and it gets stuck in an abnormal state.
 .. warning::
    Before proceeding with this section, always try to decommission a node
    normally, by scaling down your cloud. Forcing node deletion may cause
-   unpredicable results.
+   unpredictable results.
 Ironic requires that nodes that cannot be operated normally are put in the
 maintenance mode. It is done by the following command::
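
The command itself is elided in this hunk; putting a node into maintenance is normally done with the baremetal CLI (a sketch; the UUID is a placeholder)::

    openstack baremetal node maintenance set <node-uuid> \
      --reason "forcing deletion of a stuck node"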

View File

@@ -14,7 +14,7 @@ can be included in the TripleO promotion criteria.
 The goal is to give developers feedback on real deployments and allow us to
 have better coverage on issues seen in production environments. It also
-allows an aproximation of OVB jobs running in RDO cloud in order to get an
+allows an approximation of OVB jobs running in RDO cloud in order to get an
 "apples-to-apples" comparison to eliminate infra issues.
 .. _baremetal_deploy_guide: https://docs.openstack.org/project-deploy-guide/tripleo-docs/latest/provisioning/index.html
@@ -114,7 +114,7 @@ Now adding the dlrn reporting ::
       secrets:
         - dlrnapi
-Example of a specfic hardware job in Zuul:
+Example of a specific hardware job in Zuul:
 Note that multiple jobs cannot be run on the hardware concurrently.
 The base job is modified to include semaphore

View File

@@ -121,7 +121,7 @@ https://github.com/rdo-infra/ci-config/blob/master/ci-scripts/dlrnapi_promoter/R
 Pushing RDO containers to ``docker.io``
 ```````````````````````````````````````
-The DLRN Promotor script calls the `container push playbook
+The DLRN Promoter script calls the `container push playbook
 <https://github.com/rdo-infra/ci-config/blob/master/ci-scripts/container-push/container-push.yml>`_
 to push the RDO containers at each stage to `docker.io
 <https://hub.docker.com/r/tripleopike/centos-binary-heat-api/tags/>`_.

View File

@@ -97,12 +97,12 @@ Click on the job link and it will open the build result page. Then click on
 `log_url`, click on the `job-output.txt`. It contains the results of
 ansible playbook runs.
 Look for *ERROR* or failed messages.
-If looks something ovious.
+If looks something obvious.
 Please go ahead and create the bug on launchpad_ against tripleo project with
 all the information.
 Once the bug is created, please add `depcheck` tag on the filed launchpad bug.
-This tag is explicitly used for listing bugs related to TripleO CI job failue
+This tag is explicitly used for listing bugs related to TripleO CI job failure
 against ceph-ansible and podman projects.
 .. _launchpad: https://bugs.launchpad.net/tripleo/+filebug

View File

@@ -41,7 +41,7 @@ structure (``manifests/profile/base/time/ntp.pp``) as:
   $step = hiera('step'),
 ) {
   #step assigned for core modules.
-  #(Check for further referencies about the configuration steps)
+  #(Check for further references about the configuration steps)
   #https://opendev.org/openstack/tripleo-heat-templates/src/branch/master/puppet/services/README.rst
   if ($step >= 2){
     #We will call the NTP puppet class and assign our configuration values.

View File

@@ -72,7 +72,7 @@ group Transfer the data to the freshly re-installed controller\n(only required i
 note over "undercloud", "controller-2" #AAFFAA
   Everything is composable, but diagram showcases MariaDB specifically
 end note
-User -> "undercloud" : openstack overcloud external-upgarde run
+User -> "undercloud" : openstack overcloud external-upgrade run
 "undercloud" -> "controller-1" : pcs resource disable\ngalera-bundle
 note right: Disable MariaDB
 "controller-1" -> "controller-2"