diff --git a/_custom/custom.css b/_custom/custom.css index b65c916d..c84f039a 100644 --- a/_custom/custom.css +++ b/_custom/custom.css @@ -27,7 +27,7 @@ color: rgba(0, 0, 0, 0.9) !important; } -/* NOTES, ADMONITTIONS AND TAGS */ +/* NOTES, ADMONITIONS AND TAGS */ .admonition { font-size: 85%; /* match code size */ background: rgb(240, 240, 240); diff --git a/deploy-guide/source/deployment/3rd_party.rst b/deploy-guide/source/deployment/3rd_party.rst index a404e38a..e7925ba9 100644 --- a/deploy-guide/source/deployment/3rd_party.rst +++ b/deploy-guide/source/deployment/3rd_party.rst @@ -216,7 +216,7 @@ Here are some of them: Tips and Tricks with tripleo_container_image_build .................................................. -Here's a non-exaustive list of tips and tricks that might make things faster, +Here's a non-exhaustive list of tips and tricks that might make things faster, especially on a dev env where you need to build multiple times the containers. Inject a caching proxy diff --git a/deploy-guide/source/deployment/ansible_config_download.rst b/deploy-guide/source/deployment/ansible_config_download.rst index 46799a34..44d563b0 100644 --- a/deploy-guide/source/deployment/ansible_config_download.rst +++ b/deploy-guide/source/deployment/ansible_config_download.rst @@ -145,7 +145,7 @@ with ``--override-ansible-cfg`` on the deployment command. For example the following command will use the configuration options from ``/home/stack/ansible.cfg``. Any options specified in the override file will -take precendence over the defaults:: +take precedence over the defaults:: openstack overcloud deploy \ ... 
diff --git a/deploy-guide/source/deployment/network_v2.rst b/deploy-guide/source/deployment/network_v2.rst index 28dae30d..0d47e9db 100644 --- a/deploy-guide/source/deployment/network_v2.rst +++ b/deploy-guide/source/deployment/network_v2.rst @@ -87,7 +87,7 @@ that it can run all the steps: The environment files with the parameters and resource registry overrides required is automatically included when the ``overcloud deploy`` command is - run with the arguments: ``--vip-file``, ``--baremetal-deployement`` and + run with the arguments: ``--vip-file``, ``--baremetal-deployment`` and ``--network-config``. #. Run Config-Download and the deploy-steps playbook @@ -369,7 +369,7 @@ Migrating existing deployments To facilitate the migration for deployed overclouds tripleoclient commands to extract information from deployed overcloud stacks has been added. During the -upgrade to Wallaby these tools will be executed as part of the underlcoud +upgrade to Wallaby these tools will be executed as part of the undercloud upgrade, placing the generated YAML definition files in the working directory (Defaults to: ``~/overcloud-deploy/$STACK_NAME/``). Below each export command is described, and examples to use them manually with the intent for developers @@ -379,7 +379,7 @@ during the undercloud upgrade. There is also a tool ``convert_heat_nic_config_to_ansible_j2.py`` that can be used to convert heat template NIC config to Ansible j2 templates. -.. warning:: If migrating to use Networking v2 while using the non-Epemeral +.. warning:: If migrating to use Networking v2 while using the non-Ephemeral heat i.e ``--heat-type installed``, the existing overcloud stack must **first** be updated to set the ``deletion_policy`` for ``OS::Nova`` and ``OS::Neutron`` resources. This can be done @@ -474,7 +474,7 @@ generate a Baremetal Provision definition YAML file from a deployed overcloud stack. 
The YAML definition file can then be used with ``openstack overcloud node provision`` command and/or the ``openstack overcloud deploy`` command. -**Example**: Export deployed overcloud nodes to Baremtal Deployment YAML +**Example**: Export deployed overcloud nodes to Baremetal Deployment YAML definition .. code-block:: bash diff --git a/deploy-guide/source/deployment/overcloud.rst b/deploy-guide/source/deployment/overcloud.rst index d4e828e1..9113e276 100644 --- a/deploy-guide/source/deployment/overcloud.rst +++ b/deploy-guide/source/deployment/overcloud.rst @@ -2,7 +2,7 @@ Containers based Overcloud Deployment ====================================== This documentation explains how to deploy a fully containerized overcloud -utilizing Podman which is the default since the Stein releasee. +utilizing Podman which is the default since the Stein release. The requirements for a containerized overcloud are the same as for any other overcloud deployment. The real difference is in where the overcloud services diff --git a/deploy-guide/source/deployment/standalone.rst b/deploy-guide/source/deployment/standalone.rst index c1b5d681..9f18ba9f 100644 --- a/deploy-guide/source/deployment/standalone.rst +++ b/deploy-guide/source/deployment/standalone.rst @@ -127,7 +127,7 @@ Deploying a Standalone OpenStack node be bound to by haproxy while the other by backend services on the same. .. Note: The following example utilizes 2 interfaces. NIC1 which will serve as - the management inteface. It can have any address and will be left untouched. + the management interface. It can have any address and will be left untouched. NIC2 will serve as the OpenStack & Provider network NIC. The following exports should be configured for your network and interface. @@ -179,7 +179,7 @@ Deploying a Standalone OpenStack node floating IP. .. Note: NIC1 will serve as the management, OpenStack and Provider network - inteface. The exports should be configured for your network and interface. + interface. 
The exports should be configured for your network and interface. .. code-block:: bash diff --git a/deploy-guide/source/environments/baremetal.rst b/deploy-guide/source/environments/baremetal.rst index 43164447..a1d0b762 100644 --- a/deploy-guide/source/environments/baremetal.rst +++ b/deploy-guide/source/environments/baremetal.rst @@ -53,7 +53,7 @@ Preparing the Baremetal Environment Networking ^^^^^^^^^^ -The overcloud nodes will be deployed from the undercloud machine and therefore the machines need to have their network settings modified to allow for the overcloud nodes to be PXE boot'ed using the undercloud machine. As such, the setup requires that: +The overcloud nodes will be deployed from the undercloud machine and therefore the machines need to have their network settings modified to allow for the overcloud nodes to be PXE booted using the undercloud machine. As such, the setup requires that: * All overcloud machines in the setup must support IPMI * A management provisioning network is setup for all of the overcloud machines. 
diff --git a/deploy-guide/source/features/ceph_external.rst b/deploy-guide/source/features/ceph_external.rst index d369e1fc..c74f253d 100644 --- a/deploy-guide/source/features/ceph_external.rst +++ b/deploy-guide/source/features/ceph_external.rst @@ -217,7 +217,7 @@ overcloud to connect to an external ceph cluster: Deploying Manila with an External CephFS Service ------------------------------------------------ -If chosing to configure Manila with Ganesha as NFS gateway for CephFS, +If choosing to configure Manila with Ganesha as NFS gateway for CephFS, with an external Ceph cluster, then add `environments/manila-cephfsganesha-config.yaml` to the list of environment files used to deploy the overcloud and also configure the following parameters:: diff --git a/deploy-guide/source/features/compute_nvdimm.rst b/deploy-guide/source/features/compute_nvdimm.rst index 182fedf7..8c277817 100644 --- a/deploy-guide/source/features/compute_nvdimm.rst +++ b/deploy-guide/source/features/compute_nvdimm.rst @@ -72,12 +72,12 @@ Examples NovaPMEMNamespaces: "6G:ns1,6G:ns1,6G:ns2,100G:ns3" -The following example will peform following steps: +The following example will perform the following steps: * ensure **ndctl** tool is installed on hosts with role **ComputePMEM** * create PMEM namespaces as specified in the **NovaPMEMNamespaces** parameter. - ns0, ns1, ns2 with size 6GiB - ns3 with size 100GiB -* set Nova prameter **pmem_namespaces** in nova.conf to map create namespaces to vPMEM as specified in **NovaPMEMMappings**. +* set Nova parameter **pmem_namespaces** in nova.conf to map created namespaces to vPMEM as specified in **NovaPMEMMappings**. In this example the label '6GB' will map to one of ns0, ns1 or ns2 namespace and the label 'LARGE' will map to ns3 namespace.
After deployment you need to configure flavors as described in documentation `Nova: Configure a flavor `_ diff --git a/deploy-guide/source/features/custom_networks.rst b/deploy-guide/source/features/custom_networks.rst index 7bc09892..27220832 100644 --- a/deploy-guide/source/features/custom_networks.rst +++ b/deploy-guide/source/features/custom_networks.rst @@ -384,7 +384,7 @@ Network data YAML options See `Options for network data YAML subnet definitions`_ for a list of all documented sub-options for the subnet definitions. - type: *dictonary* + type: *dictionary* Options for network data YAML subnet definitions @@ -419,7 +419,7 @@ Options for network data YAML subnet definitions type: *list* - elements: *dictonary* + elements: *dictionary* :suboptions: @@ -447,7 +447,7 @@ Options for network data YAML subnet definitions type: *list* - elements: *dictonary* + elements: *dictionary* :suboptions: @@ -475,7 +475,7 @@ Options for network data YAML subnet definitions type: *list* - elements: *dictonary* + elements: *dictionary* :suboptions: @@ -504,7 +504,7 @@ Options for network data YAML subnet definitions type: *list* - elements: *dictonary* + elements: *dictionary* :suboptions: diff --git a/deploy-guide/source/features/custom_roles.rst b/deploy-guide/source/features/custom_roles.rst index 92290c53..ee406436 100644 --- a/deploy-guide/source/features/custom_roles.rst +++ b/deploy-guide/source/features/custom_roles.rst @@ -23,7 +23,7 @@ These roles can be listed using the `tripleoclient` by running:: With these provided roles, the user deploying the overcloud can generate a `roles_data.yaml` file that contains the roles they would like to use for the overcloud nodes. Additionally the user can manage their personal custom roles -in a similar manor by storing the individual files in a directory and using +in a similar manner by storing the individual files in a directory and using the `tripleoclient` to generate their `roles_data.yaml`. 
For example, a user can execute the following to create a `roles_data.yaml` containing only the `Controller` and `Compute` roles:: diff --git a/deploy-guide/source/features/deploy_cellv2_basic.rst b/deploy-guide/source/features/deploy_cellv2_basic.rst index 2b20a11e..a8d61ae2 100644 --- a/deploy-guide/source/features/deploy_cellv2_basic.rst +++ b/deploy-guide/source/features/deploy_cellv2_basic.rst @@ -283,7 +283,7 @@ by this ansible way. .. code-block:: bash - export CONTAINERCLI=podman #choose appropriate contaier cli here + export CONTAINERCLI=podman #choose appropriate container cli here source stackrc mkdir inventories for i in overcloud cell1; do \ diff --git a/deploy-guide/source/features/deploy_cellv2_manage_cell.rst b/deploy-guide/source/features/deploy_cellv2_manage_cell.rst index 77cd826a..3dcd9b85 100644 --- a/deploy-guide/source/features/deploy_cellv2_manage_cell.rst +++ b/deploy-guide/source/features/deploy_cellv2_manage_cell.rst @@ -20,7 +20,7 @@ the cell host discovery: # CONTAINERCLI can be either docker or podman export CONTAINERCLI='docker' - # run cell host dicovery + # run cell host discovery ssh heat-admin@${CTRL_IP} sudo ${CONTAINERCLI} exec -i -u root nova_api \ nova-manage cell_v2 discover_hosts --by-service --verbose @@ -34,7 +34,7 @@ the cell host discovery: .. note:: - Optionally the cell uuid cal be specificed to the `discover_hosts` and + Optionally the cell uuid can be specified to the `discover_hosts` and `list_hosts` command to only target against a specific cell.
Delete a compute from a cell diff --git a/deploy-guide/source/features/deploy_cellv2_routed.rst b/deploy-guide/source/features/deploy_cellv2_routed.rst index 499553b9..aa9dca9c 100644 --- a/deploy-guide/source/features/deploy_cellv2_routed.rst +++ b/deploy-guide/source/features/deploy_cellv2_routed.rst @@ -147,7 +147,7 @@ been added to the network definitions in the network_data.yaml file format: The cell controllers use virtual IPs, therefore the existing VIPs from the central overcloud stack should not be referenced. In case cell controllers - and cell computes get split into sepearate stacks, the cell compute stack + and cell computes get split into separate stacks, the cell compute stack network_data file need an external_resource_vip_id reference to the cell controllers VIP resource. @@ -382,7 +382,7 @@ Add the following content into a parameter file for the cell, e.g. `cell1/cell1. CellControllerCount: 3 ComputeCount: 0 - # Compute names need to be uniq, make sure to have a uniq + # Compute names need to be unique, make sure to have a unique # hostname format for cell nodes ComputeHostnameFormat: 'cell1-compute-%index%' diff --git a/deploy-guide/source/features/deployed_ceph.rst b/deploy-guide/source/features/deployed_ceph.rst index 56b00525..b14c409a 100644 --- a/deploy-guide/source/features/deployed_ceph.rst +++ b/deploy-guide/source/features/deployed_ceph.rst @@ -354,7 +354,7 @@ The command line interface supports the following options:: addition to registry authentication via ContainerImageRegistryCredentials. --cephadm-default-container - Use the default continer defined in cephadm instead of + Use the default container defined in cephadm instead of container_image_prepare_defaults.yaml. If this is used, 'cephadm bootstrap' is not passed the --image parameter. @@ -445,7 +445,7 @@ initial Ceph configuration file which can be passed with the --config option. These settings may also be modified after `openstack overcloud ceph deploy`. 
-The deprecated Heat paramters `CephPoolDefaultSize` and +The deprecated Heat parameters `CephPoolDefaultSize` and `CephPoolDefaultPgNum` no longer have any effect as these configurations are not made during overcloud deployment. However, during overcloud deployment pools are created and @@ -956,7 +956,7 @@ Then the Ceph cluster will have the following parameters set:: ms_bind_ipv6 = True Because the storage networks in network_data.yaml contain `ipv6: -true`, the ipv6_subet values are extracted and the Ceph globals +true`, the ipv6_subnet values are extracted and the Ceph globals `ms_bind_ipv4` is set `false` and `ms_bind_ipv6` is set `true`. It is not supported to have the ``public_network`` use IPv4 and the ``cluster_network`` use IPv6 or vice versa. diff --git a/deploy-guide/source/features/deployed_server.rst b/deploy-guide/source/features/deployed_server.rst index b8a2f6d0..79a0bdf1 100644 --- a/deploy-guide/source/features/deployed_server.rst +++ b/deploy-guide/source/features/deployed_server.rst @@ -102,7 +102,7 @@ Deploying the Overcloud Provision networks and ports if using Neutron ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ -If using Neutron for resource managment, Network resources for the deployment +If using Neutron for resource management, network resources for the deployment still must be provisioned with the ``openstack overcloud network provision`` command as documented in :ref:`custom_networks`.
@@ -559,7 +559,7 @@ The following is a sample environment file that shows setting these values Use ``DeployedServerPortMap`` to assign an ControlPlane Virtual IP address from any CIDR, and the ``RedisVirtualFixedIPs`` and ``OVNDBsVirtualFixedIPs`` - parameters to assing the ``RedisVip`` and ``OVNDBsVip``:: + parameters to assign the ``RedisVip`` and ``OVNDBsVip``:: resource_registry: OS::TripleO::DeployedServer::ControlPlanePort: /usr/share/openstack-tripleo-heat-templates/deployed-server/deployed-neutron-port.yaml diff --git a/deploy-guide/source/features/distributed_compute_node.rst b/deploy-guide/source/features/distributed_compute_node.rst index 21c24f78..6ce31001 100644 --- a/deploy-guide/source/features/distributed_compute_node.rst +++ b/deploy-guide/source/features/distributed_compute_node.rst @@ -972,7 +972,7 @@ the command from example_export_dcn_ (Victoria or prior releases). ``network_data_v2.yaml``, ``vip_data.yaml``, and ``baremetal_deployment.yaml`` are the same files used with the ``control-plane`` stack. However, the files will need to be modified if the ``edge-0`` or ``edge-1`` stacks require -addditional provisioning of network resources for new subnets. Update the files +additional provisioning of network resources for new subnets. Update the files as needed and continue to use the same files for all ``overcloud deploy`` commands across all stacks. 
diff --git a/deploy-guide/source/features/distributed_multibackend_storage.rst b/deploy-guide/source/features/distributed_multibackend_storage.rst index ff7f452a..ac76498b 100644 --- a/deploy-guide/source/features/distributed_multibackend_storage.rst +++ b/deploy-guide/source/features/distributed_multibackend_storage.rst @@ -391,7 +391,7 @@ you need to disable image conversion you may override the parameter_defaults: GlanceImageImportPlugin: [] -The ``glance.yaml`` file sets the following to configue the local Glance backend:: +The ``glance.yaml`` file sets the following to configure the local Glance backend:: parameter_defaults: GlanceShowMultipleLocations: true @@ -456,7 +456,7 @@ config-download. If the ``--cephx-key-client-name`` is not passed, then the default cephx client key called `openstack` will be extracted. -The genereated ``~/central_ceph_external.yaml`` should look something +The generated ``~/central_ceph_external.yaml`` should look something like the following:: parameter_defaults: @@ -484,14 +484,14 @@ cluster. The ``openstack overcloud export ceph`` command will obtain all of the values from the config-download directory of the stack specified by `--stack` option. All values are extracted from the -``cephadm/ceph_client.yml`` file. This file is genereated when +``cephadm/ceph_client.yml`` file. This file is generated when config-download executes the export tasks from the tripleo-ansible role `tripleo_cephadm`. It should not be necessary to extract these values manually as the ``openstack overcloud export ceph`` command -will genereate a valid YAML file with `CephExternalMultiConfig` +will generate a valid YAML file with `CephExternalMultiConfig` populated for all stacks passed with the `--stack` option. 
-The `ceph_conf_overrides` section of the file genereated by ``openstack +The `ceph_conf_overrides` section of the file generated by ``openstack overcloud export ceph`` should look like the following:: ceph_conf_overrides: @@ -879,7 +879,7 @@ Use the `openstack overcloud export ceph` command to create In the above example a coma-delimited list of Heat stack names is provided to the ``--stack`` option. Pass as many stacks as necessary for all deployed DCN sites so that the configuration data to connect -to every DCN Ceph cluster is extracted into the single genereated +to every DCN Ceph cluster is extracted into the single generated ``dcn_ceph_external.yaml`` file. If you created a separate cephx key called external on each DCN ceph @@ -1017,7 +1017,7 @@ described above. Thus, for a three stack deployment the following will be the case. -- The initial deployment of the cental stack is configured with one +- The initial deployment of the central stack is configured with one external Ceph cluster called central, which is the default store for Cinder, Glance, and Nova. We will refer to this as the central site's "primary external Ceph cluster". @@ -1082,7 +1082,7 @@ Ensure you have Glance 3.0.0 or newer as provided by the Authenticate to the control-plane using the RC file generated by the stack from the first deployment which contains Keystone. In this example the stack was called "control-plane" so the file -to source beofre running Glance commands will be called +to source before running Glance commands will be called "control-planerc". Confirm the expected stores are available: @@ -1324,7 +1324,7 @@ site, which is the default backend for Glance. 
After the above is run the output of `openstack image show $IMAGE_ID -f value -c properties` should contain a JSON data structure -whose key called `stores` should looke like "dcn0,central" as +whose key called `stores` should look like "dcn0,central" as the image will also exist in the "central" backend which stores its data on the central Ceph cluster. The same image at the Central site may then be copied to other DCN sites, booted in the vms or volumes diff --git a/deploy-guide/source/features/ops_tools.rst b/deploy-guide/source/features/ops_tools.rst index dd37a82b..366477d4 100644 --- a/deploy-guide/source/features/ops_tools.rst +++ b/deploy-guide/source/features/ops_tools.rst @@ -50,7 +50,7 @@ Architecture Deploying the Operational Tool Server ------------------------------------- -There is an ansible project called opstools-ansible (OpsTools_) on github that helps to install the Operator Server, further documentation of the operational tool server instalation can be founded at (OpsToolsDoc_). +There is an ansible project called opstools-ansible (OpsTools_) on github that helps to install the Operator Server, further documentation of the operational tool server installation can be found at (OpsToolsDoc_). .. _OpsTools: https://github.com/centos-opstools/opstools-ansible .. _OpsToolsDoc: https://github.com/centos-opstools/opstools-doc @@ -135,7 +135,7 @@ Before deploying the Overcloud LoggingSharedKey: secret # The key LoggingSSLCertificate: | # The content of the SSL Certificate -----BEGIN CERTIFICATE----- - ...contens of server.pem here... + ...contents of server.pem here...
-----END CERTIFICATE----- - Performance Monitoring:: diff --git a/deploy-guide/source/features/ovs_dpdk_config.rst b/deploy-guide/source/features/ovs_dpdk_config.rst index 5404703e..851adb0c 100644 --- a/deploy-guide/source/features/ovs_dpdk_config.rst +++ b/deploy-guide/source/features/ovs_dpdk_config.rst @@ -91,7 +91,7 @@ Example:: type: ovs_dpdk_port name: dpdk0 mtu: 2000 - rx_queu: 2 + rx_queue: 2 members: - type: interface diff --git a/deploy-guide/source/features/pre_network_config.rst b/deploy-guide/source/features/pre_network_config.rst index ac9a17c9..f21e14a0 100644 --- a/deploy-guide/source/features/pre_network_config.rst +++ b/deploy-guide/source/features/pre_network_config.rst @@ -8,14 +8,14 @@ OpenvSwitch before network deployment (os-net-config), but after the hugepages are created (hugepages are created using kernel args). This requirement is also valid for some 3rd party SDN integration. This kind of configuration requires additional TripleO service definitions. This document -explains how to acheive such deployments on and after `train` release. +explains how to achieve such deployments on and after `train` release. .. note:: - In `queens` release, the resource `PreNetworkConfig` can be overriden to + In `queens` release, the resource `PreNetworkConfig` can be overridden to achieve the required behavior, which has been deprecated from `train` onwards. The implementations based on `PreNetworkConfig` should be - moved to other available aternates. + moved to other available alternates. 
The TripleO service `OS::TripleO::BootParams` configures the parameter `KernelArgs` and reboots the node using the `tripleo-ansible` role diff --git a/deploy-guide/source/features/routed_spine_leaf_network.rst b/deploy-guide/source/features/routed_spine_leaf_network.rst index 112db9c7..a7294440 100644 --- a/deploy-guide/source/features/routed_spine_leaf_network.rst +++ b/deploy-guide/source/features/routed_spine_leaf_network.rst @@ -400,7 +400,7 @@ Example ``roles_data`` below. (The list of default services has been left out.) ############################################################################# - name: Controller description: | - Controller role that has all the controler services loaded and handles + Controller role that has all the controller services loaded and handles Database, Messaging and Network functions. CountDefault: 1 tags: @@ -557,7 +557,7 @@ network configuration templates as needed. be used for both compute roles (``computeleaf0`` and ``computeleaf1``). -Create a environement file (``network-environment-overrides.yaml``) with +Create an environment file (``network-environment-overrides.yaml``) with ``resource_registry`` overrides to specify the network configuration templates to use. For example:: diff --git a/deploy-guide/source/features/sriov_deployment.rst b/deploy-guide/source/features/sriov_deployment.rst index ff19d0ee..a06226bb 100644 --- a/deploy-guide/source/features/sriov_deployment.rst +++ b/deploy-guide/source/features/sriov_deployment.rst @@ -26,7 +26,7 @@ baremetal on which SR-IOV needs to be enabled. Also, SR-IOV requires mandatory kernel parameters to be set, like ``intel_iommu=on iommu=pt`` on Intel machines. In order to enable the -configuration of kernel parametres to the host, host-config-pre-network +configuration of kernel parameters to the host, host-config-pre-network environment file has to be added for the deploy command.
Adding the following arguments to the ``openstack overcloud deploy`` command diff --git a/deploy-guide/source/features/tls-everywhere.rst b/deploy-guide/source/features/tls-everywhere.rst index f8b190c6..94c558a6 100644 --- a/deploy-guide/source/features/tls-everywhere.rst +++ b/deploy-guide/source/features/tls-everywhere.rst @@ -323,7 +323,7 @@ For reference, the Novajoin based composable service is located at tripleo-ipa Composable Service ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -If you're deploying *TLS-everwhere* with tripleo-ipa prior to Victoria, you need to +If you're deploying *TLS-everywhere* with tripleo-ipa prior to Victoria, you need to override the default Novajoin composable service. Add the following composable service to the ``resource_registry`` in ``tls-parameters.yaml``:: diff --git a/deploy-guide/source/features/tolerated_failure.rst b/deploy-guide/source/features/tolerated_failure.rst index f6641672..7c7f3d29 100644 --- a/deploy-guide/source/features/tolerated_failure.rst +++ b/deploy-guide/source/features/tolerated_failure.rst @@ -2,7 +2,7 @@ Tolerate deployment failures ============================ When proceeding to large scale deployments, it happens very often to have -infrastucture problems such as network outages, wrong configurations applied +infrastructure problems such as network outages, wrong configurations applied on hardware, hard drive issues, etc. It is unpleasant to deploy hundred of nodes and only have a few of them which diff --git a/deploy-guide/source/features/vdpa_deployment.rst b/deploy-guide/source/features/vdpa_deployment.rst index 01306618..8d9868fa 100644 --- a/deploy-guide/source/features/vdpa_deployment.rst +++ b/deploy-guide/source/features/vdpa_deployment.rst @@ -34,7 +34,7 @@ baremetal on which vDPA needs to be enabled. Also, vDPA requires mandatory kernel parameters to be set, like ``intel_iommu=on iommu=pt`` on Intel machines. 
In order to enable the -configuration of kernel parametres to the host, The ``KernelArgs`` role +configuration of kernel parameters to the host, the ``KernelArgs`` role parameter has to be defined accordingly. Adding the following arguments to the ``openstack overcloud deploy`` command @@ -146,7 +146,7 @@ Scheduling instances Normally, the ``PciPassthroughFilter`` is sufficient to ensure that a vDPA instance will land on a vDPA host. If we want to prevent other instances from using a vDPA host, we need -to setup the `isolate-aggreate feature +to set up the `isolate-aggregate feature `_. Example:: @@ -431,7 +431,7 @@ Validating the OVN agents:: +--------------------------------------+----------------------+-------------------------+-------------------+-------+-------+----------------------------+ -Other usefull commands for troubleshooting:: +Other useful commands for troubleshooting:: [root@computevdpa-0 ~]# ovs-appctl dpctl/dump-flows -m type=offloaded [root@computevdpa-0 ~]# ovs-appctl dpctl/dump-flows -m
note:: If your Compute node is still hosting VM's, ensure they are migrated to diff --git a/deploy-guide/source/post_deployment/pre_cache_images.rst b/deploy-guide/source/post_deployment/pre_cache_images.rst index 17aee949..0f88ec05 100644 --- a/deploy-guide/source/post_deployment/pre_cache_images.rst +++ b/deploy-guide/source/post_deployment/pre_cache_images.rst @@ -17,7 +17,7 @@ where instance creation times must be minimized. Since Ussuri Nova also provides an API to pre-cache images on Compute nodes. See the `Nova Image pre-caching documentation `_. -.. note:: The Nova Image Cache is not used when using Ceph RBD for Glange images and Nova ephemeral disk. See `Nova Image Caching documentation `_. +.. note:: The Nova Image Cache is not used when using Ceph RBD for Glance images and Nova ephemeral disk. See `Nova Image Caching documentation `_. Image Cache Cleanup ------------------- @@ -131,7 +131,7 @@ Pre-caching on one node and distributing to remaining nodes In the case of a :doc:`../features/distributed_compute_node` it may be desirable to transfer an image to a single compute node at a remote site and then redistribute it from that node to the remaining compute nodes. The SSH/SCP configuration that exists between the compute nodes to support cold migration/resize is reused for this purpose. -.. warning:: SSH/SCP is inefficient over high latency networks. The method should only be used when the compute nodes targetted by the playbook are all within the same site. To ensure this is the case set tripleo_nova_image_cache_plan to the stack name of the site. Multiple runs of ansible-playbook are then required, targetting a different site each time. +.. warning:: SSH/SCP is inefficient over high latency networks. The method should only be used when the compute nodes targeted by the playbook are all within the same site. To ensure this is the case set tripleo_nova_image_cache_plan to the stack name of the site. 
Multiple runs of ansible-playbook are then required, targeting a different site each time. To enable this simply set `tripleo_nova_image_cache_use_proxy: true` in the arguments file. The image is distributed from the first compute node by default. To use a specific compute node also set `tripleo_nova_image_cache_proxy_hostname`. diff --git a/deploy-guide/source/post_deployment/tempest/os_tempest.rst b/deploy-guide/source/post_deployment/tempest/os_tempest.rst index 3f758459..a742457c 100644 --- a/deploy-guide/source/post_deployment/tempest/os_tempest.rst +++ b/deploy-guide/source/post_deployment/tempest/os_tempest.rst @@ -81,7 +81,7 @@ What are these above vars: * `tempest_tempest_conf_overrides`: In order to pass additional tempest configuration to python-tempestconf tool, we can pass a dictionary of values. * `tempest_test_whitelist`: We need to pass a list of tests which we wish to run on the target host as a list. * `tempest_test_blacklist`: In order to skip tempest tests, we can pass the list here. -* `gather_facts`: We need to set gather_facts to true as os_tempest rely on targetted environment facts for installing stuff. +* `gather_facts`: We need to set gather_facts to true as os_tempest relies on targeted environment facts for installing stuff. Here are the `defaults vars of os_tempest role `_.
diff --git a/deploy-guide/source/post_deployment/tempest/tempest.rst b/deploy-guide/source/post_deployment/tempest/tempest.rst
index fb5259ab..b36b717b 100644
--- a/deploy-guide/source/post_deployment/tempest/tempest.rst
+++ b/deploy-guide/source/post_deployment/tempest/tempest.rst
@@ -678,7 +678,7 @@ Generate tempest.conf and run tempest tests within the container
 with a command which copies your `tempest.conf` from `container_tempest`
 directory to `tempest_workspace/etc` directory::
 
-    $ cp /home/stack/container_tempest/tempets.conf /home/stack/tempest_workspace/etc/tempest.conf
+    $ cp /home/stack/container_tempest/tempest.conf /home/stack/tempest_workspace/etc/tempest.conf
 
 * Set executable privileges to the `tempest_script.sh` script::
diff --git a/deploy-guide/source/post_deployment/upgrade/fast_forward_upgrade.rst b/deploy-guide/source/post_deployment/upgrade/fast_forward_upgrade.rst
index 0e17ebfe..5cc0376f 100644
--- a/deploy-guide/source/post_deployment/upgrade/fast_forward_upgrade.rst
+++ b/deploy-guide/source/post_deployment/upgrade/fast_forward_upgrade.rst
@@ -810,7 +810,7 @@ Following there is a list of all the changes needed:
 
 10. If using a modified version of the core Heat template collection from
     Newton, you need to re-apply your customizations to a copy of the Queens
-    version. To do this, use a git version control system or similar toolings
+    version. To do this, use a git version control system or similar tooling
     to compare.
diff --git a/deploy-guide/source/post_deployment/validations/cli.rst b/deploy-guide/source/post_deployment/validations/cli.rst
index 1ead4766..6dcf4c4d 100644
--- a/deploy-guide/source/post_deployment/validations/cli.rst
+++ b/deploy-guide/source/post_deployment/validations/cli.rst
@@ -58,7 +58,7 @@ run of a group or specific validations.
 
 ``--extra-vars-file``: This option allows to add a valid ``JSON`` or ``YAML``
-file contaning extra variables to a run of a group or specific validations.
+file containing extra variables to a run of a group or specific validations.
 
 .. code-block:: bash
diff --git a/deploy-guide/source/provisioning/baremetal_provision.rst b/deploy-guide/source/provisioning/baremetal_provision.rst
index 10ac090a..3e97865c 100644
--- a/deploy-guide/source/provisioning/baremetal_provision.rst
+++ b/deploy-guide/source/provisioning/baremetal_provision.rst
@@ -8,7 +8,7 @@ Bare Metal service to provision baremetal before the overcloud is deployed.
 This adds a new provision step before the overcloud deploy, and the output of
 the provision is a valid :doc:`../features/deployed_server` configuration.
 
-In the Wallaby release the baremetal provisining was extended to also manage
+In the Wallaby release the baremetal provisioning was extended to also manage
 the neutron API resources for :doc:`../features/network_isolation` and
 :doc:`../features/custom_networks`, and apply network configuration on the
 provisioned nodes using os-net-config.
diff --git a/deploy-guide/source/provisioning/node_states.rst b/deploy-guide/source/provisioning/node_states.rst
index ace30176..e3f89970 100644
--- a/deploy-guide/source/provisioning/node_states.rst
+++ b/deploy-guide/source/provisioning/node_states.rst
@@ -18,7 +18,7 @@ and only see the ``enroll`` state if their power management fails to validate::
 
     openstack overcloud import instackenv.json
 
 Nodes can optionally be introspected in this step by passing the --provide flag
-which will progress them through through the manageable_ state and eventually to
+which will progress them through the manageable_ state and eventually to
 the available_ state ready for deployment.
 
 manageable
diff --git a/deploy-guide/source/repositories.rst b/deploy-guide/source/repositories.rst
index bb193828..5bd8e015 100644
--- a/deploy-guide/source/repositories.rst
+++ b/deploy-guide/source/repositories.rst
@@ -7,7 +7,7 @@
 #. Download and install the python-tripleo-repos RPM from the appropriate
    RDO repository
 
-   .. admonition:: CentOS Strem 8
+   .. admonition:: CentOS Stream 8
      :class: centos8
 
      Current `Centos 8 RDO repository `_.
diff --git a/doc/source/ci/chasing_promotions.rst b/doc/source/ci/chasing_promotions.rst
index 1b85f248..1b685337 100644
--- a/doc/source/ci/chasing_promotions.rst
+++ b/doc/source/ci/chasing_promotions.rst
@@ -140,7 +140,7 @@ Hack the promotion with testproject
 Finally testproject_ and the ability to run individual periodic jobs on
 demand is an important part of the ruck|rover toolbox. In some cases you
 may want to run a job for verification of a given launchpad bug that affects
-perioric jobs.
+periodic jobs.
 
 However another important use is when the ruck|rover notice that one of the
 jobs in criteria failed on something they (now) know how to fix, or on some
@@ -191,7 +191,7 @@ at time of writing we see::
     priority=1
 
 So the centos7 master tripleo-ci-testing *hash* is
-*544864ccc03b053317f5408b0c0349a42723ce73_ebb98bd9a*. The corrresponding repo
+*544864ccc03b053317f5408b0c0349a42723ce73_ebb98bd9a*. The corresponding repo
 is given by the baseurl above and if you navigate to that URL with your
 browser you can see the list of packages used in the jobs. Thus, the job
 specified in the example above for testproject
diff --git a/doc/source/ci/check_gates.rst b/doc/source/ci/check_gates.rst
index 270c6693..24098e6c 100644
--- a/doc/source/ci/check_gates.rst
+++ b/doc/source/ci/check_gates.rst
@@ -115,7 +115,7 @@ Troubleshooting a failed job
 
 When your newly added job fails, you may want to download its logs for a
 local inspection and root cause analysis. Use the
-`tripleo-ci gethelogs script
+`tripleo-ci getthelogs script
 `_
 for that.
@@ -157,7 +157,7 @@ Some of the overridable settings are:
   - `tempest_format`: To run tempest using different format (packages, containers, venv).
   - `tempest_extra_config`: A dict of additional tempest config to be overridden.
   - `tempest_plugins`: A list of tempest plugins needs to be installed.
-  - `standalone_environment_files`: List of environment files to be overriden
+  - `standalone_environment_files`: List of environment files to be overridden
     by the featureset configuration on standalone deployment. The environment
     file should exist in tripleo-heat-templates repo.
   - `test_white_regex`: Regex to be used by tempest
diff --git a/doc/source/ci/reproduce-ci.rst b/doc/source/ci/reproduce-ci.rst
index 2e001140..89c0a3f2 100644
--- a/doc/source/ci/reproduce-ci.rst
+++ b/doc/source/ci/reproduce-ci.rst
@@ -10,7 +10,7 @@ briefly remind folks of their options.
 ---------------------------------------------------------------
 
 RDO's zuul is setup to directly inherit from upstream zuul. Any TripleO
-job that executes upstream should be rerunable in RDO's zuul. A distinct
+job that executes upstream should be re-runnable in RDO's zuul. A distinct
 advantage here is that you can ask RDO admins to hold the job for you, get
 your ssh keys on the box and debug the live environment. It's good stuff.
 To hold a node, ask your friends in #rhos-ops
diff --git a/doc/source/ci/ruck_rover_primer.rst b/doc/source/ci/ruck_rover_primer.rst
index 240126e0..c12e1c20 100644
--- a/doc/source/ci/ruck_rover_primer.rst
+++ b/doc/source/ci/ruck_rover_primer.rst
@@ -91,13 +91,13 @@ In some cases bugs are fixed once a new version of some service is released
 service/project). In this case the rover is expected to know what that fix is
 and do everything they can to make it available in the jobs. This will range
 from posting gerrit reviews to bump some service version in requirements.txt
-through to simply harrassing the 'right folks' ;) in the relevant `TripleO Squad`_.
+through to simply harassing the 'right folks' ;) in the relevant `TripleO Squad`_.
 
 In other cases bugs may be deprioritized - for example if the job is non voting
 or is not in the promotion criteria then any related bugs are less likely to be
 getting the rover's attention. If you are interested in such jobs or bugs then
 you should go to #OFTC oooq channel and find the folks with "ruck" or
-"rover" in their nick and harrass them about it!
+"rover" in their nick and harass them about it!
 
 Of course for other cases there are bona fide bugs with the `TripleO CI code`_
 that the rover is expected to fix. To avoid being overwhelmed time management
@@ -176,7 +176,7 @@ Grafana is also useful for tracking promotions across branches.
 represent promotions and height shows the number of promotions on that day.
 
-Finally grafana tracks a list of all running jobs hilighting the failures in
+Finally grafana tracks a list of all running jobs highlighting the failures in
 red.
 
 .. image:: ./_images/grafana3.png
diff --git a/doc/source/contributor/core.rst b/doc/source/contributor/core.rst
index 2990680d..5b02edc0 100644
--- a/doc/source/contributor/core.rst
+++ b/doc/source/contributor/core.rst
@@ -89,7 +89,7 @@ Here are a non-exhaustive list of things that are expected:
   which doesn't only mean +1/-1, but also comments the code that confirm
   that the patch is ready (or not) to be merged into the repository. This
   capacity to provide these kind of reviews is strongly evaluated when
-  recruiting new core reviewers. It is prefered to provide quality reviews
+  recruiting new core reviewers. It is preferred to provide quality reviews
   over quantity. A negative review needs productive feedback and harmful
   comments won't help to build credibility within the team.
diff --git a/doc/source/developer/release.rst b/doc/source/developer/release.rst
index d262f280..29c64d96 100644
--- a/doc/source/developer/release.rst
+++ b/doc/source/developer/release.rst
@@ -142,7 +142,7 @@ Prepare version tags
 
 Based on the versions.csv data, an openstack/releases patch needs to be
 created to tag the release with the provided hashes. You can determine which TripleO
-projects are needed by finding the projects taged with "team: tripleo_".
+projects are needed by finding the projects tagged with "team: tripleo_".
 `An example review`_. Please be aware of changes between versions and create
 the appropriate version number as necessary (e.g. major, feature, or bugfix).
diff --git a/doc/source/developer/tripleoclient_primer.rst b/doc/source/developer/tripleoclient_primer.rst
index 256b269f..e930df88 100644
--- a/doc/source/developer/tripleoclient_primer.rst
+++ b/doc/source/developer/tripleoclient_primer.rst
@@ -26,7 +26,7 @@ how to use this command as an operator.
 
 .. _building_containers_deploy_guide: https://docs.openstack.org/project-deploy-guide/tripleo-docs/latest/deployment/3rd_party.html
 
-One of the TipleO CI jobs that executes this command is the
+One of the TripleO CI jobs that executes this command is the
 tripleo-build-containers-centos-7_ job. This job invokes the overcloud
 container image build command in the build.sh.j2_ template::