Grammar Cleanup - Maintenance Tasks

Cleaned up grammar and formatting

Change-Id: I43af52ea33e695fd631517c0c3f3f51e7bf0e00f
Amy Marrich (spotz) 2020-04-08 08:55:55 -05:00
parent 4d0510bcfd
commit 560fc2d447
6 changed files with 60 additions and 63 deletions


@ -2,7 +2,7 @@ Running ad-hoc Ansible plays
============================
Being familiar with running ad-hoc Ansible commands is helpful when
operating your OpenStack-Ansible deployment. To review, we can look at the
structure of the following ansible command:
.. code-block:: console
@ -10,23 +10,24 @@ structure of the following ansible command:
$ ansible example_group -m shell -a 'hostname'
This command calls on Ansible to run the ``example_group`` using
the ``-m`` shell module with the ``-a`` argument, which is the ``hostname``
command. You can substitute ``example_group`` for any other group you have
defined. For example, if you had ``compute_hosts`` in one group and
``infra_hosts`` in another, supply either group name and run the command.
You can also use the ``*`` wildcard if you only know the first part of the
group name; for instance, if you know the group name starts with compute,
you would use ``compute_h*``. The ``-m`` argument specifies the module.
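For instance, a wildcard invocation might look like this (a sketch; it
assumes your inventory contains a group whose name starts with
``compute_h``, such as ``compute_hosts``):
.. code-block:: console
# Matches compute_hosts and any other group beginning with compute_h
$ ansible 'compute_h*' -m shell -a 'hostname'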
Modules can be used to control system resources or handle the execution of
system commands. For more information about modules, see
`Module Index <https://docs.ansible.com/ansible/modules_by_category.html>`_ and
`About Modules <https://docs.ansible.com/ansible/modules.html>`_.
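For instance, a quick way to exercise a module other than ``shell`` is the
``ping`` module, which simply verifies that Ansible can reach and operate
each host in the group:
.. code-block:: console
$ ansible example_group -m ping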
If you need to run a particular command against a subset of a group, you
could use the limit flag ``-l``. For example, if a ``compute_hosts`` group
contained ``compute1``, ``compute2``, ``compute3``, and ``compute4``, and you
only needed to execute a command on ``compute1`` and ``compute4``, you could
limit the command as follows:
.. code-block:: console
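# A plausible form of the limited command (the original example was
# elided in this hunk); it reuses the shell/hostname invocation from above
$ ansible compute_hosts -l compute1,compute4 -m shell -a 'hostname'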
@ -80,10 +81,10 @@ For more information, see `shell - Execute commands in nodes
Running the copy module
-----------------------
The copy module copies a file on a local machine to remote locations. To copy
files from remote locations to the local machine, use the fetch module. If you
need variable interpolation in copied files, use the template module. For more
information, see `copy - Copies files to remote locations
<https://docs.ansible.com/ansible/copy_module.html>`_.
The following example shows how to move a file from your deployment host to the
@ -94,10 +95,9 @@ The following example shows how to move a file from your deployment host to the
$ ansible remote_machines -m copy -a 'src=/root/FILE \
dest=/tmp/FILE'
The fetch module gathers files from remote machines and stores the files
locally in a file tree, organized by hostname.
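For example, gathering the same file back from the remote machines might
look like this (a sketch; the paths are illustrative):
.. code-block:: console
# Each host's copy lands under dest/<hostname>/<src path>
$ ansible remote_machines -m fetch -a 'src=/tmp/FILE \
dest=/tmp/fetched'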
.. note::
@ -128,7 +128,7 @@ from a single Compute host:
Using tags
----------
Tags are similar to the limit flag for groups, except tags are used to only run
specific tasks within a playbook. For more information on tags, see
`Tags <http://ansible-docs.readthedocs.io/zh/stable-2.0/rst/playbooks_tags.html>`_
and `Understanding ansible tags
@ -142,10 +142,10 @@ fork makes use of a session. By default, Ansible sets the number of forks to
5. However, you can increase the number of forks used in order to improve
deployment performance in large environments.
Note that more than 10 forks will cause issues for any playbooks which use
``delegate_to`` or ``local_action`` in the tasks. It is recommended that the
number of forks is not raised when executing against the control plane, as
this is where delegation is most often used.
The number of forks used may be changed on a permanent basis by setting
``ANSIBLE_FORKS`` in your ``.bashrc`` file.
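For example, a permanent setting might look like this (a sketch; the value
of 10 is illustrative, and the playbook name in the one-off alternative is
an assumption):
.. code-block:: console
# Permanently raise the fork count for future sessions
$ echo 'export ANSIBLE_FORKS=10' >> ~/.bashrc
# Or raise it for a single run only
$ openstack-ansible setup-infrastructure.yml --forks 10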


@ -5,9 +5,9 @@ With Ansible, the OpenStack installation process is entirely automated
using playbooks written in YAML. After installation, the settings
configured by the playbooks can be changed and modified. Services and
containers can shift to accommodate certain environment requirements.
Scaling services is achieved by adjusting services within containers, or
adding new deployment groups. It is also possible to destroy containers,
if needed, after changes and modifications are complete.
Scale individual services
-------------------------


@ -2,21 +2,19 @@ Firewalls
=========
OpenStack-Ansible does not configure firewalls for its infrastructure. It is
up to the deployer to define the perimeter and its firewall configuration.
By default, OpenStack-Ansible relies on Ansible SSH connections, and needs
TCP port 22 to be opened on all hosts internally.
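For example, a rule permitting that SSH traffic might look like this (a
sketch only; the management network range is an assumption, and the exact
syntax depends on your firewall tooling):
.. code-block:: console
# Allow inbound SSH from the internal management network
# (10.0.3.0/24 is an illustrative range)
$ iptables -A INPUT -p tcp -s 10.0.3.0/24 --dport 22 -j ACCEPT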
For more information on generic OpenStack firewall configuration, see the
`Firewalls and default ports <https://docs.openstack.org/install-guide/firewalls-default-ports.html>`_
In each role's respective documentation you can find the default variables
for the ports used within the scope of the role. Reviewing the documentation
allows you to find the variable names if you want to use a different port.
.. note:: OpenStack-Ansible's group vars conveniently expose the vars outside of the
`role scope <https://opendev.org/openstack/openstack-ansible/src/inventory/group_vars/all/all.yml>`_
@ -26,9 +24,9 @@ if you want to use a different port.
Finding ports for your external load balancer
---------------------------------------------
As explained in the previous section, you can find (in each role's
documentation) the default variables used for the public interface endpoint
ports.
For example, the
`os_glance documentation <https://docs.openstack.org/openstack-ansible-os_glance/latest/#default-variables>`_
@ -37,8 +35,8 @@ the port used for the reaching the service externally. In
this example, it is equal to ``glance_service_port``, whose
value is 9292.
As a hint, you could find the list of all public URI defaults by executing
the following:
.. code::
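# The exact command was elided in this hunk; a plausible search, assuming
# the roles are checked out under /etc/ansible/roles
$ cd /etc/ansible/roles
$ grep -rn service_publicuri */defaults/main.yml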
@ -51,4 +49,3 @@ by executing the following:
can be configured with OpenStack-Ansible.
The automatically generated ``/etc/haproxy/haproxy.cfg`` file has
enough information on the ports to open for your environment.


@ -36,10 +36,10 @@ It should give you information about the status of your cluster.
In this example, only one node responded.
Gracefully shutting down the MariaDB service on all but one node allows the
remaining operational node to continue processing SQL requests. When
gracefully shutting down multiple nodes, perform the actions sequentially to
retain operation.
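For example, with three nodes, a graceful sequential shutdown might look
like this (a sketch; it assumes systemd-managed MariaDB and that ``node1``
is the node kept in service):
.. code-block:: console
# Stop node3 and let it leave the cluster cleanly before stopping
# node2; node1 continues to serve SQL requests
node3$ systemctl stop mariadb
node2$ systemctl stop mariadb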
Start a cluster
---------------


@ -8,7 +8,7 @@ period of time.
Bulk pruning
------------
It is possible to do mass pruning of the inventory backup. The following
example will prune all but the last 15 inventories from the running archive.
.. code-block:: bash
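# The original example was elided in this hunk; a sketch of the idea,
# assuming the backups live in the archive below and that member names
# sort chronologically
ARCHIVE="/etc/openstack_deploy/backup_openstack_inventory.tar"
tar -tf "${ARCHIVE}" | sort | head -n -15 | while read -r FILE; do
    tar --delete -f "${ARCHIVE}" "${FILE}"
done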
@ -22,7 +22,7 @@ example will prune all but the last 15 inventories from the running archive.
Selective Pruning
-----------------
To prune the inventory archive selectively, first identify the files you wish
to remove by listing them out.
.. code-block:: bash
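# List the archived inventories to identify the ones to remove
# (the path is an assumption; the original command was elided here)
tar -tf /etc/openstack_deploy/backup_openstack_inventory.tar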


@ -7,13 +7,13 @@ exchanges, bindings, and runtime parameters. A collection of nodes is often
referred to as a `cluster`. For more information on RabbitMQ clustering, see
`RabbitMQ cluster <https://www.rabbitmq.com/clustering.html>`_.
Within OpenStack-Ansible, all data and states required for operation of the
RabbitMQ cluster are replicated across all nodes, including the message
queues, providing high availability. RabbitMQ nodes address each other using
domain names. The hostnames of all cluster members must be resolvable from
all cluster
nodes, as well as any machines where CLI tools related to RabbitMQ might be
used. There are alternatives that may work in more restrictive environments.
For more details on that setup, see
`Inet Configuration <http://erlang.org/doc/apps/erts/inet_cfg.html>`_.
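You can check how the members currently see each other (and whether the
hostnames resolve as expected) from any node:
.. code-block:: console
# Lists the disc/ram nodes, running nodes, and any partitions
rabbit1$ rabbitmqctl cluster_status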
@ -121,9 +121,9 @@ process by stopping the RabbitMQ application on the third node.
Stop and restart a RabbitMQ cluster
-----------------------------------
To stop and start the cluster, keep in mind the order in which you shut the
nodes down. The last node you stop needs to be the first node you start.
This node is the `master`.
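A minimal sketch of the ordering, assuming three nodes and that ``rabbit1``
is the last node you stop:
.. code-block:: console
# Shut down, leaving rabbit1 for last...
rabbit3$ rabbitmqctl stop_app
rabbit2$ rabbitmqctl stop_app
rabbit1$ rabbitmqctl stop_app
# ...so rabbit1 must come back first
rabbit1$ rabbitmqctl start_app
rabbit2$ rabbitmqctl start_app
rabbit3$ rabbitmqctl start_app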
If you start the nodes out of order, you could run into an issue where
it thinks the current `master` should not be the master and drops the messages
@ -146,8 +146,8 @@ Repair a partitioned RabbitMQ cluster for a single-node
-------------------------------------------------------
Invariably, something in your environment will eventually cause you to lose
a node in your cluster. In this scenario, multiple LXC containers on the
same host are running Rabbit and are in a single Rabbit cluster.
If the host still shows as part of the cluster, but it is not running,
execute:
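The exact commands were elided here; a plausible sequence (assuming
``rabbit1`` is the affected node and ``rabbit2`` is a healthy member) is to
confirm the cluster's view of the node and then restart the RabbitMQ
application on it:
.. code-block:: console
# Check which nodes the cluster reports as running
rabbit2$ rabbitmqctl cluster_status
# Restart the RabbitMQ application on the affected node
rabbit1$ rabbitmqctl start_app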
@ -186,7 +186,8 @@ the failing node.
rabbit1$ rabbitmqctl start_app
Starting node rabbit@rabbit1 ...
Error: inconsistent_cluster: Node rabbit@rabbit1 thinks it's clustered
with node rabbit@rabbit2, but rabbit@rabbit2 disagrees
rabbit1$ rabbitmqctl reset
Resetting node rabbit@rabbit1 ...done.
@ -216,4 +217,3 @@ multi-node cluster are:
bootable again.
Consult the rabbitmqctl manpage for more information.