[docs] Fix lint failures
This patch fixes:

  doc/source/contributor/testing.rst:281: D000 Explicit markup ends without a blank line; unexpected unindent.
  doc/source/user/test/example.rst:28: D001 Line too long
  doc/source/admin/maintenance-tasks.rst:8: D000 Title level inconsistent:
  doc/source/admin/maintenance-tasks.rst:22: D000 Title level inconsistent:
  doc/source/admin/troubleshooting.rst:630: D001 Line too long
  doc/source/admin/troubleshooting.rst:650: D001 Line too long
  doc/source/admin/maintenance-tasks/inventory-backups.rst:11: D001 Line too long

For consistency between maintenance-tasks/ files, they now all have the same
markup hierarchy.

Depends-On: https://review.openstack.org/567804
Change-Id: Id1cf9cb45543daa7c39d5141d8dc5827a76c6413
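The "Title level inconsistent" errors come down to each file using a single underline order. A minimal sketch of the hierarchy the maintenance-tasks/ files now share, inferred from the hunks below rather than stated anywhere in the patch:

    Page title
    ==========

    Section
    -------

    Subsection
    ~~~~~~~~~~

Headings at the top level of a file therefore move from ~~~ to ---, while nested subsections (for example the Galera recovery steps) move the other way.
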
@@ -45,7 +45,7 @@ For more information, see `Inventory <http://docs.ansible.com/ansible/intro_inve
 and `Patterns <http://docs.ansible.com/ansible/intro_patterns.html>`_.

 Running the shell module
-~~~~~~~~~~~~~~~~~~~~~~~~
+------------------------

 The two most common modules used are the ``shell`` and ``copy`` modules. The
 ``shell`` module takes the command name followed by a list of space delimited

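The section above describes the ``shell`` module taking a command followed by space-delimited arguments; a minimal ad-hoc sketch, assuming a ``compute_hosts`` group and a throwaway command rather than anything from this patch:

    # run a one-off command on every host in a group via the shell module
    ansible compute_hosts -m shell -a "df -h"
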
@@ -78,7 +78,7 @@ For more information, see `shell - Execute commands in nodes
 <http://docs.ansible.com/ansible/shell_module.html>`_.

 Running the copy module
-~~~~~~~~~~~~~~~~~~~~~~~
+-----------------------

 The copy module copies a file on a local machine to remote locations. Use the
 fetch module to copy files from remote locations to the local machine. If you

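For the ``copy`` and ``fetch`` behaviour described above, a hedged sketch; the ``compute_hosts`` group and the motd paths are placeholders, and only the nova-compute.log path echoes the output quoted in the next hunk:

    # push a local file out to the remote hosts
    ansible compute_hosts -m copy -a "src=/etc/motd dest=/etc/motd"
    # pull a remote log back to the local machine, one directory per host
    ansible compute_hosts -m fetch -a "src=/var/log/nova/nova-compute.log dest=/tmp"
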
@@ -126,7 +126,7 @@ from a single Compute host:
    -rw-r--r-- 1 root root 2428624 Dec 15 01:23 /tmp/aio1/var/log/nova/nova-compute.log

 Using tags
-~~~~~~~~~~
+----------

 Tags are similar to the limit flag for groups except tags are used to only run
 specific tasks within a playbook. For more information on tags, see

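Tags narrow a playbook run to specific tasks, as the text above says; a rough sketch using the standard ``--list-tags`` and ``--tags`` options, with the playbook name and tag as assumed placeholders:

    # show which tags a playbook exposes
    openstack-ansible setup-hosts.yml --list-tags
    # run only the tasks carrying a given tag
    openstack-ansible setup-hosts.yml --tags my-tag
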
@@ -135,7 +135,7 @@ and `Understanding ansible tags
 <http://www.caphrim.net/ansible/2015/05/24/understanding-ansible-tags.html>`_.

 Ansible forks
-~~~~~~~~~~~~~
+-------------

 The default ``MaxSessions`` setting for the OpenSSH Daemon is 10. Each Ansible
 fork makes use of a session. By default, Ansible sets the number of forks to

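Since each Ansible fork consumes an SSH session against the ``MaxSessions`` limit mentioned above, the fork count can be capped per run or in ``ansible.cfg``; a sketch, with the value 10 chosen only to match the OpenSSH default and the playbook name assumed:

    # limit parallelism for a single run
    openstack-ansible setup-hosts.yml --forks 10

    # or persistently, in ansible.cfg:
    # [defaults]
    # forks = 10
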
@@ -10,7 +10,7 @@ adding new deployment groups. It is also possible to destroy containers
 if needed after changes and modifications are complete.

 Scale individual services
-~~~~~~~~~~~~~~~~~~~~~~~~~
+-------------------------

 Individual OpenStack services, and other open source project services,
 run within containers. It is possible to scale out these services by

@@ -63,7 +63,7 @@ modifying the ``/etc/openstack_deploy/openstack_user_config.yml`` file.
    $ openstack-ansible lxc-containers-create.yml rabbitmq-install.yml

 Destroy and recreate containers
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+-------------------------------

 Resolving some issues may require destroying a container, and rebuilding
 that container from the beginning. It is possible to destroy and

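The destroy-and-recreate flow introduced above pairs naturally with the ``lxc-containers-create.yml`` call shown in this hunk; a loose sketch, where the ``lxc-containers-destroy.yml`` playbook name and the ``--limit`` target are assumptions rather than anything confirmed by this patch:

    # destroy one container, then rebuild it and redeploy its service
    openstack-ansible lxc-containers-destroy.yml --limit rabbit_mq_container
    openstack-ansible lxc-containers-create.yml rabbitmq-install.yml --limit rabbit_mq_container
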
@@ -24,7 +24,7 @@ if you want to use a different port.
 configure your firewall.

 Finding ports for your external load balancer
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+---------------------------------------------

 As explained in the previous section, you can find (in each role
 documentation) the default variables used for the public

@@ -10,7 +10,7 @@ node, when the service is not running, or when changes are made to the
 ``/etc/mysql/my.cnf`` configuration file.

 Verify cluster status
-~~~~~~~~~~~~~~~~~~~~~
+---------------------

 Compare the output of the following command with the following output.
 It should give you information about the status of your cluster.

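The actual status command sits outside this hunk; as a hedged stand-in for the kind of check the section compares output against, assuming a ``galera_container`` host group:

    # query Galera replication status on each cluster member
    ansible galera_container -m shell -a "mysql -e 'SHOW STATUS LIKE \"wsrep_cluster_%\";'"
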
@@ -42,7 +42,7 @@ processing SQL requests. When gracefully shutting down multiple nodes,
 perform the actions sequentially to retain operation.

 Start a cluster
-~~~~~~~~~~~~~~~
+---------------

 Gracefully shutting down all nodes destroys the cluster. Starting or
 restarting a cluster from zero nodes requires creating a new cluster on

@@ -147,7 +147,7 @@ one of the nodes.
 .. _galera-cluster-recovery:

 Galera cluster recovery
-~~~~~~~~~~~~~~~~~~~~~~~
+-----------------------

 Run the ``galera-install`` playbook using the ``galera-bootstrap`` tag
 to automatically recover a node or an entire environment.

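Following the ``openstack-ansible <playbook>`` invocation style used elsewhere in this diff, the tagged recovery run described above would look roughly like this; the exact command block lives just below this hunk and is not reproduced here:

    # recover a node or the whole environment via the tagged tasks
    openstack-ansible galera-install.yml --tags galera-bootstrap
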
@@ -161,7 +161,7 @@ to automatically recover a node or an entire environment.
 The cluster comes back online after completion of this command.

 Recover a single-node failure
------------------------------
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

 If a single node fails, the other nodes maintain quorum and
 continue to process SQL requests.

@@ -202,7 +202,7 @@ continue to process SQL requests.
 for the node.

 Recover a multi-node failure
-----------------------------
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~

 When all but one node fails, the remaining node cannot achieve quorum and
 stops processing SQL requests. In this situation, failed nodes that

@@ -290,7 +290,7 @@ recover cannot join the cluster because it no longer exists.
 last resort, rebuild the container for the node.

 Recover a complete environment failure
---------------------------------------
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

 Restore from backup if all of the nodes in a Galera cluster fail (do not
 shutdown gracefully). Change to the ``playbook`` directory and run the

@@ -332,7 +332,7 @@ restart the cluster using the ``--wsrep-new-cluster`` command on one
 node.

 Rebuild a container
--------------------
+~~~~~~~~~~~~~~~~~~~

 Recovering from certain failures require rebuilding one or more containers.


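The ``--wsrep-new-cluster`` step named in the hunk header above varies by init system and Galera packaging; purely as an assumed illustration, not the command from these docs:

    # on the chosen bootstrap node only
    /etc/init.d/mysql start --wsrep-new-cluster
    # newer systemd-based MariaDB installs ship a helper instead:
    # galera_new_cluster
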
@@ -1,41 +1,41 @@
 Prune Inventory Backup Archive
 ==============================

-The inventory backup archive will require maintenance over a long enough period
-of time.
+The inventory backup archive will require maintenance over a long enough
+period of time.


 Bulk pruning
 ------------

-It's possible to do mass pruning of the inventory backup. The following example will
-prune all but the last 15 inventories from the running archive.
+It's possible to do mass pruning of the inventory backup. The following
+example will prune all but the last 15 inventories from the running archive.

 .. code-block:: bash

    ARCHIVE="/etc/openstack_deploy/backup_openstack_inventory.tar"
    tar -tvf ${ARCHIVE} | \
    head -n -15 | awk '{print $6}' | \
    xargs -n 1 tar -vf ${ARCHIVE} --delete


 Selective Pruning
 -----------------

-To prune the inventory archive selectively first identify the files you wish to
-remove by listing them out.
+To prune the inventory archive selectively first identify the files you wish
+to remove by listing them out.

 .. code-block:: bash

    tar -tvf /etc/openstack_deploy/backup_openstack_inventory.tar

    -rw-r--r-- root/root 110096 2018-05-03 10:11 openstack_inventory.json-20180503_151147.json
    -rw-r--r-- root/root 110090 2018-05-03 10:11 openstack_inventory.json-20180503_151205.json
    -rw-r--r-- root/root 110098 2018-05-03 10:12 openstack_inventory.json-20180503_151217.json


 Now delete the targeted inventory archive.

 .. code-block:: bash

    tar -vf /etc/openstack_deploy/backup_openstack_inventory.tar --delete openstack_inventory.json-20180503_151205.json

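A small follow-up check after either pruning approach, not part of the patch, just standard tar usage:

    # confirm how many timestamped inventories remain in the archive
    tar -tvf /etc/openstack_deploy/backup_openstack_inventory.tar | wc -l
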
@@ -27,7 +27,7 @@ restrictive environments. For more details on that setup, see
 be released in Ansible version 2.3.

 Create a RabbitMQ cluster
-~~~~~~~~~~~~~~~~~~~~~~~~~
+-------------------------

 RabbitMQ clusters can be formed in two ways:


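The two formation methods themselves sit below this hunk; as a hedged sketch of the manual path using stock ``rabbitmqctl`` commands, with the ``rabbit@rabbit1`` node name borrowed from the output quoted later in this diff:

    # on the joining node: stop the app, join the first node, restart the app
    rabbitmqctl stop_app
    rabbitmqctl join_cluster rabbit@rabbit1
    rabbitmqctl start_app
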
@@ -86,7 +86,7 @@ cluster of the first node.
    Starting node rabbit@rabbit2 ...done.

 Check the RabbitMQ cluster status
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+---------------------------------

 #. Run ``rabbitmqctl cluster_status`` from either node.


@@ -119,7 +119,7 @@ process by stopping the rabbitmq application on the third node.
    ...done.

 Stop and restart a RabbitMQ cluster
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+-----------------------------------

 To stop and start the cluster, keep in mind the order in
 which you shut the nodes down. The last node you stop, needs to be the

@@ -130,7 +130,7 @@ it thinks the current `master` should not be the master and drops the messages
 to ensure that no new messages are queued while the real master is down.

 RabbitMQ and mnesia
-~~~~~~~~~~~~~~~~~~~
+-------------------

 Mnesia is a distributed database that RabbitMQ uses to store information about
 users, exchanges, queues, and bindings. Messages, however

@@ -143,7 +143,7 @@ To view the locations of important Rabbit files, see
 `File Locations <https://www.rabbitmq.com/relocate.html>`_.

 Repair a partitioned RabbitMQ cluster for a single-node
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+-------------------------------------------------------

 Invariably due to something in your environment, you are likely to lose a
 node in your cluster. In this scenario, multiple LXC containers on the same host

@@ -195,7 +195,7 @@ the failing node.
    ...done.

 Repair a partitioned RabbitMQ cluster for a multi-node cluster
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+--------------------------------------------------------------

 The same concepts apply to a multi-node cluster that exist in a single-node
 cluster. The only difference is that the various nodes will actually be

@@ -627,11 +627,11 @@ containers.
 Restoring inventory from backup
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

-OpenStack-Ansible maintains a running archive of inventory. If a change has been
-introduced into the system that has broken inventory or otherwise has caused an
-unforseen issue, the inventory can be reverted to an early version. The backup
-file ``/etc/openstack_deploy/backup_openstack_inventory.tar`` contains a set of
-timestamped inventories that can be restored as needed.
+OpenStack-Ansible maintains a running archive of inventory. If a change has
+been introduced into the system that has broken inventory or otherwise has
+caused an unforseen issue, the inventory can be reverted to an early version.
+The backup file ``/etc/openstack_deploy/backup_openstack_inventory.tar``
+contains a set of timestamped inventories that can be restored as needed.

 Example inventory restore process.


@@ -647,5 +647,5 @@ Example inventory restore process.
    rm -rf /tmp/inventory_restore


-At the completion of this operation the inventory will be restored to the ealier
-version.
+At the completion of this operation the inventory will be restored to the
+earlier version.

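The restore steps themselves sit above this hunk; a sketch of the process the text describes, with the timestamped filename assumed for illustration:

    # unpack the archive, put the chosen inventory back, clean up
    mkdir -p /tmp/inventory_restore
    tar -xf /etc/openstack_deploy/backup_openstack_inventory.tar -C /tmp/inventory_restore
    cp /tmp/inventory_restore/openstack_inventory.json-20180503_151147.json \
       /etc/openstack_deploy/openstack_inventory.json
    rm -rf /tmp/inventory_restore
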
@@ -278,6 +278,7 @@ Testing a new role with an AIO
 deployment requirements (secrets and var files, HAProxy yml fragments,
 repo_package files, etc.) in their own files it makes it easy for you to
 automate these additional steps when testing your role.
+
 Integrated repo functional or scenario testing
 ----------------------------------------------


@@ -25,10 +25,11 @@ Network configuration
 Switch port configuration
 -------------------------

-The following example provides a good reference for switch configuration and cab
-layout. This example may be more that what is required for basic setups however
-it can be adjusted to just about any configuration. Additionally you will need
-to adjust the VLANS noted within this example to match your environment.
+The following example provides a good reference for switch configuration and
+cab layout. This example may be more that what is required for basic setups
+however it can be adjusted to just about any configuration. Additionally you
+will need to adjust the VLANS noted within this example to match your
+environment.

 .. image:: ../figures/example-switchport-config-and-cabling.png
    :width: 100%