docs: Remove all the unnecessary blockquotes

This change removes all unnecessary blockquotes in the generated
HTML documentation. They were caused by too much whitespace before
lists or code blocks. Those blockquotes are easy to grep for:

    $ grep -n -R "<blockquote>" --include "*.html"
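As a minimal, self-contained illustration of that check (the directory and HTML file below are made up for the demo), the same grep flags any generated page that still contains a blockquote:

```shell
# Illustrative only: build a tiny fake "generated HTML" tree, then run
# the grep suggested above. Paths are hypothetical.
mkdir -p /tmp/gen_html
printf '<p>fine</p>\n<blockquote>stray</blockquote>\n' > /tmp/gen_html/page.html
grep -n -R "<blockquote>" --include "*.html" /tmp/gen_html
```

The match is reported as file, line number, and the offending markup, which makes the stray blockquotes easy to trace back to their RST sources.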

There is one blockquote left (bug-triage.rst), which is indeed a
paragraph that is meant to be quoted.

Change-Id: Ib84ee3fb95caf2e9a545e1d89f22faf63304ad43
Markus Zoeller 2017-05-31 13:18:24 +02:00
parent 215bb56fb3
commit a8aba563c7
12 changed files with 150 additions and 150 deletions


@ -260,32 +260,32 @@ using the YAML dictionary format.
Example YAML dictionary format:
.. code-block:: yaml

    - name: The name of the tasks
      module_name:
        thing1: "some-stuff"
        thing2: "some-other-stuff"
      tags:
        - some-tag
        - some-other-tag
Example of what **NOT** to do:
.. code-block:: yaml

    - name: The name of the tasks
      module_name: thing1="some-stuff" thing2="some-other-stuff"
      tags: some-tag

.. code-block:: yaml

    - name: The name of the tasks
      module_name: >
        thing1="some-stuff"
        thing2="some-other-stuff"
      tags: some-tag
Usage of the ">" and "|" operators should be limited to Ansible conditionals


@ -49,13 +49,13 @@ any previous contents in the event of conflicts.
The following file must be present in the configuration directory:
* ``openstack_user_config.yml``
Additionally, the configuration or environment could be spread between two
additional sub-directories:
* ``conf.d``
* ``env.d`` (for environment customization)
The dynamic inventory script does the following:


@ -45,8 +45,8 @@ Group memberships
When adding groups, keep the following in mind:
* A group can contain hosts
* A group can contain child groups
However, a group cannot contain both child groups and hosts.


@ -41,13 +41,13 @@ any previous contents in the event of conflicts.
The following file must be present in the configuration directory:
* ``openstack_user_config.yml``
Additionally, the configuration or environment could be spread between two
additional sub-directories:
* ``conf.d``
* ``env.d`` (for environment customization)
The dynamic inventory script does the following:


@ -152,12 +152,12 @@ removed, prior to the removal of the OpenStack-Ansible configuration.
following playbooks:
.. code-block:: console

    # openstack-ansible lxc-containers-create.yml --limit infra01:infra01-host_containers
    # openstack-ansible lxc-containers-create.yml --limit infra02:infra02-host_containers
    # openstack-ansible lxc-containers-create.yml --limit infra03:infra03-host_containers
    # openstack-ansible os-neutron-install.yml --tags neutron-config
Move from Open vSwitch to LinuxBridge and vice versa


@ -177,13 +177,13 @@ Recover a compute host failure
The following procedure addresses Compute node failure if shared storage
is used.
.. note::

    If shared storage is not used, data can be copied from the
    ``/var/lib/nova/instances`` directory on the failed Compute node
    ``${FAILED_NODE}`` to another node ``${RECEIVING_NODE}``\ before
    performing the following procedure. Please note this method is
    not supported.
#. Re-launch all instances on the failed node.


@ -10,74 +10,74 @@ Log in to any utility container to run the following commands:
List images
~~~~~~~~~~~
The :command:`openstack image list` command shows details about currently
available images:
.. code::

    $ openstack image list
    +------------------+--------------+--------+
    | ID               | Name         | Status |
    +------------------+--------------+--------+
    | [ID truncated]   | ExampleImage | active |
    +------------------+--------------+--------+
List compute services
~~~~~~~~~~~~~~~~~~~~~
The :command:`openstack compute service list` command details the currently
running compute services:
.. code::

    $ openstack compute service list
    +------------------+------------+----------+---------+-------+----------------------------+
    | Binary           | Host       | Zone     | Status  | State | Updated_at                 |
    +------------------+------------+----------+---------+-------+----------------------------+
    | nova-consoleauth | controller | internal | enabled | up    | 2017-02-21T20:25:17.000000 |
    | nova-scheduler   | controller | internal | enabled | up    | 2017-02-21T20:25:18.000000 |
    | nova-conductor   | controller | internal | enabled | up    | 2017-02-21T20:25:20.000000 |
    | nova-compute     | compute    | nova     | enabled | up    | 2017-02-21T20:25:20.000000 |
    +------------------+------------+----------+---------+-------+----------------------------+
List flavors
~~~~~~~~~~~~
The **openstack flavor list** command lists the *flavors* that are
available. These are different disk sizes that can be assigned to
images:
.. code::

    $ openstack flavor list
    +-----+-----------+-------+------+-----------+-------+-----------+
    | ID  | Name      | RAM   | Disk | Ephemeral | VCPUs | Is Public |
    +-----+-----------+-------+------+-----------+-------+-----------+
    | 1   | m1.tiny   | 512   | 1    | 0         | 1     | True      |
    | 2   | m1.small  | 2048  | 20   | 0         | 1     | True      |
    | 3   | m1.medium | 4096  | 40   | 0         | 2     | True      |
    | 4   | m1.large  | 8192  | 80   | 0         | 4     | True      |
    | 5   | m1.xlarge | 16384 | 160  | 0         | 8     | True      |
    +-----+-----------+-------+------+-----------+-------+-----------+
List floating IP addresses
~~~~~~~~~~~~~~~~~~~~~~~~~~
The **openstack floating ip list** command lists the currently
available floating IP addresses and the instances they are
associated with:
.. code::

    $ openstack floating ip list
    +------------------+------------------+---------------------+-------------+
    | id               | fixed_ip_address | floating_ip_address | port_id     |
    +------------------+------------------+---------------------+-------------+
    | 0a88589a-ffac... |                  | 208.113.177.100     |             |
    +------------------+------------------+---------------------+-------------+
For more information about OpenStack client utilities, see these links:


@ -23,11 +23,11 @@ In order to add an image using the Dashboard, prepare an image binary
file, which must be accessible over HTTP using a valid and direct URL.
Images can be compressed using ``.zip`` or ``.tar.gz``.
.. note::

    Uploading images using the Dashboard will be available to users
    with administrator privileges. Operators can set user access
    privileges.
#. Log in to the Dashboard.


@ -203,9 +203,9 @@ The **Actions** column includes the following options:
- Soft or hard reset the instance
.. note::

    Terminate the instance under the **Actions** column.
Managing volumes for persistent storage
@ -222,9 +222,9 @@ Nova instances live migration
Nova is capable of live migrating instances from one host to
a different host to support various operational tasks, including:
* Host maintenance
* Host capacity management
* Resizing and moving instances to better hardware
Nova configuration drive implication
@ -271,9 +271,9 @@ URL for how to transfer the data from one host to the other.
Depending on the ``nova_virt_type`` override the following configurations
are used:
* kvm defaults to ``qemu+tcp://%s/system``
* qemu defaults to ``qemu+tcp://%s/system``
* xen defaults to ``xenmigr://%s/system``
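The mapping above can be sketched as a small shell case statement. This is illustrative only; the variable names here are made up and are not actual OpenStack-Ansible variables:

```shell
# Sketch: choose the libvirt live-migration URI template from the
# virt type, following the defaults listed above. Names are illustrative.
nova_virt_type="kvm"
case "$nova_virt_type" in
  kvm|qemu) live_migration_uri="qemu+tcp://%s/system" ;;
  xen)      live_migration_uri="xenmigr://%s/system" ;;
  *)        echo "unknown virt type: $nova_virt_type" >&2; exit 1 ;;
esac
echo "$live_migration_uri"
```

Note that kvm and qemu share the same template, so they collapse into one case branch.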
Libvirt TCP port to transfer the data to migrate.
@ -365,22 +365,22 @@ The following nova client commands are provided:
* ``host-evacuate-live``
  Live migrate all instances of the specified host
  to other hosts if resource utilization allows.
  It is best to use shared storage like Ceph or NFS
  for host evacuation.
* ``host-servers-migrate``
  This command is similar to host evacuation but
  migrates all instances off the specified host while
  they are shut down.
* ``resize``
  Changes the flavor of a Nova instance (increase) while rebooting
  and also cold migrates the instance to a new host to accommodate
  the new resource requirements. This operation can take a
  considerable amount of time, depending on disk image sizes.


@ -11,19 +11,19 @@ Load-Balancer-as-a-Service (LBaaS)
Understand the following characteristics of the OpenStack-Ansible LBaaS
technical preview:
* The preview release is not intended to provide highly scalable or
  highly available load balancing services.
* Testing and recommended usage is limited to 10 members in a pool
  and no more than 20 pools.
* Virtual load balancers deployed as part of the LBaaS service are
  not monitored for availability or performance.
* OpenStack-Ansible enables LBaaS v2 with the default HAProxy-based agent.
* The Octavia agent is not supported.
* Integration with physical load balancer devices is not supported.
* Customers can use API or CLI LBaaS interfaces.
* The Dashboard offers a panel for creating and managing LBaaS load balancers,
  listeners, pools, members, and health checks.
* SDN integration is not supported.
Since Mitaka, you can `enable Dashboard (horizon) panels`_ for LBaaS.
Additionally, a deployer can specify a list of servers behind a


@ -507,9 +507,9 @@ Cached Ansible facts issues
At the beginning of a playbook run, information about each host is gathered,
such as:
* Linux distribution
* Kernel version
* Network interfaces
To improve performance, particularly in large deployments, you can
cache host facts and information.
@ -572,9 +572,9 @@ Predictable interface naming
On the host, all virtual Ethernet devices are named based on their
container as well as the name of the interface inside the container:
.. code-block:: shell-session

    ${CONTAINER_UNIQUE_ID}_${NETWORK_DEVICE_NAME}
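The naming scheme above can be sketched in shell, assuming the unique ID is the suffix after the final dash in the container name (the container name used here is the AIO example from this section):

```shell
# Sketch of the host-side veth naming scheme described above.
# Assumes the unique ID is everything after the last dash in the
# container name; the names are the example ones from the docs.
container_name="aio1_utility_container-d13b7132"
unique_id="${container_name##*-}"   # strip up to the last dash
device="eth0"
echo "${unique_id}_${device}"       # the host-side interface name
```

Running this prints the host-side name for the container's ``eth0`` device.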
As an example, an all-in-one (AIO) build might provide a utility
container called `aio1_utility_container-d13b7132`. That container
@ -583,27 +583,27 @@ will have two network interfaces: `d13b7132_eth0` and `d13b7132_eth1`.
Another option would be to use the LXC tools to retrieve information
about the utility container. For example:
.. code-block:: shell-session

    # lxc-info -n aio1_utility_container-d13b7132
    Name: aio1_utility_container-d13b7132
    State: RUNNING
    PID: 8245
    IP: 10.0.3.201
    IP: 172.29.237.204
    CPU use: 79.18 seconds
    BlkIO use: 678.26 MiB
    Memory use: 613.33 MiB
    KMem use: 0 bytes
    Link: d13b7132_eth0
    TX bytes: 743.48 KiB
    RX bytes: 88.78 MiB
    Total bytes: 89.51 MiB
    Link: d13b7132_eth1
    TX bytes: 412.42 KiB
    RX bytes: 17.32 MiB
    Total bytes: 17.73 MiB
The ``Link:`` lines will show the network interfaces that are attached
to the utility container.
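As a small illustration, the interface names can be pulled out of those ``Link:`` lines with a one-line filter. The sample text is a trimmed copy of the ``lxc-info`` output shown above:

```shell
# Hypothetical helper: extract attached interface names from lxc-info
# output. The sample below is trimmed from the example in this section.
sample='Name: aio1_utility_container-d13b7132
Link: d13b7132_eth0
Link: d13b7132_eth1'
printf '%s\n' "$sample" | awk '/^Link:/ {print $2}'
```

In practice the same filter could be fed directly from ``lxc-info -n <container>``.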


@ -78,21 +78,21 @@ component playbooks against groups.
For example, you can update only the Compute hosts by running the following
command:
.. code-block:: console

    # openstack-ansible os-nova-install.yml --limit nova_compute
To update only a single Compute host, run the following command:
.. code-block:: console

    # openstack-ansible os-nova-install.yml --limit <node-name> \
      --skip-tags 'nova-key'
.. note::

    Skipping the ``nova-key`` tag is necessary so that the keys on
    all Compute hosts are not gathered.
To see which hosts belong to which groups, use the ``inventory-manage.py``
script to show all groups and their hosts. For example: