Install Guide Cleanup

This commit does the following:

- sets all shell prompts in code-blocks to the root prompt
- uses shell-session code-block since the shell prompt was being
  treated as a comment
- links configure-aodh.rst in configure.rst (running tox was
  complaining that this file wasn't being linked anywhere)
- other minor cleanup

Change-Id: I9e3ac8bb0cabd1cc17952cfd765dbb0d8f7b6fa2
Matt Thompson 2015-10-20 12:07:04 +01:00 committed by Jesse Pretorius
parent f7bf93f6db
commit 06708191d6
43 changed files with 330 additions and 307 deletions

View File

@ -15,87 +15,125 @@ In order to facilitate this extra options may be passed to the python package
installer to reinstall based on whatever version of the package is available
in the repository. This is done by executing, for example:
.. code-block:: shell-session
# openstack-ansible -e pip_install_options="--force-reinstall" \
setup-openstack.yml
A minor upgrade will typically require the execution of the following:
1. Change directory into the repository clone root directory
.. code-block:: shell-session
# cd /opt/openstack-ansible
2. Update the git remotes
.. code-block:: shell-session
# git fetch --all
3. Checkout the latest tag (the below tag is an example)
.. code-block:: shell-session
# git checkout 12.0.1
4. Change into the playbooks directory
.. code-block:: shell-session
# cd playbooks
5. Build the updated repository
.. code-block:: shell-session
# openstack-ansible repo-install.yml
6. Update RabbitMQ
.. code-block:: shell-session
# openstack-ansible -e rabbitmq_upgrade=true \
rabbitmq-install.yml
7. Update the Utility Container
.. code-block:: shell-session
# openstack-ansible -e pip_install_options="--force-reinstall" \
utility-install.yml
8. Update all OpenStack Services
.. code-block:: shell-session
# openstack-ansible -e pip_install_options="--force-reinstall" \
setup-openstack.yml
Note that if you wish to scope the upgrades to specific OpenStack components
then each of the component playbooks may be executed and scoped using groups.
For example:
1. Update only the Compute Hosts
.. code-block:: shell-session
# openstack-ansible -e pip_install_options="--force-reinstall" \
os-nova-install.yml --limit nova_compute
2. Update only a single Compute Host (skipping the 'nova-key' tag is necessary as the keys on all compute hosts will not be gathered)
.. code-block:: shell-session
# openstack-ansible -e pip_install_options="--force-reinstall" \
os-nova-install.yml --limit <node-name> --skip-tags 'nova-key'
If you wish to see which hosts belong to which groups, the
``inventory-manage.py`` script will show all groups and their hosts.
For example:
1. Change directory into the repository clone root directory
.. code-block:: shell-session
# cd /opt/openstack-ansible
2. Show all groups and which hosts belong to them
.. code-block:: shell-session
# ./scripts/inventory-manage.py -G
3. Show all hosts and which groups they belong to
.. code-block:: shell-session
# ./scripts/inventory-manage.py -g
You may also see which hosts a playbook will execute against, and which tasks
will be executed:
1. Change directory into the repository clone playbooks directory
.. code-block:: shell-session
# cd /opt/openstack-ansible/playbooks
2. See the hosts in the nova_compute group which a playbook will execute against
.. code-block:: shell-session
# openstack-ansible os-nova-install.yml --limit nova_compute \
--list-hosts
3. See the tasks which will be executed on hosts in the nova_compute group
.. code-block:: shell-session
# openstack-ansible os-nova-install.yml --limit nova_compute \
--skip-tags 'nova-key' \
--list-tasks

View File

@ -16,9 +16,9 @@ entry in ``ansible.cfg``, or for a particular playbook execution by using the
``--forks`` CLI parameter. For example, to execute the
``os-keystone-install.yml`` playbook using 10 forks:
.. code-block:: shell-session
# openstack-ansible --forks 10 os-keystone-install.yml
.. _forks: http://docs.ansible.com/ansible/intro_configuration.html#forks

View File

@ -11,7 +11,7 @@ The Alarming services of the Telemetry perform the following functions:
Aodh on OSA requires a MongoDB backend to be configured prior to running the aodh
playbooks. The connection data will then need to be given in the ``user_variables.yml``
file (see section `Configuring the user data`_ below).
Setting up a Mongodb database for Aodh
@ -19,31 +19,31 @@ Setting up a Mongodb database for Aodh
1. Install the MongoDB package:
.. code-block:: shell-session
# apt-get install mongodb-server mongodb-clients python-pymongo
2. Edit the ``/etc/mongodb.conf`` file and change the ``bind_ip`` to the management interface of the node you're running this on.
.. code-block:: shell-session
bind_ip = 10.0.0.11
3. Edit the ``/etc/mongodb.conf`` file and enable smallfiles
.. code-block:: shell-session
smallfiles = true
4. Restart the mongodb service
.. code-block:: shell-session
# service mongodb restart
5. Create the aodh database
.. code-block:: shell-session
# mongo --host controller --eval '
db = db.getSiblingDB("aodh");
@ -51,6 +51,10 @@ Setting up a Mongodb database for Aodh
pwd: "AODH_DBPASS", pwd: "AODH_DBPASS",
roles: [ "readWrite", "dbAdmin" ]})' roles: [ "readWrite", "dbAdmin" ]})'
This should return:
.. code-block:: shell-session
MongoDB shell version: 2.4.x
connecting to: controller:27017/test
{
@ -63,7 +67,7 @@ Setting up a Mongodb database for Aodh
"_id" : ObjectId("5489c22270d7fad1ba631dc3") "_id" : ObjectId("5489c22270d7fad1ba631dc3")
} }
NOTE: The ``AODH_DBPASS`` must match the ``aodh_container_db_password`` in the ``/etc/openstack_deploy/user_secrets.yml`` file. This is how ansible knows how to configure the connection string within the aodh configuration files. NOTE: The ``AODH_DBPASS`` must match the ``aodh_container_db_password`` in the ``/etc/openstack_deploy/user_secrets.yml`` file. This is how ansible knows how to configure the connection string within the aodh configuration files.
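For illustration, a minimal sketch of the matching entry in ``/etc/openstack_deploy/user_secrets.yml`` (the value shown is a placeholder; in practice it should be a generated secret):
.. code-block:: yaml

   # Must match the AODH_DBPASS used when creating the MongoDB user above.
   aodh_container_db_password: AODH_DBPASS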
Configuring the hosts
#####################

View File

@ -19,7 +19,7 @@ The Telemetry module(Ceilometer) performs the following functions:
Ceilometer on OSA requires a MongoDB backend to be configured prior to running the ceilometer
playbooks. The connection data will then need to be given in the ``user_variables.yml``
file (see section `Configuring the user data`_ below).
Setting up a Mongodb database for ceilometer
@ -27,31 +27,31 @@ Setting up a Mongodb database for ceilometer
1. Install the MongoDB package:
.. code-block:: shell-session
# apt-get install mongodb-server mongodb-clients python-pymongo
2. Edit the ``/etc/mongodb.conf`` file and change the ``bind_ip`` to the management interface of the node you're running this on.
.. code-block:: shell-session
bind_ip = 10.0.0.11
3. Edit the ``/etc/mongodb.conf`` file and enable smallfiles
.. code-block:: shell-session
smallfiles = true
4. Restart the mongodb service
.. code-block:: shell-session
# service mongodb restart
5. Create the ceilometer database
.. code-block:: shell-session
# mongo --host controller --eval '
db = db.getSiblingDB("ceilometer");
@ -59,6 +59,10 @@ Setting up a Mongodb database for ceilometer
pwd: "CEILOMETER_DBPASS", pwd: "CEILOMETER_DBPASS",
roles: [ "readWrite", "dbAdmin" ]})' roles: [ "readWrite", "dbAdmin" ]})'
This should return:
.. code-block:: shell-session
MongoDB shell version: 2.4.x
connecting to: controller:27017/test
{
@ -71,7 +75,7 @@ Setting up a Mongodb database for ceilometer
"_id" : ObjectId("5489c22270d7fad1ba631dc3") "_id" : ObjectId("5489c22270d7fad1ba631dc3")
} }
NOTE: The ``CEILOMETER_DBPASS`` must match the ``ceilometer_container_db_password`` in the ``/etc/openstack_deploy/user_secrets.yml`` file. This is how ansible knows how to configure the connection string within the ceilometer configuration files. NOTE: The ``CEILOMETER_DBPASS`` must match the ``ceilometer_container_db_password`` in the ``/etc/openstack_deploy/user_secrets.yml`` file. This is how ansible knows how to configure the connection string within the ceilometer configuration files.
Configuring the hosts Configuring the hosts
##################### #####################

View File

@ -16,8 +16,8 @@ availability zones.
cinder_storage_availability_zone: CINDERAZ
Replace ``CINDERAZ`` with a suitable name. For example,
``cinderAZ_2``.
#. If more than one availability zone is created, configure the default
availability zone for all the hosts by creating a
@ -28,8 +28,8 @@ availability zones.
cinder_default_availability_zone: CINDERAZ_DEFAULT
Replace ``CINDERAZ_DEFAULT`` with a suitable name. For example,
``cinderAZ_1``. The default availability zone should be the same
for all cinder hosts.
If the ``cinder_default_availability_zone`` is not defined, the

View File

@ -14,7 +14,7 @@ back up to an external Object Storage installation.
``/etc/openstack_deploy/user_variables.yml`` file and set the value
to ``True``:
.. code-block:: yaml
cinder_service_backup_program_enabled: True
@ -27,7 +27,7 @@ back up to an external Object Storage installation.
following variables to the
``/etc/openstack_deploy/user_variables.yml`` file:
.. code-block:: yaml
...
cinder_service_backup_swift_auth: per_user
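Taken together, a minimal ``user_variables.yml`` sketch combining the two options discussed above might look as follows (only the example values from this section are used):
.. code-block:: yaml

   # Enable the cinder backup service.
   cinder_service_backup_program_enabled: True
   # Authenticate to the external Object Storage with each user's own credentials.
   cinder_service_backup_swift_auth: per_user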

View File

@ -7,8 +7,8 @@ If the NetApp back end is configured to use an NFS storage protocol,
edit ``/etc/openstack_deploy/openstack_user_config.yml``, and configure
the NFS client on each storage node that will use it.
#. Add the ``cinder_backends`` stanza (which includes
``cinder_nfs_client``) under the ``container_vars`` stanza for
each storage node:
.. code-block:: yaml
@ -19,13 +19,13 @@ the NFS client on each storage node that will use it.
#. Configure the location of the file that lists shares available to the
block storage service. This configuration file must include
``nfs_shares_config``:
.. code-block:: yaml
nfs_shares_config: SHARE_CONFIG
Replace ``SHARE_CONFIG`` with the location of the share
configuration file. For example, ``/etc/cinder/nfs_shares``.
#. Configure one or more NFS shares:
@ -35,8 +35,8 @@ the NFS client on each storage node that will use it.
shares:
- { ip: "NFS_HOST", share: "NFS_SHARE" }
Replace ``NFS_HOST`` with the IP address or hostname of the NFS
server, and the ``NFS_SHARE`` with the absolute path to an existing
and accessible NFS share.
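As a rough consolidated sketch of the steps above (the storage host name, IP address, and share details are illustrative placeholders, and the exact nesting can vary between releases), the ``openstack_user_config.yml`` entry might resemble:
.. code-block:: yaml

   storage_hosts:
     storage1:                      # hypothetical storage node
       ip: 172.29.236.121
       container_vars:
         cinder_backends:
           cinder_nfs_client:
             nfs_shares_config: /etc/cinder/nfs_shares
             shares:
               - { ip: "NFS_HOST", share: "NFS_SHARE" }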
--------------

View File

@ -43,7 +43,7 @@ Ensure that the NAS Team enables httpd.admin.access.
netapp_storage_family: STORAGE_FAMILY
Replace ``STORAGE_FAMILY`` with ``ontap_7mode`` for Data ONTAP
operating in 7-mode or ``ontap_cluster`` for Data ONTAP operating as
a cluster.
@ -53,7 +53,7 @@ Ensure that the NAS Team enables httpd.admin.access.
netapp_storage_protocol: STORAGE_PROTOCOL
Replace ``STORAGE_PROTOCOL`` with ``iscsi`` for iSCSI or ``nfs``
for NFS.
For the NFS protocol, you must also specify the location of the
@ -64,7 +64,7 @@ Ensure that the NAS Team enables httpd.admin.access.
nfs_shares_config: SHARE_CONFIG
Replace ``SHARE_CONFIG`` with the location of the share
configuration file. For example, ``/etc/cinder/nfs_shares``.
#. Configure the server:
@ -73,7 +73,7 @@ Ensure that the NAS Team enables httpd.admin.access.
netapp_server_hostname: SERVER_HOSTNAME
Replace ``SERVER_HOSTNAME`` with the hostnames for both netapp
controllers.
#. Configure the server API port:
@ -82,7 +82,7 @@ Ensure that the NAS Team enables httpd.admin.access.
netapp_server_port: PORT_NUMBER
Replace ``PORT_NUMBER`` with 80 for HTTP or 443 for HTTPS.
#. Configure the server credentials:
@ -91,7 +91,7 @@ Ensure that the NAS Team enables httpd.admin.access.
netapp_login: USER_NAME
netapp_password: PASSWORD
Replace ``USER_NAME`` and ``PASSWORD`` with the appropriate
values.
#. Select the NetApp driver:
@ -106,7 +106,7 @@ Ensure that the NAS Team enables httpd.admin.access.
volume_backend_name: BACKEND_NAME
Replace ``BACKEND_NAME`` with a suitable value that provides a hint
for the Block Storage scheduler. For example, ``NETAPP_iSCSI``.
#. Check that the ``openstack_user_config.yml`` configuration is
@ -130,10 +130,10 @@ Ensure that the NAS Team enables httpd.admin.access.
volume_driver: cinder.volume.drivers.netapp.common.NetAppDriver
volume_backend_name: NETAPP_NFS
For ``netapp_server_hostname``, specify the IP address of the Data
ONTAP server. Include iSCSI or NFS for the
``netapp_storage_family`` depending on the configuration. Add 80 if
using HTTP or 443 if using HTTPS for ``netapp_server_port``.
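Pulled together, the NetApp backend settings described above might look roughly like the following sketch; every value is a placeholder to be replaced as explained in the preceding steps, and the backend label ``netapp`` is only an example:
.. code-block:: yaml

   cinder_backends:
     netapp:
       netapp_storage_family: ontap_cluster        # or ontap_7mode
       netapp_storage_protocol: nfs                # or iscsi
       nfs_shares_config: /etc/cinder/nfs_shares   # NFS protocol only
       netapp_server_hostname: SERVER_HOSTNAME
       netapp_server_port: 443                     # 80 for HTTP, 443 for HTTPS
       netapp_login: USER_NAME
       netapp_password: PASSWORD
       volume_driver: cinder.volume.drivers.netapp.common.NetAppDriver
       volume_backend_name: NETAPP_NFS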
The ``cinder-volume.yml`` playbook will automatically install the
``nfs-common`` file across the hosts, transitioning from an LVM to a

View File

@ -16,9 +16,9 @@ Here are a few steps to execute before running any playbook:
#. Run your command with syntax-check, for example,
in the playbooks directory:
.. code-block:: shell-session
# openstack-ansible setup-infrastructure.yml --syntax-check
#. Recheck that all indentation seems correct: the syntax of the
configuration files can be correct while not being meaningful

View File

@ -23,11 +23,10 @@ interfaces:
Recommended: Use the ``pw-token-gen.py`` script to generate random
values for the variables in each file that contains service credentials:
.. code-block:: shell-session
# cd /opt/openstack-ansible/scripts
# python pw-token-gen.py --file /etc/openstack_deploy/user_secrets.yml
To regenerate existing passwords, add the ``--regen`` flag.

View File

@ -34,7 +34,7 @@ Each IdP trusted by an SP must have the following configuration:
With the above information, Ansible implements the equivalent of the
following OpenStack CLI commands:
.. code-block:: shell-session
# if the domain does not already exist
openstack domain create Default
@ -112,7 +112,7 @@ The ``mapping`` dictionary is a yaml representation very similar to the
keystone mapping property which Ansible uploads. The above mapping
produces the following in keystone.
.. code-block:: shell-session
root@aio1_keystone_container-783aa4c0:~# openstack mapping list
+------------------+

View File

@ -45,7 +45,7 @@ for the user does not yet exist.
To simplify the task of obtaining access to a SP cloud, OpenStack Ansible provides a script that wraps the above steps. The script is called ``federated-login.sh`` and is
used as follows::
# ./scripts/federated-login.sh -p project [-d domain] sp_id
Where ``project`` is the project in the SP cloud that the user wants to access,
``domain`` is the domain in which the project lives (the default domain is
@ -59,13 +59,13 @@ and the scoped token provided by the SP.
The endpoints and token can be used with the openstack command line client as
follows::
# openstack --os-token=<token> --os-url=<service-endpoint> [options]
or alternatively::
# export OS_TOKEN=<token>
# export OS_URL=<service-endpoint>
# openstack [options]
The user must select the appropriate endpoint for the desired
operation. For example, if the user wants to work with servers, the ``OS_URL``

View File

@ -33,8 +33,8 @@ usage.
glance_swift_store_user: GLANCE_SWIFT_TENANT:GLANCE_SWIFT_USER
glance_swift_store_key: SWIFT_PASSWORD_OR_KEY
#. Change the ``glance_swift_store_endpoint_type`` from the default
``internalURL`` setting to ``publicURL`` if needed.
.. code-block:: yaml
@ -46,7 +46,7 @@ usage.
glance_swift_store_container: STORE_NAME
Replace ``STORE_NAME`` with the container name in swift to be
used for storing images. If the container doesn't exist, it will be
automatically created.
@ -56,7 +56,7 @@ usage.
glance_swift_store_region: STORE_REGION
Replace ``STORE_REGION`` if needed.
#. (Optional) Set the paste deploy flavor:
@ -64,21 +64,20 @@ usage.
glance_flavor: GLANCE_FLAVOR
By default, the Image service uses caching and authenticates with the
Identity service. The default maximum size of the image cache is 10
GB. The default Image service container size is 12 GB. In some
configurations, the Image service might attempt to cache an image
which exceeds the available disk space. If necessary, you can disable
caching. For example, to use Identity without caching, replace
``GLANCE_FLAVOR`` with ``keystone``:
.. code-block:: yaml
glance_flavor: keystone
Or, to disable both authentication and caching, set
``GLANCE_FLAVOR`` to no value:
.. code-block:: yaml
@ -89,7 +88,7 @@ usage.
file. To override the default behavior, set ``glance_flavor`` to a
different value in ``/etc/openstack_deploy/user_variables.yml``.
The possible values for ``GLANCE_FLAVOR`` are:
- (Nothing)

View File

@ -20,13 +20,13 @@ configure target host networking.
#Storage (same range as br-storage on the target hosts)
storage: STORAGE_CIDR
Replace ``*_CIDR`` with the appropriate IP address range in CIDR
notation. For example, 203.0.113.0/24.
Use the same IP address ranges as the underlying physical network
interfaces or bridges configured in `the section called "Configuring
the network" <targethosts-network.html>`_. For example, if the
container network uses 203.0.113.0/24, the ``CONTAINER_MGMT_CIDR``
should also use 203.0.113.0/24.
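For reference, a minimal ``cidr_networks`` sketch using documentation ranges (the addresses and key names are illustrative; use the ranges that match your environment):
.. code-block:: yaml

   cidr_networks:
     container: 203.0.113.0/24   # management network (br-mgmt)
     tunnel: 198.51.100.0/24     # VXLAN tunnel network (br-vxlan)
     storage: 192.0.2.0/24       # storage network (br-storage)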
The default configuration includes the optional storage and service
@ -40,7 +40,7 @@ configure target host networking.
used_ips:
- EXISTING_IP_ADDRESSES
Replace ``EXISTING_IP_ADDRESSES`` with a list of existing IP
addresses in the ranges defined in the previous step. This list
should include all IP addresses manually configured on target hosts
in the `the section called "Configuring the
@ -79,18 +79,18 @@ configure target host networking.
# Tunnel network bridge device
tunnel_bridge: "TUNNEL_BRIDGE"
Replace ``INTERNAL_LB_VIP_ADDRESS`` with the internal IP address of
the load balancer. Infrastructure and OpenStack services use this IP
address for internal communication.
Replace ``EXTERNAL_LB_VIP_ADDRESS`` with the external, public, or
DMZ IP address of the load balancer. Users primarily use this IP
address for external API and web interfaces access.
Replace ``MGMT_BRIDGE`` with the container bridge device name,
typically ``br-mgmt``.
Replace ``TUNNEL_BRIDGE`` with the tunnel/overlay bridge device
name, typically ``br-vxlan``.
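As a sketch, the corresponding ``global_overrides`` entries might look like this (the VIP addresses are placeholders; the bridge names are the typical defaults mentioned above):
.. code-block:: yaml

   global_overrides:
     internal_lb_vip_address: 172.29.236.10   # INTERNAL_LB_VIP_ADDRESS
     external_lb_vip_address: 203.0.113.10    # EXTERNAL_LB_VIP_ADDRESS
     management_bridge: "br-mgmt"             # MGMT_BRIDGE
     tunnel_bridge: "br-vxlan"                # TUNNEL_BRIDGE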
#. Configure the management network in the ``provider_networks`` subsection:
@ -149,7 +149,7 @@ configure target host networking.
range: "TUNNEL_ID_RANGE"
net_name: "vxlan"
Replace ``TUNNEL_ID_RANGE`` with the tunnel ID range. For example,
1:1000.
#. Configure OpenStack Networking flat (untagged) and VLAN (tagged) networks
@ -177,7 +177,7 @@ configure target host networking.
range: VLAN_ID_RANGE
net_name: "vlan"
Replace ``VLAN_ID_RANGE`` with the VLAN ID range for each VLAN network.
For example, 1:1000. Supports more than one range of VLANs on a particular
network. For example, 1:1000,2001:3000. Create a similar stanza for each
additional network.
@ -210,7 +210,7 @@ configure target host networking.
``/etc/network/interfaces.d/eth2.cfg`` file in the appropriate
containers:
.. code-block:: shell-session
post-up ip route add 10.176.0.0/12 via 172.29.248.1 || true

View File

@ -70,9 +70,9 @@ certificates unless the user sets ``<servicename>_ssl_self_signed_regen`` to
To force a self-signed certificate to regenerate you can pass the variable to
``openstack-ansible`` on the command line:
.. code-block:: shell-session
# openstack-ansible -e "horizon_ssl_self_signed_regen=true" os-horizon-install.yml
To force a self-signed certificate to regenerate **with every playbook run**,
simply set the appropriate regeneration option to ``true``. For example, if
@ -120,9 +120,9 @@ variables:
Simply run the playbook to apply the certificates:
.. code-block:: shell-session
# openstack-ansible rabbitmq-install.yml
The playbook will deploy your user-provided SSL certificate, key, and CA
certificate to each RabbitMQ container.

View File

@ -28,11 +28,10 @@ existing deployment.
#. Run the Object Storage play:
.. code-block:: shell-session
# cd /opt/openstack-ansible/playbooks
# openstack-ansible os-swift-install.yml
--------------

View File

@ -9,9 +9,9 @@ file**
#. Copy the ``/etc/openstack_deploy/conf.d/swift.yml.example`` file to
``/etc/openstack_deploy/conf.d/swift.yml``:
.. code-block:: shell-session
# cp /etc/openstack_deploy/conf.d/swift.yml.example \
/etc/openstack_deploy/conf.d/swift.yml
#. Update the global override values:

View File

@ -26,53 +26,47 @@ through ``sdg``.
For example, create the file systems on the devices using the
**mkfs** command
.. code-block:: shell-session
# apt-get install xfsprogs
# mkfs.xfs -f -i size=1024 -L sdc /dev/sdc
# mkfs.xfs -f -i size=1024 -L sdd /dev/sdd
# mkfs.xfs -f -i size=1024 -L sde /dev/sde
# mkfs.xfs -f -i size=1024 -L sdf /dev/sdf
# mkfs.xfs -f -i size=1024 -L sdg /dev/sdg
#. Add the mount locations to the ``fstab`` file so that the storage
devices are remounted on boot. The following example mount options
are recommended when using XFS.
.. code-block:: shell-session
LABEL=sdc /srv/node/sdc xfs noatime,nodiratime,nobarrier,logbufs=8,noauto 0 0
LABEL=sdd /srv/node/sdd xfs noatime,nodiratime,nobarrier,logbufs=8,noauto 0 0
LABEL=sde /srv/node/sde xfs noatime,nodiratime,nobarrier,logbufs=8,noauto 0 0
LABEL=sdf /srv/node/sdf xfs noatime,nodiratime,nobarrier,logbufs=8,noauto 0 0
LABEL=sdg /srv/node/sdg xfs noatime,nodiratime,nobarrier,logbufs=8,noauto 0 0
#. Create the mount points for the devices using the **mkdir** command.
.. code-block:: shell-session
# mkdir -p /srv/node/sdc
# mkdir -p /srv/node/sdd
# mkdir -p /srv/node/sde
# mkdir -p /srv/node/sdf
# mkdir -p /srv/node/sdg
The mount point is referenced as the ``mount_point`` parameter in
the ``swift.yml`` file (``/etc/rpc_deploy/conf.d/swift.yml``).
.. code-block:: shell-session
# mount /srv/node/sdc
# mount /srv/node/sdd
# mount /srv/node/sde
# mount /srv/node/sdf
# mount /srv/node/sdg
To view an annotated example of the ``swift.yml`` file, see `Appendix A,
*OSA configuration files* <app-configfiles.html>`_.

View File

@ -60,10 +60,10 @@ This procedure requires the following:
#. Run the Image Service (glance) playbook:
.. code-block:: shell-session
# cd /opt/openstack-ansible/playbooks
# openstack-ansible os-glance-install.yml --tags "glance-config"
--------------

View File

@ -38,10 +38,9 @@ cluster:
policy groups) creates an empty ring for that storage policy.
- A non-default storage policy is used only if specified when creating
a container, using the ``X-Storage-Policy: <policy-name>`` header.
After the container is created, it uses the created storage policy.
Other containers continue using the default or another storage policy
specified when created.
For more information about storage policies, see: `Storage

View File

@ -20,10 +20,10 @@ The following procedure describes how to set up storage devices and
modify the Object Storage configuration files to enable Object Storage
usage.
#. `The section called "Configure and mount storage
devices" <configure-swift-devices.html>`_
#. `The section called "Configure an Object Storage
deployment" <configure-swift-config.html>`_
#. Optionally, allow all Identity users to use Object Storage by setting

View File

@ -17,6 +17,7 @@ Chapter 5. Deployment configuration
configure-horizon.rst
configure-rabbitmq.rst
configure-ceilometer.rst
configure-aodh.rst
configure-keystone.rst
configure-openstack.rst
configure-sslcertificates.rst

View File

@ -8,12 +8,11 @@ Install additional software packages and configure NTP.
#. Install additional software packages if not already installed during
operating system installation:
.. code-block:: shell-session
# apt-get install aptitude build-essential git ntp ntpdate \
openssh-server python-dev sudo
#. Configure NTP to synchronize with a suitable time source.
--------------

View File

@ -8,17 +8,16 @@ Install the source and dependencies for the deployment host.
#. Clone the OSA repository into the ``/opt/openstack-ansible``
directory:
.. code-block:: shell-session
# git clone -b TAG https://github.com/openstack/openstack-ansible.git /opt/openstack-ansible
Replace ``TAG`` with the current stable release tag.
#. Change to the ``/opt/openstack-ansible`` directory, and run the
Ansible bootstrap script:
.. code-block:: shell-session
# scripts/bootstrap-ansible.sh

View File

@ -11,14 +11,14 @@ Running the foundation playbook
#. Run the host setup playbook, which runs a series of sub-playbooks:
.. code-block:: shell-session
# openstack-ansible setup-hosts.yml
Confirm satisfactory completion with zero items unreachable or
failed:
.. code-block:: shell-session
PLAY RECAP ********************************************************************
...
@ -33,21 +33,21 @@ Running the foundation playbook
been downloaded during the bootstrap-ansible stage. If not, you should
rerun the following command before running the haproxy playbook:
.. code-block:: shell-session
# ../scripts/bootstrap-ansible.sh
or
.. code-block:: shell-session
# ansible-galaxy install -r ../ansible-role-requirements.yml
Run the playbook to deploy haproxy:
.. code-block:: shell-session
# openstack-ansible haproxy-install.yml
--------------

View File

@ -12,14 +12,14 @@ Running the infrastructure playbook
#. Run the infrastructure setup playbook, which runs a series of
sub-playbooks:
.. code-block:: shell-session
# openstack-ansible setup-infrastructure.yml
Confirm satisfactory completion with zero items unreachable or
failed:
.. code-block:: shell-session
PLAY RECAP ********************************************************************
...

View File

@ -9,22 +9,22 @@ Verify the database cluster and Kibana web interface operation.
#. Determine the Galera container name:
.. code-block:: shell-session
# lxc-ls | grep galera
infra1_galera_container-4ed0d84a
#. Access the Galera container:
.. code-block:: shell-session
# lxc-attach -n infra1_galera_container-4ed0d84a
#. Run the MariaDB client, show cluster status, and exit the client:
.. code-block:: shell-session
# mysql -u root -p
MariaDB> show status like 'wsrep_cluster%';
+--------------------------+--------------------------------------+
| Variable_name | Value |

View File

@ -8,9 +8,9 @@ Running the OpenStack playbook
#. Run the OpenStack setup playbook, which runs a series of
sub-playbooks:
.. code-block:: shell-session
# openstack-ansible setup-openstack.yml
The openstack-common.yml sub-playbook builds all OpenStack services
from source and takes up to 30 minutes to complete. As the playbook
@ -18,7 +18,7 @@ Running the OpenStack playbook
approach zero. If any operations take longer than 30 minutes to
complete, the playbook will terminate with an error.
.. code-block:: shell-session
changed: [target_host_glance_container-f2ebdc06]
changed: [target_host_heat_engine_container-36022446]
@ -53,7 +53,7 @@ Running the OpenStack playbook
approach zero. If any operations take longer than 30 minutes to
complete, the playbook will terminate with an error.
.. code-block:: shell-session
ok: [target_host_nova_conductor_container-2b495dc4]
ok: [target_host_nova_api_metadata_container-600fe8b3]
@ -70,7 +70,7 @@ Running the OpenStack playbook
Confirm satisfactory completion with zero items unreachable or
failed:
.. code-block:: shell-session
PLAY RECAP **********************************************************************
...

View File

@ -5,7 +5,6 @@ Verifying OpenStack operation
Verify basic operation of the OpenStack API and dashboard.

**Procedure 8.1. Verifying the API**
@ -14,28 +13,28 @@ configuration and testing.
#. Determine the utility container name:
.. code-block:: shell-session
# lxc-ls | grep utility
infra1_utility_container-161a4084
#. Access the utility container:
.. code-block:: shell-session
# lxc-attach -n infra1_utility_container-161a4084
#. Source the ``admin`` tenant credentials:
.. code-block:: shell-session
# source openrc
#. Run an OpenStack command that uses one or more APIs. For example:
.. code-block:: shell-session
# keystone user-list
+----------------------------------+----------+---------+-------+
| id | name | enabled | email |
+----------------------------------+----------+---------+-------+

View File

@ -17,10 +17,10 @@ cluster.
#. Run the following commands to add the host. Replace
``NEW_HOST_NAME`` with the name of the new host.
.. code-block:: shell-session
# cd /opt/openstack-ansible/playbooks
# openstack-ansible setup-everything.yml --limit NEW_HOST_NAME
--------------

View File

@ -11,9 +11,9 @@ entire environment.
#. Run the following Ansible command to show the failed nodes:
.. code-block:: shell-session
# openstack-ansible galera-install --tags galera-bootstrap
Upon completion of this command the cluster should be back online and in

View File

@ -8,9 +8,9 @@ gracefully), then the integrity of the database can no longer be
guaranteed and should be restored from backup. Run the following command
to determine if all nodes in the cluster have failed:
.. code-block:: shell-session
# ansible galera_container -m shell -a "cat /var/lib/mysql/grastate.dat"
node3_galera_container-3ea2cbd3 | success | rc=0 >>
# GALERA saved state
version: 2.1

View File

@ -16,19 +16,18 @@ containers.
MariaDB data stored outside of the container. In this example, node 3
failed.
.. code-block:: shell-session
# lxc-stop -n node3_galera_container-3ea2cbd3
# lxc-destroy -n node3_galera_container-3ea2cbd3
# rm -rf /openstack/node3_galera_container-3ea2cbd3/*
#. Run the host setup playbook to rebuild the container specifically on
node 3:
.. code-block:: shell-session
# openstack-ansible setup-hosts.yml -l node3 \
-l node3_galera_container-3ea2cbd3
@ -37,9 +36,9 @@ containers.
#. Run the infrastructure playbook to configure the container
specifically on node 3:
.. code-block:: shell-session
# openstack-ansible infrastructure-setup.yml \
-l node3_galera_container-3ea2cbd3
@ -47,9 +46,9 @@ containers.
state because the environment contains more than one active database
with potentially different data.
.. code-block:: shell-session
# ansible galera_container -m shell -a "mysql \
-h localhost -e 'show status like \"%wsrep_cluster_%\";'"
node3_galera_container-3ea2cbd3 | success | rc=0 >>
Variable_name Value
@ -72,13 +71,12 @@ containers.
wsrep_cluster_state_uuid 338b06b0-2948-11e4-9d06-bef42f6c52f1
wsrep_cluster_status Primary
#. Restart MariaDB in the new container and verify that it rejoins the
cluster.
.. code-block:: shell-session
# ansible galera_container -m shell -a "mysql \
-h localhost -e 'show status like \"%wsrep_cluster_%\";'"
node2_galera_container-49a47d25 | success | rc=0 >>
Variable_name Value

View File

@ -9,9 +9,9 @@ recover cannot join the cluster because it no longer exists.
#. Run the following Ansible command to show the failed nodes:
.. code-block:: shell-session
# ansible galera_container -m shell -a "mysql \
-h localhost -e 'show status like \"%wsrep_cluster_%\";'"
node2_galera_container-49a47d25 | FAILED | rc=1 >>
ERROR 2002 (HY000): Can't connect to local MySQL server
@ -28,7 +28,6 @@ recover cannot join the cluster because it no longer exists.
wsrep_cluster_state_uuid 338b06b0-2948-11e4-9d06-bef42f6c52f1
wsrep_cluster_status non-Primary
In this example, nodes 2 and 3 have failed. The remaining operational
server indicates ``non-Primary`` because it cannot achieve quorum.
@ -36,9 +35,9 @@ recover cannot join the cluster because it no longer exists.
`rebootstrap <http://galeracluster.com/documentation-webpages/quorumreset.html#id1>`_
the operational node into the cluster.
.. code-block:: shell-session
# mysql -e "SET GLOBAL wsrep_provider_options='pc.bootstrap=yes';"
node4_galera_container-76275635 | success | rc=0 >>
Variable_name Value
wsrep_cluster_conf_id 15
@ -54,16 +53,15 @@ recover cannot join the cluster because it no longer exists.
ERROR 2002 (HY000): Can't connect to local MySQL server
through socket '/var/run/mysqld/mysqld.sock' (111)
The remaining operational node becomes the primary node and begins
processing SQL requests.
#. Restart MariaDB on the failed nodes and verify that they rejoin the
cluster.
.. code-block:: shell-session
# ansible galera_container -m shell -a "mysql \
-h localhost -e 'show status like \"%wsrep_cluster_%\";'"
node3_galera_container-3ea2cbd3 | success | rc=0 >>
Variable_name Value
@ -86,7 +84,6 @@ recover cannot join the cluster because it no longer exists.
wsrep_cluster_state_uuid 338b06b0-2948-11e4-9d06-bef42f6c52f1
wsrep_cluster_status Primary
#. If MariaDB fails to start on any of the failed nodes, run the
**mysqld** command and perform further analysis on the output. As a
last resort, rebuild the container for the node.

View File

@ -8,9 +8,9 @@ process SQL requests.
#. Run the following Ansible command to determine the failed node:
.. code-block:: shell-session
# ansible galera_container -m shell -a "mysql -h localhost \
-e 'show status like \"%wsrep_cluster_%\";'"
node3_galera_container-3ea2cbd3 | FAILED | rc=1 >>
ERROR 2002 (HY000): Can't connect to local MySQL server through
@@ -5,9 +5,9 @@ Removing nodes
In the following example, all but one node was shut down gracefully:
.. code-block:: shell-session
# ansible galera_container -m shell -a "mysql -h localhost \
-e 'show status like \"%wsrep_cluster_%\";'"
node3_galera_container-3ea2cbd3 | FAILED | rc=1 >>
ERROR 2002 (HY000): Can't connect to local MySQL server
@@ -11,9 +11,9 @@ one of the nodes.
following command to check the ``seqno`` value in the
``grastate.dat`` file on all of the nodes:
.. code-block:: shell-session
# ansible galera_container -m shell -a "cat /var/lib/mysql/grastate.dat"
node2_galera_container-49a47d25 | success | rc=0 >>
# GALERA saved state version: 2.1
uuid: 338b06b0-2948-11e4-9d06-bef42f6c52f1
@@ -32,22 +32,20 @@ one of the nodes.
seqno: 31
cert_index:
In this example, all nodes in the cluster contain the same positive
``seqno`` values because they were synchronized just prior to
graceful shutdown. If all ``seqno`` values are equal, any node can
start the new cluster.
.. code-block:: shell-session
# /etc/init.d/mysql start --wsrep-new-cluster
This command results in a cluster containing a single node. The
``wsrep_cluster_size`` value shows the number of nodes in the
cluster.
.. code-block:: shell-session
node2_galera_container-49a47d25 | FAILED | rc=1 >>
ERROR 2002 (HY000): Can't connect to local MySQL server
@@ -64,11 +62,10 @@ one of the nodes.
wsrep_cluster_state_uuid 338b06b0-2948-11e4-9d06-bef42f6c52f1
wsrep_cluster_status Primary
#. Restart MariaDB on the other nodes and verify that they rejoin the
cluster.
.. code-block:: shell-session
node2_galera_container-49a47d25 | success | rc=0 >>
Variable_name Value
@@ -91,7 +88,6 @@ one of the nodes.
wsrep_cluster_state_uuid 338b06b0-2948-11e4-9d06-bef42f6c52f1
wsrep_cluster_status Primary
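The restart command itself is not shown in the output above; a minimal sketch, assuming the ``node2`` and ``node3`` containers from the earlier examples are the nodes being rejoined:

.. code-block:: shell-session

# ansible galera_container -m shell -a "/etc/init.d/mysql start" \
  --limit node2_galera_container-49a47d25,node3_galera_container-3ea2cbd3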
--------------
.. include:: navigation.txt
@@ -18,7 +18,7 @@ Predictable interface naming
On the host, all virtual ethernet devices are named based on their container
as well as the name of the interface inside the container:
.. code-block:: shell-session
${CONTAINER_UNIQUE_ID}_${NETWORK_DEVICE_NAME}
@@ -29,9 +29,9 @@ network interfaces: `d13b7132_eth0` and `d13b7132_eth1`.
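For instance, assuming the utility container with unique ID ``d13b7132`` referenced here, the matching host-side interfaces could be listed with:

.. code-block:: shell-session

# ip link | grep d13b7132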
Another option would be to use LXC's tools to retrieve information about the
utility container:
.. code-block:: shell-session
# lxc-info -n aio1_utility_container-d13b7132
Name: aio1_utility_container-d13b7132
State: RUNNING
@@ -30,32 +30,32 @@ Useful commands:
- List containers and summary information such as operational state and
network configuration:
.. code-block:: shell-session
# lxc-ls --fancy
- Show container details including operational state, resource
utilization, and ``veth`` pairs:
.. code-block:: shell-session
# lxc-info --name container_name
- Start a container:
.. code-block:: shell-session
# lxc-start --name container_name
- Attach to a container:
.. code-block:: shell-session
# lxc-attach --name container_name
- Stop a container:
.. code-block:: shell-session
# lxc-stop --name container_name
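As a combined usage example, restarting the utility container from the earlier interface-naming example (the container name is taken from that example and will differ per environment):

.. code-block:: shell-session

# lxc-stop --name aio1_utility_container-d13b7132
# lxc-start --name aio1_utility_container-d13b7132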
@@ -12,16 +12,15 @@ configure NTP.
#. Install additional software packages if not already installed during
operating system installation:
.. code-block:: shell-session
# apt-get install bridge-utils debootstrap ifenslave ifenslave-2.6 \
lsof lvm2 ntp ntpdate openssh-server sudo tcpdump vlan
#. Add the appropriate kernel modules to the ``/etc/modules`` file to
enable VLAN and bond interfaces:
.. code-block:: shell-session
# echo 'bonding' >> /etc/modules
# echo '8021q' >> /etc/modules
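Entries in ``/etc/modules`` only take effect at the next boot; if the modules are also needed immediately (an optional convenience step, not part of the original procedure), they can be loaded by hand:

.. code-block:: shell-session

# modprobe bonding
# modprobe 8021q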
@@ -8,12 +8,11 @@ Configuring LVM
metadata size of 2048 must be specified during physical volume
creation. For example:
.. code-block:: shell-session
# pvcreate --metadatasize 2048 physical_volume_device_path
# vgcreate cinder-volumes physical_volume_device_path
#. Optionally, create an LVM volume group named *lxc* for container file
systems. If the lxc volume group does not exist, containers will be
automatically installed into the file system under */var/lib/lxc* by
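A minimal sketch of that optional step, reusing the placeholder style from the example above (the device path is an assumption and must be a separate device from the one used for ``cinder-volumes``):

.. code-block:: shell-session

# pvcreate physical_volume_device_path    # a second, dedicated device
# vgcreate lxc physical_volume_device_path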
@@ -61,8 +61,8 @@ described in the following procedure.
bond-downdelay 250
bond-updelay 250
If not already complete, replace ``HOST_IP_ADDRESS``,
``HOST_NETMASK``, ``HOST_GATEWAY``, and ``HOST_DNS_SERVERS``
with the appropriate configuration for the host management network.
#. Logical (VLAN) interfaces:
@@ -81,7 +81,7 @@ described in the following procedure.
iface bond0.STORAGE_VLAN_ID inet manual
vlan-raw-device bond0
Replace ``*_VLAN_ID`` with the appropriate configuration for the
environment.
#. Bridge devices:
@@ -131,8 +131,8 @@ described in the following procedure.
address STORAGE_BRIDGE_IP_ADDRESS
netmask STORAGE_BRIDGE_NETMASK
Replace ``*_VLAN_ID``, ``*_BRIDGE_IP_ADDRESS``,
``*_BRIDGE_NETMASK``, and ``*_BRIDGE_DNS_SERVERS`` with the
appropriate configuration for the environment.
--------------
@@ -7,7 +7,7 @@ Ansible uses Secure Shell (SSH) for connectivity between the deployment
and target hosts.
#. Copy the contents of the public key file on the deployment host to
the ``/root/.ssh/authorized_keys`` file on each target host.
#. Test public key authentication from the deployment host to each
target host. SSH should provide a shell without asking for a