Fix 'D001 Line too long' documentation lint failures

Change-Id: I4522fe318541dac7f4ff4e45d72d4cd8869420ba
Author: Jesse Pretorius, 2016-07-13 11:40:48 +01:00
Committer: Jesse Pretorius (odyssey4me)
Parent: fbab4f5508
Commit: 24e63abeb2
71 changed files with 383 additions and 337 deletions


@@ -107,11 +107,11 @@ Deploying the Role

#. If your service is installed from source or relies on python packages which
   need to be installed from source, specify a repository for the source
   code of each requirement by adding a file to your deploy host under
   ``playbooks/defaults/repo_packages`` in the OpenStack-Ansible source
   repository and following the pattern of files currently in that directory.
   You could also simply add an entry to an existing file there. Be sure to
   run the ``repo-build.yml`` play later so that wheels for your packages will
   be included in the repository infrastructure.
#. Make any required adjustments to the load balancer configuration
   (e.g. modify ``playbooks/vars/configs/haproxy_config.yml`` in the
   OpenStack-Ansible source repository on your deploy host) so that your


@@ -7,10 +7,11 @@ The Telemetry (ceilometer) alarming services perform the following functions:

- Creates an API endpoint for controlling alarms.
- Allows you to set alarms based on threshold evaluation for a collection of
  samples.

Aodh on OpenStack-Ansible requires a configured MongoDB backend prior to
running the Aodh playbooks. To specify the connection data, edit the
``user_variables.yml`` file (see section `Configuring the user data`_
below).


@@ -7,9 +7,11 @@ The Telemetry module (ceilometer) performs the following functions:

- Efficiently polls metering data related to OpenStack services.
- Collects event and metering data by monitoring notifications sent from
  services.
- Publishes collected data to various targets including data stores and
  message queues.

.. note::
@@ -32,8 +34,8 @@ Setting up a MongoDB database for ceilometer

      # apt-get install mongodb-server mongodb-clients python-pymongo

2. Edit the ``/etc/mongodb.conf`` file and change the ``bind_ip`` to the
   management interface:

   .. code-block:: ini
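The ini block itself is elided from this hunk. The edit it describes is a one-line change, roughly as follows (the address is a placeholder for your management interface):

```ini
# /etc/mongodb.conf -- listen on the management interface rather than localhost
bind_ip = 10.0.0.11
```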


@@ -57,11 +57,11 @@ Ceph configuration:

   auth_service_required = cephx
   auth_client_required = cephx

The use of the ``ceph_conf_file`` variable is optional. By default,
OpenStack-Ansible obtains a copy of ``ceph.conf`` from one of your Ceph
monitors. This transfer of ``ceph.conf`` requires the OpenStack-Ansible
deployment host public key to be deployed to all of the Ceph monitors. More
details are available here: `Deploying SSH Keys`_.

The following minimal example configuration sets nova and glance
to use ceph pools: ``ephemeral-vms`` and ``images`` respectively.
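The configuration example referred to here falls outside the hunk. As a sketch, assuming the rbd pool and store variable names from the OpenStack-Ansible role defaults, such a ``user_variables.yml`` fragment might read:

```yaml
# Assumed variable names -- verify against your release's defaults/main.yml.
nova_libvirt_images_rbd_pool: ephemeral-vms
glance_default_store: rbd
glance_rbd_store_pool: images
```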


@@ -3,8 +3,8 @@

Configuring the Block (cinder) storage service (optional)
=========================================================

By default, the Block (cinder) storage service installs on the host itself
using the LVM backend.

.. note::

@@ -181,7 +181,8 @@ By default, no horizon configuration is set.

Configuring cinder to use LVM
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

#. List the ``container_vars`` that contain the storage options for the target
   host.
.. note::
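A hedged sketch of what such ``container_vars`` look like in ``openstack_user_config.yml`` (host name, address, and backend name are placeholders, not values from this guide):

```yaml
storage_hosts:
  lvm-storage1:
    ip: 172.29.236.16
    container_vars:
      cinder_backends:
        lvm:
          volume_backend_name: LVM_iSCSI
          volume_driver: cinder.volume.drivers.lvm.LVMVolumeDriver
          volume_group: cinder-volumes
```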


@@ -1,9 +1,9 @@

`Home <index.html>`__ OpenStack-Ansible Installation Guide

Configuring ADFS 3.0 as an identity provider
============================================

To install Active Directory Federation Services (ADFS):

* `Prerequisites for ADFS from Microsoft Technet <https://technet.microsoft.com/library/bf7f9cf4-6170-40e8-83dd-e636cb4f9ecb>`_
* `ADFS installation procedure from Microsoft Technet <https://technet.microsoft.com/en-us/library/dn303423>`_


@@ -47,13 +47,14 @@ The following list is a reference of allowed settings:

  this IdP.
* ``regen_cert`` by default is set to ``False``. When set to ``True``, the
  next Ansible run replaces the existing signing certificate with a new one.
  This setting is added as a convenience mechanism to renew a certificate when
  it is close to its expiration date.
* ``idp_entity_id`` is the entity ID. The service providers
  use this as a unique identifier for each IdP.
  ``<keystone-public-endpoint>/OS-FEDERATION/saml2/idp`` is the value we
  recommend for this setting.
* ``idp_sso_endpoint`` is the single sign-on endpoint for this IdP.
  ``<keystone-public-endpoint>/OS-FEDERATION/saml2/sso`` is the value
@@ -72,7 +73,8 @@ The following list is a reference of allowed settings:

* ``organization_name``, ``organization_display_name``, ``organization_url``,
  ``contact_company``, ``contact_name``, ``contact_surname``,
  ``contact_email``, ``contact_telephone`` and ``contact_type`` are
  settings that describe the identity provider. These settings are all
  optional.

--------------
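Pulling the settings above together, a ``keystone_idp`` stanza might be sketched as follows (the endpoint URLs and file paths here are illustrative assumptions, not values taken from this guide):

```yaml
keystone_idp:
  certfile: "/etc/keystone/ssl/idp_signing_cert.pem"
  keyfile: "/etc/keystone/ssl/idp_signing_key.pem"
  regen_cert: false
  idp_entity_id: "https://keystone.example.com:5000/v3/OS-FEDERATION/saml2/idp"
  idp_sso_endpoint: "https://keystone.example.com:5000/v3/OS-FEDERATION/saml2/sso"
  idp_metadata_path: /etc/keystone/saml2_idp_metadata.xml
```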


@@ -104,8 +104,9 @@ The following settings must be set to configure a service provider (SP):

#. ``metadata_reload`` is the number of seconds between metadata
   refresh polls.
#. ``federated_identities`` is a mapping list of domain, project, group, and
   users. See
   `Configure Identity Service (keystone) Domain-Project-Group-Role mappings`_
   for more information.
#. ``protocols`` is a list of protocols supported for the IdP and the set

@@ -113,7 +114,11 @@ The following settings must be set to configure a service provider (SP):

   with the name ``saml2``.
#. ``mapping`` is the local to remote mapping configuration for federated
   users. See
   `Configure Identity Service (keystone) Domain-Project-Group-Role mappings`_
   for more information.

.. _Configure Identity Service (keystone) Domain-Project-Group-Role mappings: configure-federation-mapping.html

--------------


@@ -48,8 +48,8 @@ is "cloud2".

Keystone service provider (SP) configuration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The configuration for keystone SP needs to define the remote-to-local user
mappings. The following is the complete configuration:

.. code::
@@ -142,9 +142,9 @@ user based on the attributes exposed by the IdP in the SAML2 assertion. The

use case for this scenario calls for mapping users in "Group A" and "Group B",
but the group or groups a user belongs to are not exported in the SAML2
assertion. To make the example work, the groups A and B in the use case are
projects. Export projects A and B in the assertion under the
``openstack_project`` attribute. The two rules above select the corresponding
project using the ``any_one_of`` selector.

The ``local`` part of the mapping rule specifies how keystone represents
the remote user in the local SP cloud. Configuring the two federated identities
@@ -157,10 +157,10 @@ role.

Keystone creates an ephemeral user in the specified group as
you cannot specify user names.

The final setting of the configuration defines the SAML2 ``attributes`` that
the IdP exports. For a keystone IdP, these are the five attributes shown
above. Configure the attributes above into the Shibboleth service. This
ensures they are available to use in the mappings.
Reviewing or modifying the configuration with the OpenStack client
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~


@@ -48,8 +48,8 @@ The following steps above involve manually sending API requests.

The infrastructure for the command line utilities that performs these steps
for the user does not exist.

To obtain access to an SP cloud, OpenStack-Ansible provides a script that wraps
the above steps. The script is called ``federated-login.sh`` and is
used as follows:

.. code::


@@ -3,13 +3,14 @@

Configuring the Dashboard (horizon) (optional)
==============================================

Customize your horizon deployment in
``/etc/openstack_deploy/user_variables.yml``.

Securing horizon communication with SSL certificates
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The OpenStack-Ansible project provides the ability to secure Dashboard
(horizon) communications with self-signed or user-provided SSL certificates.
Refer to `Securing services with SSL certificates`_ for available configuration
options.
@@ -20,8 +21,8 @@ Configuring a horizon customization module

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

OpenStack-Ansible supports deployment of a horizon `customization module`_.
After building your customization module, configure the
``horizon_customization_module`` variable with a path to your module.

.. code-block:: yaml
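The yaml block that follows is elided from this hunk; the setting is a single variable, roughly (the path is a placeholder for your own module):

```yaml
horizon_customization_module: /local/path/to/customization_module.py
```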


@@ -5,14 +5,14 @@ Configuring the Bare Metal (ironic) service (optional)

.. note::

   This feature is experimental at this time and it has not been fully
   production tested yet. These implementation instructions assume that
   ironic is being deployed as the sole hypervisor for the region.

Ironic is an OpenStack project which provisions bare metal (as opposed to
virtual) machines by leveraging common technologies such as PXE boot and IPMI
to cover a wide range of hardware, while supporting pluggable drivers to allow
vendor-specific functionality to be added.

OpenStack's ironic project makes physical servers as easy to provision as
virtual machines in a cloud.

@@ -140,8 +140,8 @@ Creating an ironic flavor

After successfully deploying the ironic node on subsequent boots, the instance
boots from your local disk as first preference. This speeds up the deployed
node's boot time. Alternatively, if this is not set, the ironic node PXE boots
first and allows for operator-initiated image updates and other operations.

.. note::

@@ -151,7 +151,8 @@ allows for operator-initiated image updates and other operations.

Enroll ironic nodes
-------------------

#. From the utility container, enroll a new baremetal node by executing the
   following:

   .. code-block:: bash
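The enrollment command is elided from this hunk. As a sketch only (the driver name, IPMI credentials, and node properties are assumptions for illustration, not values from this guide):

```console
# ironic node-create -d agent_ipmitool \
    -i ipmi_address=10.1.2.3 \
    -i ipmi_username=admin \
    -i ipmi_password=secrete \
    -p cpus=8 -p memory_mb=16384 -p local_gb=100
```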


@@ -3,7 +3,8 @@

Configuring the Identity service (keystone) (optional)
======================================================

Customize your keystone deployment in
``/etc/openstack_deploy/user_variables.yml``.

Securing keystone communication with SSL certificates


@@ -155,8 +155,8 @@ The following procedure describes how to modify the

     - { name: "ah4", pattern: "CONFIG_INET_AH=", group: "network_hosts" }
     - { name: "ipcomp", pattern: "CONFIG_INET_IPCOMP=", group: "network_hosts" }

#. Execute the openstack hosts setup in order to load the kernel modules at
   boot and runtime in the network hosts:

   .. code-block:: shell-session
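The command itself is elided from the hunk; it presumably runs the hosts-setup playbook limited to the network hosts, along these lines (the playbook name is assumed from OpenStack-Ansible's playbook naming convention):

```console
# cd /opt/openstack-ansible/playbooks
# openstack-ansible openstack-hosts-setup.yml --limit network_hosts
```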


@@ -52,8 +52,8 @@ You can disable discard by setting ``nova_libvirt_hw_disk_discard`` to

string to disable ``network=writeback``.

The following minimal example configuration sets nova to use the
``ephemeral-vms`` Ceph pool. The example uses cephx authentication, and
requires an existing ``cinder`` account for the ``ephemeral-vms`` pool:

.. code-block:: console

@@ -70,7 +70,8 @@ If you have a different Ceph username for the pool, use it as:

   cinder_ceph_client: <ceph-username>

* The `Ceph documentation for OpenStack`_ has additional information about
  these settings.
* `OpenStack-Ansible and Ceph Working Example`_


@@ -7,7 +7,8 @@ RabbitMQ provides the messaging broker for various OpenStack services. The

OpenStack-Ansible project configures a plaintext listener on port 5672 and
an SSL/TLS encrypted listener on port 5671.

Customize your RabbitMQ deployment in
``/etc/openstack_deploy/user_variables.yml``.

Add a TLS encrypted listener to RabbitMQ
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
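A sketch of the user-provided certificate variables such a listener typically involves (the variable names are assumed from the rabbitmq_server role defaults; the paths are placeholders):

```yaml
rabbitmq_user_ssl_cert: /tmp/example.com.crt
rabbitmq_user_ssl_key: /tmp/example.com.key
rabbitmq_user_ssl_ca_cert: /tmp/ExampleCA.crt
```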


@@ -84,9 +84,10 @@ General Guidelines for Submitting Code

* New features, breaking changes and other patches of note must include a
  release note generated using `the reno tool`_. Please see the
  `Documentation and Release Note Guidelines`_ for more information.
* All patches including code, documentation and release notes should be built
  and tested locally with the appropriate test suite before submitting for
  review. See `Development and Testing`_ for more information.
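Generating such a release note stub is a one-liner with reno, run from the repository root (the slug is your choice):

```console
$ reno new fix-d001-line-length
```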
.. _Git Commit Good Practice: https://wiki.openstack.org/wiki/GitCommitMessages
.. _workflow documented here: http://docs.openstack.org/infra/manual/developers.html#development-workflow
.. _advanced gerrit usage: http://www.mediawiki.org/wiki/Gerrit/Advanced_usage
@@ -160,8 +161,8 @@ Backporting

Documentation and Release Note Guidelines
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Documentation is a critical part of ensuring that the deployers of
OpenStack-Ansible are appropriately informed about:

* How to use the project's tooling effectively to deploy OpenStack.
* How to implement the right configuration to meet the needs of their specific


@@ -101,17 +101,18 @@ See also `Understanding Host Groups`_ in Appendix H.

user\_*.yml files
-----------------

Files in ``/etc/openstack_deploy`` beginning with ``user_`` will be
automatically sourced in any ``openstack-ansible`` command. Alternatively,
the files can be sourced with the ``-e`` parameter of the ``ansible-playbook``
command.
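For example, to source one of these files explicitly rather than relying on the automatic behaviour (the file and playbook names are placeholders):

```console
$ ansible-playbook -e @/etc/openstack_deploy/user_custom.yml setup-hosts.yml
```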
``user_variables.yml`` and ``user_secrets.yml`` are used directly by
OpenStack-Ansible. Adding custom variables used by your own roles and
playbooks to these files is not recommended. Doing so will complicate your
upgrade path by making comparison of your existing files with later versions
of these files more arduous. Rather, recommended practice is to place your own
variables in files named following the ``user_*.yml`` pattern so they will be
sourced alongside those used exclusively by OpenStack-Ansible.

Ordering and Precedence
+++++++++++++++++++++++
@@ -136,9 +137,9 @@ All of the services that use YAML, JSON, or INI for configuration can receive

overrides through the use of an Ansible action plugin named ``config_template``.
The configuration template engine allows a deployer to use a simple dictionary
to modify or add items into configuration files at run time that may not have a
preset template option. All OpenStack-Ansible roles allow for this
functionality where applicable. Files available to receive overrides can be
seen in the ``defaults/main.yml`` file as standard empty dictionaries (hashes).

Practical guidance for using this feature is available in the `Install Guide`_.
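As an illustration of the dictionary shape such an override takes, a ``user_*.yml`` fragment targeting ``nova.conf`` sections might look like this (the variable name follows the ``<service>_<file>_overrides`` convention used by the roles; the option values are illustrative assumptions):

```yaml
nova_nova_conf_overrides:
  DEFAULT:
    remove_unused_original_minimum_age_seconds: 43200
  libvirt:
    cpu_mode: host-model
```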
@@ -157,12 +158,12 @@ git based or PyPi installable package. When the package is built the repo-build

role will create the sources as Python wheels to extend the base system and
requirements.

While the packages pre-built in the repository-infrastructure are
comprehensive, it may be necessary to change the source locations and versions
of packages to suit different deployment needs. Adding additional repositories
as overrides is as simple as listing entries within the variable file of your
choice. Any ``user_.*.yml`` file within the ``/etc/openstack_deploy``
directory will work to facilitate the addition of new packages.

.. code-block:: yaml
@@ -171,8 +172,8 @@ to facilitate the addition of new packages.

   swift_git_install_branch: master

Additional lists of python packages can also be overridden using a
``user_.*.yml`` variable file.

.. code-block:: yaml

@@ -191,8 +192,8 @@ deploy your overridden source code.

Module documentation
++++++++++++++++++++

These are the options available as found within the virtual module
documentation section.

.. code-block:: yaml


@@ -16,8 +16,8 @@ cluster.

   If necessary, also modify the ``used_ips`` stanza.

#. If the cluster is utilizing Telemetry/Metering (Ceilometer),
   edit the ``/etc/openstack_deploy/conf.d/ceilometer.yml`` file and add the
   host to the ``metering-compute_hosts`` stanza.
#. Run the following commands to add the host. Replace
   ``NEW_HOST_NAME`` with the name of the new host.


@@ -148,9 +148,9 @@ recover cannot join the cluster because it no longer exists.

Complete failure
~~~~~~~~~~~~~~~~

Restore from backup if all of the nodes in a Galera cluster fail (do not
shutdown gracefully). Run the following command to determine if all nodes in
the cluster have failed:

.. code-block:: shell-session
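The elided command presumably inspects each node's Galera state file, along these lines (a sketch; the inventory group name is an assumption):

```console
# ansible galera_container -m shell -a "cat /var/lib/mysql/grastate.dat"
```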


@@ -22,7 +22,8 @@ deployment:

  directories named after the container or physical host.
* Each physical host has the logs from its service containers mounted at
  ``/openstack/log/``.
* Each service container has its own logs stored at
  ``/var/log/<service_name>``.

--------------


@@ -106,8 +106,8 @@ documentation on `fact caching`_ for more details.

Forcing regeneration of cached facts
------------------------------------

Cached facts may be incorrect if the host receives a kernel upgrade or new
network interfaces. Newly created bridges also disrupt cached facts.
This can lead to unexpected errors while running playbooks, and
require that the cached facts be regenerated.
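The regeneration step itself is elided here; with the default fact-cache location it amounts to clearing the cached fact files (the path is assumed from the default OpenStack-Ansible fact-cache configuration):

```console
# rm /etc/openstack_deploy/ansible_facts/*
```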


@@ -118,11 +118,11 @@ options:

:git-clone: Clone all of the role dependencies using native git

Notes:

When doing role development it may be useful to set
``ANSIBLE_ROLE_FETCH_MODE`` to *git-clone*. This will provide you the
ability to develop roles within the environment by modifying, patching, or
committing changes using an intact git tree while the *galaxy* option scrubs
the ``.git`` directory when it resolves a dependency.

.. code-block:: bash
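The elided block presumably just exports the variable before the bootstrap is run; as a sketch:

```shell
# Select git-clone mode so role checkouts keep an intact .git tree.
export ANSIBLE_ROLE_FETCH_MODE=git-clone
echo "${ANSIBLE_ROLE_FETCH_MODE}"
```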
@ -145,8 +145,8 @@ for the OpenStack Deployment. This preparation is completed by executing:
# scripts/bootstrap-aio.sh # scripts/bootstrap-aio.sh
If you wish to add any additional configuration entries for the OpenStack configuration If you wish to add any additional configuration entries for the OpenStack
then this can be done now by editing configuration then this can be done now by editing
``/etc/openstack_deploy/user_variables.yml``. Please see the `Install Guide`_ ``/etc/openstack_deploy/user_variables.yml``. Please see the `Install Guide`_
for more details. for more details.
@@ -173,9 +173,9 @@ general estimates:

* Virtual machines with SSD storage: ~ 45-60 minutes
* Systems with traditional hard disks: ~ 90-120 minutes

Once the playbooks have fully executed, it is possible to experiment with
various settings changes in ``/etc/openstack_deploy/user_variables.yml`` and
only run individual playbooks. For example, to run the playbook for the
Keystone service, execute:

.. code-block:: bash
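The command itself is elided in this excerpt; a hedged sketch, assuming the conventional OpenStack-Ansible playbook location and name:

```bash
# Run only the Identity (keystone) playbook (the path and playbook name
# are assumptions based on OpenStack-Ansible conventions)
cd /opt/openstack-ansible/playbooks
openstack-ansible os-keystone-install.yml
```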
@@ -247,10 +247,11 @@ AIOs are best run inside of some form of virtual machine or cloud guest.

Quick AIO build on Rackspace Cloud
----------------------------------

You can automate the AIO build process with a virtual machine from the
Rackspace Cloud.

First, we will need a cloud-config file that will allow us to run the build as
soon as the instance starts. Save this file as ``user_data.yml``:

.. code-block:: yaml
@@ -51,7 +51,8 @@ On the deployment host, copy the Nuage user variables file from

# cp /opt/nuage-openstack-ansible/etc/user_nuage_vars.yml /etc/openstack_deploy/

Also modify the following parameters in this file as per your Nuage VCS
environment:

#. Replace *VSD Enterprise Name* parameter with user desired name of VSD
   Enterprise:
@@ -56,8 +56,8 @@ parameters.

PLUMgrid configurations
~~~~~~~~~~~~~~~~~~~~~~~

On the deployment host, create a PLUMgrid user variables file using the sample
in ``/opt/plumgrid-ansible/etc/user_pg_vars.yml.example`` and copy it to
``/etc/openstack_deploy/user_pg_vars.yml``. You must configure the
following parameters.

@@ -119,8 +119,8 @@ to as ``gateway_devs`` in the configuration files.

Replace ``*_IP_ADDRESS`` with the IP address of the ``br-mgmt`` container management
bridge on each Gateway host.

#. Add a ``gateway_hosts`` section to the end of the PLUMgrid
   ``user_pg_vars.yml`` file:

.. note::
@@ -11,10 +11,10 @@ swift backend or some form of shared storage.

Configuring default and additional stores
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

OpenStack-Ansible provides two configurations for controlling where glance
stores files: the default store and additional stores. glance stores images in
file-based storage by default. Two additional stores, ``http`` and ``cinder``
(Block Storage), are also enabled by default.

You can choose alternative default stores and alternative additional stores.
For example, a deployer that uses Ceph may configure the following Ansible

@@ -165,8 +165,8 @@ Special considerations

If the swift password or key contains a dollar sign (``$``), it must
be escaped with an additional dollar sign (``$$``). For example, a password of
``super$ecure`` would need to be entered as ``super$$ecure``. This is
necessary due to the way `oslo.config formats strings`_.

.. _oslo.config formats strings: https://bugs.launchpad.net/oslo-incubator/+bug/1259729
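To illustrate the escaping rule, a hedged sketch of how such a credential might appear in a glance configuration file (the section and option names are assumptions; only the ``super$$ecure`` form comes from the text above):

```ini
[glance_store]
# Raw password is "super$ecure"; the dollar sign is doubled so oslo.config
# does not treat "$" as an interpolation marker.
swift_store_key = super$$ecure
```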
@@ -10,6 +10,12 @@ for Ansible. Start by getting those files into the correct places:

``/opt/openstack-ansible/etc/openstack_deploy`` directory to the
``/etc/openstack_deploy`` directory.

.. note::

   As of Newton, the ``env.d`` directory has been moved from this source
   directory to ``playbooks/inventory/``. See `Appendix H`_ for more
   details on this change.

#. Change to the ``/etc/openstack_deploy`` directory.
#. Copy the ``openstack_user_config.yml.example`` file to

@@ -40,7 +46,8 @@ OpenStack-Ansible's dynamic inventory generation has a concept called

`affinity`. This determines how many containers of a similar type are deployed
onto a single physical host.

Using `shared-infra_hosts` as an example, consider this
``openstack_user_config.yml``:

.. code-block:: yaml
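A hedged sketch of such an entry (the host name and IP address are illustrative assumptions; ``affinity`` controls the per-host container count described above):

```yaml
shared-infra_hosts:
  infra1:
    # Build only one galera container on this host instead of the default
    affinity:
      galera_container: 1
    ip: 172.29.236.101
```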
@@ -123,6 +130,7 @@ fine-tune certain security configurations.

.. _openstack-ansible-security: http://docs.openstack.org/developer/openstack-ansible-security/
.. _Security Technical Implementation Guide (STIG): https://en.wikipedia.org/wiki/Security_Technical_Implementation_Guide
.. _Configuration: http://docs.openstack.org/developer/openstack-ansible-security/configuration.html
.. _Appendix H: ../install-guide/app-custom-layouts.html

--------------
@@ -26,8 +26,8 @@ Overriding .conf files

~~~~~~~~~~~~~~~~~~~~~~

The most common use case for implementing overrides is for the
``<service>.conf`` files (for example, ``nova.conf``). These files use a
standard ``INI`` file format.

For example, if you add the following parameters to ``nova.conf``:
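The original example is elided in this excerpt. As a hedged sketch of the override mechanism (the option and value are assumptions; ``nova_nova_conf_overrides`` follows the OpenStack-Ansible ``<service>_<file>_overrides`` naming convention), such parameters would be expressed in ``user_variables.yml`` as:

```yaml
# Merged into the [DEFAULT] section of the rendered nova.conf
nova_nova_conf_overrides:
  DEFAULT:
    remove_unused_original_minimum_age_seconds: 43200
```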
@@ -31,8 +31,8 @@ many services as possible.

Self-signed certificates
~~~~~~~~~~~~~~~~~~~~~~~~

Self-signed certificates ensure you are able to start quickly and you are able
to encrypt data in transit. However, they do not provide a high level of trust
for highly secure environments. The use of self-signed certificates is
currently the default in OpenStack-Ansible. When self-signed certificates are
being used, certificate verification must be disabled using the following

@@ -82,9 +82,9 @@ To force a self-signed certificate to regenerate, you can pass the variable to

To force a self-signed certificate to regenerate with every playbook run,
set the appropriate regeneration option to ``true``. For example, if
you have already run the ``os-horizon`` playbook, but you want to regenerate
the self-signed certificate, set the ``horizon_ssl_self_signed_regen`` variable
to ``true`` in ``/etc/openstack_deploy/user_variables.yml``:

.. code-block:: yaml
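The one-line snippet that belongs here follows directly from the sentence above:

```yaml
# Regenerate the horizon self-signed certificate on every playbook run
horizon_ssl_self_signed_regen: true
```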
@@ -41,9 +41,10 @@ Install additional software packages and configure NTP.

Configuring the network
~~~~~~~~~~~~~~~~~~~~~~~

Ansible deployments fail if the deployment server is unable to SSH to the
containers. Configure the deployment host to be on the same network designated
for container management. This configuration reduces the rate of failure due
to connectivity issues.

The following network information is used as an example:
@@ -89,8 +89,9 @@ Production environment

~~~~~~~~~~~~~~~~~~~~~~

The layout for a production environment involves seven target
hosts in total: three control plane and infrastructure hosts, two compute
hosts, one storage host and one log aggregation host. It also has the
following features:

- Bonded NICs
- NFS/Ceph-backed storage for nova, glance, and cinder
@@ -7,7 +7,8 @@ Network architecture

====================

For a production environment, some components are mandatory, such as bridges
described below. We recommend other components such as a bonded network
interface.

.. important::

@@ -23,11 +24,11 @@ particular environment.

Bonded network interfaces
~~~~~~~~~~~~~~~~~~~~~~~~~

The reference architecture for a production environment includes bonded
network interfaces, which use multiple physical network interfaces for better
redundancy and throughput. Avoid using two ports on the same multi-port
network card for the same bonded interface since a network card failure
affects both physical network interfaces used by the bond.

The ``bond0`` interface carries traffic from the containers
running your OpenStack infrastructure. Configure a static IP address on the
@@ -64,15 +64,15 @@ Logging hosts

logs on the logging hosts.

Hosts that provide Block Storage (cinder) volumes must have logical volume
manager (LVM) support. Ensure those hosts have a ``cinder-volumes`` volume
group that OpenStack-Ansible can configure for use with cinder.
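A hedged sketch of preparing such a volume group (the device path ``/dev/sdb`` is an assumption; substitute the dedicated disk on your storage host):

```bash
# Register the disk as an LVM physical volume, then create the volume
# group name that OpenStack-Ansible looks for.
pvcreate /dev/sdb
vgcreate cinder-volumes /dev/sdb
```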
Each control plane host runs services inside LXC containers. The container
filesystems are deployed by default onto the root filesystem of each control
plane host. You have the option to deploy those container filesystems
into logical volumes by creating a volume group called ``lxc``.
OpenStack-Ansible creates a 5GB logical volume for the filesystem of each
container running on the host.

Network requirements
~~~~~~~~~~~~~~~~~~~~

@@ -83,8 +83,9 @@ Network requirements

network interface. This works for small environments, but it can cause
problems when your environment grows.

For the best performance, reliability and scalability in a production
environment, deployers should consider a network configuration that contains
the following features:

* Bonded network interfaces: Increases performance and/or reliability
  (dependent on bonding architecture).
@@ -45,10 +45,10 @@ and how to configure SSL certificates, see

Host security hardening
~~~~~~~~~~~~~~~~~~~~~~~

Security hardening is applied by default to OpenStack infrastructure and
compute hosts using the ``openstack-ansible-security`` role. The purpose of
the role is to apply as many security configurations as possible without
disrupting the operation of an OpenStack deployment.

Refer to the documentation on :ref:`security_hardening` for more information
on the role and how to enable it in OpenStack-Ansible.

@@ -102,19 +102,19 @@ The resources within an OpenStack environment can be divided into two groups:

* MariaDB
* RabbitMQ

Configure firewalls to limit network access to all services that users must
access directly.

Other services, such as MariaDB and RabbitMQ, must be segmented away from
direct user access. Configure a firewall to only allow connectivity to
these services within the OpenStack environment itself. This
reduces an attacker's ability to query or manipulate data in OpenStack's
critical database and queuing services, especially if one of these services
has a known vulnerability.

For more details on recommended network policies for OpenStack clouds, refer
to the `API endpoint process isolation and policy`_ section from the
`OpenStack Security Guide`_.

.. _API endpoint process isolation and policy: http://docs.openstack.org/security-guide/api-endpoints/api-endpoint-configuration-recommendations.html#network-policy
.. _OpenStack Security Guide: http://docs.openstack.org/security-guide
@@ -8,10 +8,8 @@ Overview

~~~~~~~~

This example uses the following parameters to configure networking on a
single target host. See `Figure 3.2`_ for a visual representation of these
parameters in the architecture.

- VLANs:

@@ -47,6 +45,7 @@ for a visual representation of these parameters in the architecture.

- Storage: 172.29.244.11

.. _Figure 3.2: targethosts-networkexample.html#fig_hosts-target-network-containerexample

**Figure 3.2. Target host for infrastructure, networking, compute, and
storage services**
@@ -9,8 +9,8 @@ Overview

This example allows you to use your own parameters for the deployment.

The following is a table of the bridges that are configured on hosts, if
you followed the previously proposed design.

+-------------+-----------------------+-------------------------------------+
| Bridge name | Best configured on    | With a static IP                    |

@@ -188,8 +188,10 @@ Example for 3 controller nodes and 2 compute nodes

- Host management gateway: 10.240.0.1
- DNS servers: 69.20.0.164 69.20.0.196
- Container management: 172.29.236.11 - 172.29.236.13
- Tunnel: no IP (because the IPs exist in the containers, when the components
  aren't deployed directly on metal)
- Storage: no IP (because the IPs exist in the containers, when the components
  aren't deployed directly on metal)

- Addresses for the compute nodes:
@@ -79,10 +79,11 @@ practices, refer to `GitHub's documentation on generating SSH keys`_.

Configuring LVM
~~~~~~~~~~~~~~~

`Logical Volume Manager (LVM)`_ allows a single device to be split into
multiple logical volumes which appear as a physical storage device to the
operating system. The Block Storage (cinder) service, as well as the LXC
containers that run the OpenStack infrastructure, can optionally use LVM for
their data storage.

.. note::
@@ -135,9 +135,9 @@ Omit a service or component from the deployment

To omit a component from a deployment, several options exist.

- You could remove the ``physical_skel`` link between the container group and
  the host group. The simplest way to do this is to simply copy the relevant
  file to the ``/etc/openstack_deploy/env.d/`` directory, and set the
  following information:

.. code-block:: yaml
@@ -50,7 +50,8 @@ On the deployment host, copy the Nuage user variables file from

# cp /opt/nuage-openstack-ansible/etc/user_nuage_vars.yml /etc/openstack_deploy/

Also modify the following parameters in this file as per your Nuage VCS
environment:

#. Replace *VSD Enterprise Name* parameter with user desired name of VSD
   Enterprise:
@@ -56,8 +56,8 @@ parameters.

PLUMgrid configurations
~~~~~~~~~~~~~~~~~~~~~~~

On the deployment host, create a PLUMgrid user variables file using the sample
in ``/opt/plumgrid-ansible/etc/user_pg_vars.yml.example`` and copy it to
``/etc/openstack_deploy/user_pg_vars.yml``. You must configure the
following parameters.

@@ -116,11 +116,11 @@ to as ``gateway_devs`` in the configuration files.

  gateway2:
    ip: GW02_IP_ADDRESS

Replace ``*_IP_ADDRESS`` with the IP address of the ``br-mgmt`` container
management bridge on each Gateway host.

#. Add a ``gateway_hosts`` section to the end of the PLUMgrid
   ``user_pg_vars.yml`` file:

.. note::
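A hedged sketch of that section, extrapolated from the ``gateway2`` fragment above (the surrounding layout is an assumption; the ``GW*_IP_ADDRESS`` placeholders follow the pattern shown):

```yaml
gateway_hosts:
  gateway1:
    ip: GW01_IP_ADDRESS
  gateway2:
    ip: GW02_IP_ADDRESS
```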
@@ -7,7 +7,8 @@ The Telemetry (ceilometer) alarming services perform the following functions:

- Creates an API endpoint for controlling alarms.

- Allows you to set alarms based on threshold evaluation for a collection of
  samples.
@@ -7,9 +7,11 @@ The Telemetry module (ceilometer) performs the following functions:

- Efficiently polls metering data related to OpenStack services.

- Collects event and metering data by monitoring notifications sent from
  services.

- Publishes collected data to various targets including data stores and
  message queues.

.. note::

@@ -32,8 +34,8 @@ Setting up a MongoDB database for ceilometer

# apt-get install mongodb-server mongodb-clients python-pymongo

2. Edit the ``/etc/mongodb.conf`` file and change the ``bind_ip`` to the
   management interface:

.. code-block:: ini
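The elided setting follows from the step above; a hedged sketch (the address is an illustrative assumption for the container management network):

```ini
# /etc/mongodb.conf: listen on the management interface
bind_ip = 172.29.236.20
```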
@@ -57,11 +57,11 @@ Ceph configuration:

auth_service_required = cephx
auth_client_required = cephx

The use of the ``ceph_conf_file`` variable is optional. By default,
OpenStack-Ansible obtains a copy of ``ceph.conf`` from one of your Ceph
monitors. This transfer of ``ceph.conf`` requires the OpenStack-Ansible
deployment host public key to be deployed to all of the Ceph monitors. More
details are available here: `Deploying SSH Keys`_.

The following minimal example configuration sets nova and glance
to use ceph pools: ``ephemeral-vms`` and ``images`` respectively.
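The example itself is elided in this excerpt; a hedged sketch, assuming the usual OpenStack-Ansible variable names apply (``glance_rbd_store_pool`` and ``nova_libvirt_images_rbd_pool`` are assumptions; only the pool names come from the text):

```yaml
glance_default_store: rbd
glance_rbd_store_pool: images
nova_libvirt_images_rbd_pool: ephemeral-vms
```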
@@ -3,8 +3,8 @@

Configuring the Block (cinder) storage service (optional)
=========================================================

By default, the Block (cinder) storage service installs on the host itself
using the LVM backend.

.. note::

@@ -181,7 +181,8 @@ By default, no horizon configuration is set.

Configuring cinder to use LVM
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

#. List the ``container_vars`` that contain the storage options for the target
   host.

.. note::
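A hedged sketch of such an entry in ``openstack_user_config.yml`` (the host name, address, and backend name are illustrative assumptions; the driver path and the ``cinder-volumes`` group are the standard choices for the LVM backend):

```yaml
storage_hosts:
  lvm-storage1:
    ip: 172.29.236.16
    container_vars:
      cinder_backends:
        lvm:
          volume_backend_name: LVM_iSCSI
          volume_driver: cinder.volume.drivers.lvm.LVMVolumeDriver
          volume_group: cinder-volumes
```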
@@ -1,9 +1,9 @@

`Home <index.html>`__ OpenStack-Ansible Installation Guide

Configuring ADFS 3.0 as an identity provider
============================================

To install Active Directory Federation Services (ADFS):

* `Prerequisites for ADFS from Microsoft Technet <https://technet.microsoft.com/library/bf7f9cf4-6170-40e8-83dd-e636cb4f9ecb>`_
* `ADFS installation procedure from Microsoft Technet <https://technet.microsoft.com/en-us/library/dn303423>`_

@@ -35,9 +35,9 @@ Configuring ADFS

References
~~~~~~~~~~

* http://blogs.technet.com/b/rmilne/archive/2014/04/28/how-to-install-adfs-2012-r2-for-office-365.aspx
* http://blog.kloud.com.au/2013/08/14/powershell-deployment-of-web-application-proxy-and-adfs-in-under-10-minutes/
* https://ethernuno.wordpress.com/2014/04/20/install-adds-on-windows-server-2012-r2-with-powershell/

--------------
@@ -47,13 +47,14 @@ The following list is a reference of allowed settings:

  this IdP.

* ``regen_cert`` by default is set to ``False``. When set to ``True``, the
  next Ansible run replaces the existing signing certificate with a new one.
  This setting is added as a convenience mechanism to renew a certificate when
  it is close to its expiration date.

* ``idp_entity_id`` is the entity ID. The service providers
  use this as a unique identifier for each IdP.
  ``<keystone-public-endpoint>/OS-FEDERATION/saml2/idp`` is the value we
  recommend for this setting.

* ``idp_sso_endpoint`` is the single sign-on endpoint for this IdP.
  ``<keystone-public-endpoint>/OS-FEDERATION/saml2/sso`` is the value

@@ -72,7 +73,8 @@ The following list is a reference of allowed settings:

* ``organization_name``, ``organization_display_name``, ``organization_url``,
  ``contact_company``, ``contact_name``, ``contact_surname``,
  ``contact_email``, ``contact_telephone`` and ``contact_type`` are
  settings that describe the identity provider. These settings are all
  optional.

--------------
@@ -54,9 +54,9 @@ containers:

References
----------

* http://docs.openstack.org/developer/keystone/configure_federation.html
* http://docs.openstack.org/developer/keystone/extensions/shibboleth.html
* https://wiki.shibboleth.net/confluence/display/SHIB2/NativeSPConfiguration

--------------
@@ -104,8 +104,9 @@ The following settings must be set to configure a service provider (SP):

#. ``metadata_reload`` is the number of seconds between metadata
   refresh polls.

#. ``federated_identities`` is a mapping list of domain, project, group, and
   users. See
   `Configure Identity Service (keystone) Domain-Project-Group-Role mappings`_
   for more information.

#. ``protocols`` is a list of protocols supported for the IdP and the set

@@ -113,7 +114,11 @@ The following settings must be set to configure a service provider (SP):

   with the name ``saml2``.

#. ``mapping`` is the local to remote mapping configuration for federated
   users. See
   `Configure Identity Service (keystone) Domain-Project-Group-Role mappings`_
   for more information.

.. _Configure Identity Service (keystone) Domain-Project-Group-Role mappings: configure-federation-mapping.html

--------------
@ -48,8 +48,8 @@ is "cloud2".
Keystone service provider (SP) configuration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The configuration for keystone SP needs to define the remote-to-local user
mappings. The following is the complete configuration:

.. code::
@ -142,9 +142,9 @@ user based on the attributes exposed by the IdP in the SAML2 assertion. The
use case for this scenario calls for mapping users in "Group A" and "Group B",
but the group or groups a user belongs to are not exported in the SAML2
assertion. To make the example work, the groups A and B in the use case are
projects. Export projects A and B in the assertion under the
``openstack_project`` attribute. The two rules above select the corresponding
project using the ``any_one_of`` selector.

The ``local`` part of the mapping rule specifies how keystone represents
the remote user in the local SP cloud. Configuring the two federated identities
@ -157,10 +157,10 @@ role.
Keystone creates an ephemeral user in the specified group as
you cannot specify user names.

The final setting of the configuration defines the SAML2 ``attributes``
that the IdP exports. For a keystone IdP, these are the five attributes
shown above. Configure the attributes above into the Shibboleth service. This
ensures they are available to use in the mappings.
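As an illustrative sketch of the kind of mapping rule discussed above (the group ID and project name are placeholders, not values taken from this guide):

.. code-block:: json

   [
       {
           "local": [
               {"group": {"id": "GROUP_A_ID"}}
           ],
           "remote": [
               {"type": "openstack_project", "any_one_of": ["Project A"]}
           ]
       }
   ]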
Reviewing or modifying the configuration with the OpenStack client
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@ -48,8 +48,8 @@ The following steps above involve manually sending API requests.
The infrastructure for the command line utilities that performs these steps
for the user does not exist.

To obtain access to a SP cloud, OpenStack-Ansible provides a script that wraps
the above steps. The script is called ``federated-login.sh`` and is
used as follows:

.. code::
@ -11,10 +11,10 @@ swift backend or some form of shared storage.
Configuring default and additional stores
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
OpenStack-Ansible provides two configurations for controlling where glance
stores files: the default store and additional stores. glance stores images in
file-based storage by default. Two additional stores, ``http`` and ``cinder``
(Block Storage), are also enabled by default.

You can choose alternative default stores and alternative additional stores.
For example, a deployer that uses Ceph may configure the following Ansible
@ -165,8 +165,8 @@ Special considerations
If the swift password or key contains a dollar sign (``$``), it must
be escaped with an additional dollar sign (``$$``). For example, a password of
``super$ecure`` would need to be entered as ``super$$ecure``. This is
necessary due to the way `oslo.config formats strings`_.

.. _oslo.config formats strings: https://bugs.launchpad.net/oslo-incubator/+bug/1259729
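To illustrate the escaping described above, assuming the key is supplied through a variable such as ``glance_swift_store_key`` in ``user_variables.yml`` (the value here is hypothetical):

.. code-block:: yaml

   # Actual password is super$ecure; the extra $ is consumed by oslo.config.
   glance_swift_store_key: "super$$ecure"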
@ -3,13 +3,14 @@
Configuring the Dashboard (horizon) (optional)
==============================================
Customize your horizon deployment in
``/etc/openstack_deploy/user_variables.yml``.

Securing horizon communication with SSL certificates
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The OpenStack-Ansible project provides the ability to secure Dashboard
(horizon) communications with self-signed or user-provided SSL certificates.
Refer to `Securing services with SSL certificates`_ for available configuration
options.
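As a hedged sketch, user-provided certificates might be wired in through variables of this form in ``/etc/openstack_deploy/user_variables.yml`` (the variable names and paths are assumptions for illustration; check the role defaults for the exact names):

.. code-block:: yaml

   # Paths on the deploy host to a user-provided certificate and key.
   horizon_user_ssl_cert: /etc/openstack_deploy/ssl/horizon.crt
   horizon_user_ssl_key: /etc/openstack_deploy/ssl/horizon.key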
@ -20,8 +21,8 @@ Configuring a horizon customization module
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Openstack-Ansible supports deployment of a horizon `customization module`_.

After building your customization module, configure the
``horizon_customization_module`` variable with a path to your module.

.. code-block:: yaml
@ -46,7 +46,8 @@ OpenStack-Ansible's dynamic inventory generation has a concept called
`affinity`. This determines how many containers of a similar type are deployed
onto a single physical host.

Using `shared-infra_hosts` as an example, consider this
``openstack_user_config.yml``:

.. code-block:: yaml
@ -5,14 +5,14 @@ Configuring the Bare Metal (ironic) service (optional)
.. note::

   This feature is experimental at this time and it has not been fully
   production tested yet. These implementation instructions assume that
   ironic is being deployed as the sole hypervisor for the region.

Ironic is an OpenStack project which provisions bare metal (as opposed to
virtual) machines by leveraging common technologies such as PXE boot and IPMI
to cover a wide range of hardware, while supporting pluggable drivers to allow
vendor-specific functionality to be added.

OpenStack's ironic project makes physical servers as easy to provision as
virtual machines in a cloud.
@ -140,8 +140,8 @@ Creating an ironic flavor
After successfully deploying the ironic node on subsequent boots, the instance
boots from your local disk as first preference. This speeds up the deployed
node's boot time. Alternatively, if this is not set, the ironic node PXE boots
first and allows for operator-initiated image updates and other operations.

.. note::
@ -151,7 +151,8 @@ allows for operator-initiated image updates and other operations.
Enroll ironic nodes
-------------------

#. From the utility container, enroll a new baremetal node by executing the
   following:

   .. code-block:: bash
@ -3,7 +3,8 @@
Configuring the Identity service (keystone) (optional)
======================================================
Customize your keystone deployment in
``/etc/openstack_deploy/user_variables.yml``.

Securing keystone communication with SSL certificates
@ -94,31 +94,16 @@ Deploying LBaaS v2
Ensure that ``neutron_plugin_base`` includes all of the plugins that you
want to deploy with neutron in addition to the LBaaS plugin.

#. Run the neutron and horizon playbooks to deploy the LBaaS v2 agent and
   enable the LBaaS v2 panels in horizon:

   .. code-block:: console

      # cd /opt/openstack-ansible/playbooks
      # openstack-ansible os-neutron-install.yml
      # openstack-ansible os-horizon-install.yml
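As a hedged sketch of the ``neutron_plugin_base`` requirement mentioned above, the override in ``/etc/openstack_deploy/user_variables.yml`` might look as follows (the plugin list is an assumption for illustration; the LBaaS v2 entry is the upstream plugin path, and the other entries depend on your deployment):

.. code-block:: yaml

   neutron_plugin_base:
     - router
     - metering
     - neutron_lbaas.services.loadbalancer.plugin.LoadBalancerPluginv2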
.. _lbaas-special-notes:

Special notes about LBaaS
-------------------------
@ -170,8 +155,8 @@ The following procedure describes how to modify the
- { name: "ah4", pattern: "CONFIG_INET_AH=", group: "network_hosts" }
- { name: "ipcomp", pattern: "CONFIG_INET_IPCOMP=", group: "network_hosts" }

#. Execute the openstack hosts setup in order to load the kernel modules at
   boot and runtime in the network hosts

   .. code-block:: shell-session
@ -52,8 +52,8 @@ You can disable discard by setting ``nova_libvirt_hw_disk_discard`` to
string to disable ``network=writeback``.

The following minimal example configuration sets nova to use the
``ephemeral-vms`` Ceph pool. The following example uses cephx authentication,
and requires an existing ``cinder`` account for the ``ephemeral-vms`` pool:

.. code-block:: console
@ -70,7 +70,8 @@ If you have a different Ceph username for the pool, use it as:
cinder_ceph_client: <ceph-username>

* The `Ceph documentation for OpenStack`_ has additional information about
  these settings.
* `OpenStack-Ansible and Ceph Working Example`_
@ -26,8 +26,8 @@ Overriding .conf files
~~~~~~~~~~~~~~~~~~~~~~
The most common use case for implementing overrides is for the
``<service>.conf`` files (for example, ``nova.conf``). These files use a
standard ``INI`` file format.

For example, if you add the following parameters to ``nova.conf``:
@ -7,7 +7,8 @@ RabbitMQ provides the messaging broker for various OpenStack services. The
OpenStack-Ansible project configures a plaintext listener on port 5672 and
an SSL/TLS encrypted listener on port 5671.

Customize your RabbitMQ deployment in
``/etc/openstack_deploy/user_variables.yml``.

Add a TLS encrypted listener to RabbitMQ
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@ -31,8 +31,8 @@ many services as possible.
Self-signed certificates
~~~~~~~~~~~~~~~~~~~~~~~~
Self-signed certificates ensure you are able to start quickly and you are able
to encrypt data in transit. However, they do not provide a high level of trust
for highly secure environments. The use of self-signed certificates is
currently the default in OpenStack-Ansible. When self-signed certificates are
being used, certificate verification must be disabled using the following
@ -82,9 +82,9 @@ To force a self-signed certificate to regenerate, you can pass the variable to
To force a self-signed certificate to regenerate with every playbook run,
set the appropriate regeneration option to ``true``. For example, if
you have already run the ``os-horizon`` playbook, but you want to regenerate
the self-signed certificate, set the ``horizon_ssl_self_signed_regen`` variable
to ``true`` in ``/etc/openstack_deploy/user_variables.yml``:

.. code-block:: yaml
@ -41,9 +41,10 @@ Install additional software packages and configure NTP.
Configuring the network
~~~~~~~~~~~~~~~~~~~~~~~
Ansible deployments fail if the deployment server is unable to SSH to the
containers. Configure the deployment host to be on the same network designated
for container management. This configuration reduces the rate of failure due
to connectivity issues.

The following network information is used as an example:
@ -64,15 +64,15 @@ Logging hosts
Hosts that provide Block Storage (cinder) volumes must have logical volume
manager (LVM) support. Ensure those hosts have a ``cinder-volumes`` volume
group that OpenStack-Ansible can configure for use with cinder.

Each control plane host runs services inside LXC containers. The container
filesystems are deployed by default onto the root filesystem of each control
plane host. You have the option to deploy those container filesystems
into logical volumes by creating a volume group called ``lxc``.
OpenStack-Ansible creates a 5GB logical volume for the filesystem of each
container running on the host.
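As a sketch of the volume group preparation described above (the device names are assumptions; substitute the spare disks or partitions on your hosts):

.. code-block:: shell-session

   # pvcreate --metadatasize 10240 /dev/sdb
   # vgcreate cinder-volumes /dev/sdb

   # pvcreate --metadatasize 10240 /dev/sdc
   # vgcreate lxc /dev/sdc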
Network requirements
~~~~~~~~~~~~~~~~~~~~
@ -47,8 +47,8 @@ Host security hardening
~~~~~~~~~~~~~~~~~~~~~~~
Deployers can apply security hardening to OpenStack infrastructure and compute
hosts using the ``openstack-ansible-security`` role. The purpose of the role
is to apply as many security configurations as possible without disrupting the
operation of an OpenStack deployment.

Refer to the documentation on :ref:`security_hardening` for more information
@ -103,8 +103,8 @@ The resources within an OpenStack environment can be divided into two groups:
* MariaDB
* RabbitMQ

To manage instances, you are able to access certain public API endpoints, such
as the Nova or Neutron API. Configure firewalls to limit network access to
these services.

Other services, such as MariaDB and RabbitMQ, must be segmented away from
@ -8,10 +8,8 @@ Overview
~~~~~~~~
This example uses the following parameters to configure networking on a
single target host. See `Figure 3.2`_ for a visual representation of these
parameters in the architecture.

- VLANs:
@ -47,6 +45,7 @@ for a visual representation of these parameters in the architecture.
- Storage: 172.29.244.11

.. _Figure 3.2: targethosts-networkexample.html#fig_hosts-target-network-containerexample

**Figure 3.2. Target host for infrastructure, networking, compute, and
storage services**
@ -9,8 +9,8 @@ Overview
This example allows you to use your own parameters for the deployment.

The following is a table of the bridges that are configured on hosts, if
you followed the previously proposed design.

+-------------+-----------------------+-------------------------------------+
| Bridge name | Best configured on    | With a static IP                    |
@ -188,8 +188,10 @@ Example for 3 controller nodes and 2 compute nodes
- Host management gateway: 10.240.0.1
- DNS servers: 69.20.0.164 69.20.0.196
- Container management: 172.29.236.11 - 172.29.236.13
- Tunnel: no IP (because IPs exist in the containers, when the components
  aren't deployed directly on metal)
- Storage: no IP (because IPs exist in the containers, when the components
  aren't deployed directly on metal)

- Addresses for the compute nodes:
@ -79,10 +79,11 @@ practices, refer to `GitHub's documentation on generating SSH keys`_.
Configuring LVM
~~~~~~~~~~~~~~~
`Logical Volume Manager (LVM)`_ allows a single device to be split into
multiple logical volumes which appear as a physical storage device to the
operating system. The Block Storage (cinder) service, as well as the LXC
containers that run the OpenStack infrastructure, can optionally use LVM for
their data storage.

.. note::
@ -15,7 +15,8 @@ script. Any of these steps can safely be run multiple times.
Check out the Newton release
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Ensure your OpenStack-Ansible code is on the latest Newton release tag
(14.x.x).

.. code-block:: console
@ -19,7 +19,8 @@ A minor upgrade typically requires the following steps:
# cd /opt/openstack-ansible

#. Ensure your OpenStack-Ansible code is on the latest Newton release tag
   (14.x.x):

   .. code-block:: console
@ -114,7 +115,8 @@ tasks will execute.
# cd /opt/openstack-ansible/playbooks

#. See the hosts in the ``nova_compute`` group which a playbook executes
   against:

   .. code-block:: console
@ -35,23 +35,23 @@ variable override files matching the pattern
Variable names within comments are updated.

This script creates files of the form
``/etc/openstack_deploy.NEWTON/VARS_MIGRATED_file``. For example, once the
script has processed the file ``/etc/openstack_deploy/user_variables.yml``, it
creates ``/etc/openstack_deploy.NEWTON/VARS_MIGRATED_user_variables``. This
indicates to OpenStack-Ansible to skip this step on successive runs. The script
itself does not check for this file.

The variable changes are shown in the following table.

.. This table was made with the output of
   ``scripts/upgrade-utilities/scripts/make_rst_table.py``. Insertion needs to
   be done manually since the OpenStack publish jobs do not use `make` and
   there is not yet a Sphinx extension that runs an arbitrary script on build.
+--------------------------------------+--------------------------------------+
| Old Value                            | New Value                            |
+======================================+======================================+
+--------------------------------------+--------------------------------------+

Called by :ref:`config-change-playbook`
@ -27,7 +27,8 @@ the cleanup.
``ansible_fact_cleanup.yml``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This calls a script to remove files in
``/etc/openstack_deploy/ansible_facts/``

.. _config-change-playbook:
@ -44,31 +45,32 @@ changing the configuration.
``user-secrets-adjustment.yml``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This playbook ensures that the user secrets file is updated based on the
example file in the main repository, making it possible to guarantee all
secrets move into the upgraded environment and generate appropriately.

This adds only new secrets, such as those necessary for new services or new
settings added to existing services. Values set previously are not changed.

.. _repo-server-pip-conf-removal:
``repo-server-pip-conf-removal.yml``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The presence of ``pip.conf`` locks down all Python installations to packages
on the repo server. If ``pip.conf`` exists on the repo server, it creates a
circular dependency, causing build failures.

.. _old-hostname-compatibility:
``old-hostname-compatibility.yml``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This playbook ensures an alias is created for old hostnames that may not be
RFC 1034 or 1035 compatible. Using a hostname alias allows agents to continue
working in cases where the hostname is also the registered agent name. This
playbook is only needed for in-place upgrades of existing nodes; if a node is
replaced or rebuilt, it is brought into the cluster using a compliant
hostname.

.. _setup-infra-playbook:
@ -79,7 +81,8 @@ The ``playbooks`` directory contains the ``setup-infrastructure.yml`` playbook.
The ``run-upgrade.sh`` script calls ``setup-infrastructure.yml`` with specific
arguments to upgrade MariaDB and RabbitMQ.

For example, to run an upgrade for both components at once, run the following
commands:

.. code-block:: console
@ -95,8 +98,8 @@ upgrade RabbitMQ.
controls upgrading the major or minor versions.

Upgrading RabbitMQ in the Newton release is optional. The
``run-upgrade.sh`` script does not automatically upgrade it. To upgrade
RabbitMQ, insert the ``rabbitmq_upgrade: true``
line into a file, such as: ``/etc/openstack_deploy/user_variables.yml``.
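Assuming the variables above, a combined invocation might look like the following (a sketch of the pattern, not necessarily the exact arguments ``run-upgrade.sh`` passes):

.. code-block:: console

   # cd /opt/openstack-ansible/playbooks
   # openstack-ansible setup-infrastructure.yml -e 'galera_upgrade=true' \
       -e 'rabbitmq_upgrade=true'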
The ``galera_upgrade`` variable tells the ``galera_server`` role to remove the
@ -39,8 +39,7 @@ commands=
extensions = .rst
# Disable some doc8 checks:
# D000: Check RST validity
ignore = D000

[testenv:releasenotes]
# NOTE(sdague): this target does not use constraints because