Fix 'D001 Line too long' documentation lint failures
Change-Id: I4522fe318541dac7f4ff4e45d72d4cd8869420ba
This commit is contained in: parent fbab4f5508, commit 24e63abeb2
@@ -107,11 +107,11 @@ Deploying the Role
 #. If your service is installed from source or relies on python packages which
    need to be installed from source, specify a repository for the source
    code of each requirement by adding a file to your deploy host under
-   ``playbooks/defaults/repo_packages`` in the OpenStack-Ansible source repository
-   and following the pattern of files currently in that directory. You could
-   also simply add an entry to an existing file there. Be sure to run the
-   ``repo-build.yml`` play later so that wheels for your packages will be
-   included in the repository infrastructure.
+   ``playbooks/defaults/repo_packages`` in the OpenStack-Ansible source
+   repository and following the pattern of files currently in that directory.
+   You could also simply add an entry to an existing file there. Be sure to
+   run the ``repo-build.yml`` play later so that wheels for your packages will
+   be included in the repository infrastructure.
 #. Make any required adjustments to the load balancer configuration
    (e.g. modify ``playbooks/vars/configs/haproxy_config.yml`` in the
    OpenStack-Ansible source repository on your deploy host) so that your
@@ -7,10 +7,11 @@ The Telemetry (ceilometer) alarming services perform the following functions:

 - Creates an API endpoint for controlling alarms.

-- Allows you to set alarms based on threshold evaluation for a collection of samples.
+- Allows you to set alarms based on threshold evaluation for a collection of
+  samples.

-Aodh on OpenStack-Ansible requires a configured MongoDB backend prior to running
-the Aodh playbooks. To specify the connection data, edit the
+Aodh on OpenStack-Ansible requires a configured MongoDB backend prior to
+running the Aodh playbooks. To specify the connection data, edit the
 ``user_variables.yml`` file (see section `Configuring the user data`_
 below).

@@ -7,9 +7,11 @@ The Telemetry module (ceilometer) performs the following functions:

 - Efficiently polls metering data related to OpenStack services.

-- Collects event and metering data by monitoring notifications sent from services.
+- Collects event and metering data by monitoring notifications sent from
+  services.

-- Publishes collected data to various targets including data stores and message queues.
+- Publishes collected data to various targets including data stores and
+  message queues.

 .. note::

@@ -32,8 +34,8 @@ Setting up a MongoDB database for ceilometer

       # apt-get install mongodb-server mongodb-clients python-pymongo

-2. Edit the ``/etc/mongodb.conf`` file and change the ``bind_ip`` to the management
-   interface:
+2. Edit the ``/etc/mongodb.conf`` file and change the ``bind_ip`` to the
+   management interface:

    .. code-block:: ini

@@ -57,11 +57,11 @@ Ceph configuration:
    auth_service_required = cephx
    auth_client_required = cephx

-The use of the ``ceph_conf_file`` variable is optional. By default, OpenStack-Ansible
-obtains a copy of ``ceph.conf`` from one of your Ceph monitors. This
-transfer of ``ceph.conf`` requires the OpenStack-Ansible deployment host public key
-to be deployed to all of the Ceph monitors. More details are available
-here: `Deploying SSH Keys`_.
+The use of the ``ceph_conf_file`` variable is optional. By default,
+OpenStack-Ansible obtains a copy of ``ceph.conf`` from one of your Ceph
+monitors. This transfer of ``ceph.conf`` requires the OpenStack-Ansible
+deployment host public key to be deployed to all of the Ceph monitors. More
+details are available here: `Deploying SSH Keys`_.

 The following minimal example configuration sets nova and glance
 to use ceph pools: ``ephemeral-vms`` and ``images`` respectively.
@@ -3,8 +3,8 @@
 Configuring the Block (cinder) storage service (optional)
 =========================================================

-By default, the Block (cinder) storage service installs on the host itself using
-the LVM backend.
+By default, the Block (cinder) storage service installs on the host itself
+using the LVM backend.

 .. note::

@@ -181,7 +181,8 @@ By default, no horizon configuration is set.
 Configuring cinder to use LVM
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

-#. List the ``container_vars`` that contain the storage options for the target host.
+#. List the ``container_vars`` that contain the storage options for the target
+   host.

    .. note::

@@ -1,9 +1,9 @@
 `Home <index.html>`__ OpenStack-Ansible Installation Guide

-Configuring Active Directory Federation Services (ADFS) 3.0 as an identity provider
-===================================================================================
+Configuring ADFS 3.0 as an identity provider
+============================================

-To install ADFS:
+To install Active Directory Federation Services (ADFS):

 * `Prerequisites for ADFS from Microsoft Technet <https://technet.microsoft.com/library/bf7f9cf4-6170-40e8-83dd-e636cb4f9ecb>`_
 * `ADFS installation procedure from Microsoft Technet <https://technet.microsoft.com/en-us/library/dn303423>`_
@@ -47,13 +47,14 @@ The following list is a reference of allowed settings:
   this IdP.

 * ``regen_cert`` by default is set to ``False``. When set to ``True``, the
-  next Ansible run replaces the existing signing certificate with a new one. This
-  setting is added as a convenience mechanism to renew a certificate when it
-  is close to its expiration date.
+  next Ansible run replaces the existing signing certificate with a new one.
+  This setting is added as a convenience mechanism to renew a certificate when
+  it is close to its expiration date.

 * ``idp_entity_id`` is the entity ID. The service providers
-  use this as a unique identifier for each IdP. ``<keystone-public-endpoint>/OS-FEDERATION/saml2/idp``
-  is the value we recommend for this setting.
+  use this as a unique identifier for each IdP.
+  ``<keystone-public-endpoint>/OS-FEDERATION/saml2/idp`` is the value we
+  recommend for this setting.

 * ``idp_sso_endpoint`` is the single sign-on endpoint for this IdP.
   ``<keystone-public-endpoint>/OS-FEDERATION/saml2/sso`` is the value
@@ -72,7 +73,8 @@ The following list is a reference of allowed settings:
 * ``organization_name``, ``organization_display_name``, ``organization_url``,
   ``contact_company``, ``contact_name``, ``contact_surname``,
   ``contact_email``, ``contact_telephone`` and ``contact_type`` are
-  settings that describe the identity provider. These settings are all optional.
+  settings that describe the identity provider. These settings are all
+  optional.

 --------------

@@ -104,8 +104,9 @@ The following settings must be set to configure a service provider (SP):
 #. ``metadata_reload`` is the number of seconds between metadata
    refresh polls.

-#. ``federated_identities`` is a mapping list of domain, project, group, and users.
-   See `Configure Identity Service (keystone) Domain-Project-Group-Role mappings <configure-federation-mapping.html>`_
+#. ``federated_identities`` is a mapping list of domain, project, group, and
+   users. See
+   `Configure Identity Service (keystone) Domain-Project-Group-Role mappings`_
    for more information.

 #. ``protocols`` is a list of protocols supported for the IdP and the set
@@ -113,7 +114,11 @@ The following settings must be set to configure a service provider (SP):
    with the name ``saml2``.

 #. ``mapping`` is the local to remote mapping configuration for federated
-   users. For more information, see `Configure Identity Service (keystone) Domain-Project-Group-Role mappings. <configure-federation-mapping.html>`_
+   users. See
+   `Configure Identity Service (keystone) Domain-Project-Group-Role mappings`_
+   for more information.

+.. _Configure Identity Service (keystone) Domain-Project-Group-Role mappings: configure-federation-mapping.html
+
 --------------

@@ -48,8 +48,8 @@ is "cloud2".
 Keystone service provider (SP) configuration
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

-The configuration for keystone SP needs to define the remote-to-local user mappings.
-The following is the complete configuration:
+The configuration for keystone SP needs to define the remote-to-local user
+mappings. The following is the complete configuration:

 .. code::

@@ -142,9 +142,9 @@ user based on the attributes exposed by the IdP in the SAML2 assertion. The
 use case for this scenario calls for mapping users in "Group A" and "Group B",
 but the group or groups a user belongs to are not exported in the SAML2
 assertion. To make the example work, the groups A and B in the use case are
-projects. Export projects A and B in the assertion under the ``openstack_project`` attribute.
-The two rules above select the corresponding project using the ``any_one_of``
-selector.
+projects. Export projects A and B in the assertion under the
+``openstack_project`` attribute. The two rules above select the corresponding
+project using the ``any_one_of`` selector.

 The ``local`` part of the mapping rule specifies how keystone represents
 the remote user in the local SP cloud. Configuring the two federated identities
@@ -157,10 +157,10 @@ role.
 Keystone creates an ephemeral user in the specified group as
 you cannot specify user names.

-The final setting of the configuration defines the SAML2 ``attributes`` that the IdP exports.
-For a keystone IdP, these are the five attributes
-shown above. Configure the attributes above into the Shibboleth service. This
-ensures they are available to use in the mappings.
+The final setting of the configuration defines the SAML2 ``attributes`` that
+the IdP exports. For a keystone IdP, these are the five attributes shown
+above. Configure the attributes above into the Shibboleth service. This
+ensures they are available to use in the mappings.

 Reviewing or modifying the configuration with the OpenStack client
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -48,8 +48,8 @@ The following steps above involve manually sending API requests.
 The infrastructure for the command line utilities that performs these steps
 for the user does not exist.

-To obtain access to an SP cloud, OpenStack-Ansible provides a script that wraps the
-above steps. The script is called ``federated-login.sh`` and is
+To obtain access to an SP cloud, OpenStack-Ansible provides a script that
+wraps the above steps. The script is called ``federated-login.sh`` and is
 used as follows:

 .. code::
@@ -3,13 +3,14 @@
 Configuring the Dashboard (horizon) (optional)
 ==============================================

-Customize your horizon deployment in ``/etc/openstack_deploy/user_variables.yml``.
+Customize your horizon deployment in
+``/etc/openstack_deploy/user_variables.yml``.

 Securing horizon communication with SSL certificates
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

-The OpenStack-Ansible project provides the ability to secure Dashboard (horizon)
-communications with self-signed or user-provided SSL certificates.
+The OpenStack-Ansible project provides the ability to secure Dashboard
+(horizon) communications with self-signed or user-provided SSL certificates.

 Refer to `Securing services with SSL certificates`_ for available configuration
 options.
@@ -20,8 +21,8 @@ Configuring a horizon customization module
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

 OpenStack-Ansible supports deployment of a horizon `customization module`_.
-After building your customization module, configure the ``horizon_customization_module`` variable
-with a path to your module.
+After building your customization module, configure the
+``horizon_customization_module`` variable with a path to your module.

 .. code-block:: yaml

@@ -5,14 +5,14 @@ Configuring the Bare Metal (ironic) service (optional)

 .. note::

-   This feature is experimental at this time and it has not been fully production
-   tested yet. These implementation instructions assume that ironic is being deployed
-   as the sole hypervisor for the region.
+   This feature is experimental at this time and it has not been fully
+   production tested yet. These implementation instructions assume that
+   ironic is being deployed as the sole hypervisor for the region.

-Ironic is an OpenStack project which provisions bare metal (as opposed to virtual)
-machines by leveraging common technologies such as PXE boot and IPMI to cover a wide
-range of hardware, while supporting pluggable drivers to allow vendor-specific
-functionality to be added.
+Ironic is an OpenStack project which provisions bare metal (as opposed to
+virtual) machines by leveraging common technologies such as PXE boot and IPMI
+to cover a wide range of hardware, while supporting pluggable drivers to allow
+vendor-specific functionality to be added.

 OpenStack's ironic project makes physical servers as easy to provision as
 virtual machines in a cloud.
@@ -140,8 +140,8 @@ Creating an ironic flavor

 After successfully deploying the ironic node on subsequent boots, the instance
 boots from your local disk as first preference. This speeds up the deployed
-node's boot time. Alternatively, if this is not set, the ironic node PXE boots first and
-allows for operator-initiated image updates and other operations.
+node's boot time. Alternatively, if this is not set, the ironic node PXE boots
+first and allows for operator-initiated image updates and other operations.

 .. note::

@@ -151,7 +151,8 @@ allows for operator-initiated image updates and other operations.
 Enroll ironic nodes
 -------------------

-#. From the utility container, enroll a new baremetal node by executing the following:
+#. From the utility container, enroll a new baremetal node by executing the
+   following:

    .. code-block:: bash

@@ -3,7 +3,8 @@
 Configuring the Identity service (keystone) (optional)
 ======================================================

-Customize your keystone deployment in ``/etc/openstack_deploy/user_variables.yml``.
+Customize your keystone deployment in
+``/etc/openstack_deploy/user_variables.yml``.


 Securing keystone communication with SSL certificates
@@ -155,8 +155,8 @@ The following procedure describes how to modify the
    - { name: "ah4", pattern: "CONFIG_INET_AH=", group: "network_hosts" }
    - { name: "ipcomp", pattern: "CONFIG_INET_IPCOMP=", group: "network_hosts" }

-#. Execute the openstack hosts setup in order to load the kernel modules at boot
-   and runtime in the network hosts
+#. Execute the openstack hosts setup in order to load the kernel modules at
+   boot and runtime in the network hosts

    .. code-block:: shell-session

@@ -52,8 +52,8 @@ You can disable discard by setting ``nova_libvirt_hw_disk_discard`` to
 string to disable ``network=writeback``.

 The following minimal example configuration sets nova to use the
-``ephemeral-vms`` Ceph pool. The following example uses cephx authentication, and
-requires an existing ``cinder`` account for the ``ephemeral-vms`` pool:
+``ephemeral-vms`` Ceph pool. The following example uses cephx authentication,
+and requires an existing ``cinder`` account for the ``ephemeral-vms`` pool:

 .. code-block:: console

@@ -70,7 +70,8 @@ If you have a different Ceph username for the pool, use it as:

    cinder_ceph_client: <ceph-username>

-* The `Ceph documentation for OpenStack`_ has additional information about these settings.
+* The `Ceph documentation for OpenStack`_ has additional information about
+  these settings.
 * `OpenStack-Ansible and Ceph Working Example`_


@@ -7,7 +7,8 @@ RabbitMQ provides the messaging broker for various OpenStack services. The
 OpenStack-Ansible project configures a plaintext listener on port 5672 and
 an SSL/TLS encrypted listener on port 5671.

-Customize your RabbitMQ deployment in ``/etc/openstack_deploy/user_variables.yml``.
+Customize your RabbitMQ deployment in
+``/etc/openstack_deploy/user_variables.yml``.

 Add a TLS encrypted listener to RabbitMQ
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -84,9 +84,10 @@ General Guidelines for Submitting Code
 * New features, breaking changes and other patches of note must include a
   release note generated using `the reno tool`_. Please see the
   `Documentation and Release Note Guidelines`_ for more information.
-* All patches including code, documentation and release notes should be built and
-  tested locally with the appropriate test suite before submitting for review.
-  See `Development and Testing`_ for more information.
+* All patches including code, documentation and release notes should be built
+  and tested locally with the appropriate test suite before submitting for
+  review. See `Development and Testing`_ for more information.
+
 .. _Git Commit Good Practice: https://wiki.openstack.org/wiki/GitCommitMessages
 .. _workflow documented here: http://docs.openstack.org/infra/manual/developers.html#development-workflow
 .. _advanced gerrit usage: http://www.mediawiki.org/wiki/Gerrit/Advanced_usage
@@ -160,8 +161,8 @@ Backporting
 Documentation and Release Note Guidelines
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

-Documentation is a critical part of ensuring that the deployers of OpenStack-Ansible
-are appropriately informed about:
+Documentation is a critical part of ensuring that the deployers of
+OpenStack-Ansible are appropriately informed about:

 * How to use the project's tooling effectively to deploy OpenStack.
 * How to implement the right configuration to meet the needs of their specific
@@ -101,17 +101,18 @@ See also `Understanding Host Groups`_ in Appendix H.
 user\_*.yml files
 -----------------

-Files in ``/etc/openstack_deploy`` beginning with ``user_`` will be automatically
-sourced in any ``openstack-ansible`` command. Alternatively, the files can be
-sourced with the ``-e`` parameter of the ``ansible-playbook`` command.
+Files in ``/etc/openstack_deploy`` beginning with ``user_`` will be
+automatically sourced in any ``openstack-ansible`` command. Alternatively,
+the files can be sourced with the ``-e`` parameter of the ``ansible-playbook``
+command.

 ``user_variables.yml`` and ``user_secrets.yml`` are used directly by
-OpenStack-Ansible. Adding custom variables used by your own roles and playbooks
-to these files is not recommended. Doing so will complicate your upgrade path
-by making comparison of your existing files with later versions of these files
-more arduous. Rather, recommended practice is to place your own variables in files
-named following the ``user_*.yml`` pattern so they will be sourced alongside
-those used exclusively by OpenStack-Ansible.
+OpenStack-Ansible. Adding custom variables used by your own roles and
+playbooks to these files is not recommended. Doing so will complicate your
+upgrade path by making comparison of your existing files with later versions
+of these files more arduous. Rather, recommended practice is to place your own
+variables in files named following the ``user_*.yml`` pattern so they will be
+sourced alongside those used exclusively by OpenStack-Ansible.

 Ordering and Precedence
 +++++++++++++++++++++++
@@ -136,9 +137,9 @@ All of the services that use YAML, JSON, or INI for configuration can receive
 overrides through the use of an Ansible action plugin named ``config_template``.
 The configuration template engine allows a deployer to use a simple dictionary
 to modify or add items into configuration files at run time that may not have a
-preset template option. All OpenStack-Ansible roles allow for this functionality
-where applicable. Files available to receive overrides can be seen in the
-``defaults/main.yml`` file as standard empty dictionaries (hashes).
+preset template option. All OpenStack-Ansible roles allow for this
+functionality where applicable. Files available to receive overrides can be
+seen in the ``defaults/main.yml`` file as standard empty dictionaries (hashes).

 Practical guidance for using this feature is available in the `Install Guide`_.

@@ -157,12 +158,12 @@ git based or PyPi installable package. When the package is built the repo-build
 role will create the sources as Python wheels to extend the base system and
 requirements.

-While the packages pre-built in the repository-infrastructure are comprehensive,
-it may be needed to change the source locations and versions of packages to suit
-different deployment needs. Adding additional repositories as overrides is as
-simple as listing entries within the variable file of your choice. Any
-``user_.*.yml`` file within the "/etc/openstack_deploy" directory will work
-to facilitate the addition of new packages.
+While the packages pre-built in the repository-infrastructure are
+comprehensive, it may be needed to change the source locations and versions of
+packages to suit different deployment needs. Adding additional repositories as
+overrides is as simple as listing entries within the variable file of your
+choice. Any ``user_.*.yml`` file within the "/etc/openstack_deploy" directory
+will work to facilitate the addition of new packages.


 .. code-block:: yaml
@@ -171,8 +172,8 @@ to facilitate the addition of new packages.
    swift_git_install_branch: master


-Additional lists of python packages can also be overridden using a ``user_.*.yml``
-variable file.
+Additional lists of python packages can also be overridden using a
+``user_.*.yml`` variable file.

 .. code-block:: yaml

@@ -191,8 +192,8 @@ deploy your overridden source code.
 Module documentation
 ++++++++++++++++++++

-These are the options available as found within the virtual module documentation
-section.
+These are the options available as found within the virtual module
+documentation section.

 .. code-block:: yaml

@@ -16,8 +16,8 @@ cluster.
 If necessary, also modify the ``used_ips`` stanza.

 #. If the cluster is utilizing Telemetry/Metering (Ceilometer),
-   edit the ``/etc/openstack_deploy/conf.d/ceilometer.yml`` file and add the host to
-   the ``metering-compute_hosts`` stanza.
+   edit the ``/etc/openstack_deploy/conf.d/ceilometer.yml`` file and add the
+   host to the ``metering-compute_hosts`` stanza.

 #. Run the following commands to add the host. Replace
    ``NEW_HOST_NAME`` with the name of the new host.
@@ -148,9 +148,9 @@ recover cannot join the cluster because it no longer exists.
 Complete failure
 ~~~~~~~~~~~~~~~~

-Restore from backup if all of the nodes in a Galera cluster fail (do not shutdown
-gracefully). Run the following command to determine if all nodes in the
-cluster have failed:
+Restore from backup if all of the nodes in a Galera cluster fail (do not
+shutdown gracefully). Run the following command to determine if all nodes in
+the cluster have failed:

 .. code-block:: shell-session

@@ -22,7 +22,8 @@ deployment:
   directories named after the container or physical host.
 * Each physical host has the logs from its service containers mounted at
   ``/openstack/log/``.
-* Each service container has its own logs stored at ``/var/log/<service_name>``.
+* Each service container has its own logs stored at
+  ``/var/log/<service_name>``.

 --------------

@@ -106,8 +106,8 @@ documentation on `fact caching`_ for more details.
 Forcing regeneration of cached facts
 ------------------------------------

-Cached facts may be incorrect if the host receives a kernel upgrade or new network
-interfaces. Newly created bridges also disrupt cached facts.
+Cached facts may be incorrect if the host receives a kernel upgrade or new
+network interfaces. Newly created bridges also disrupt cached facts.

 This can lead to unexpected errors while running playbooks, and
 require that the cached facts be regenerated.
@@ -118,11 +118,11 @@ options:
 :git-clone: Clone all of the role dependencies using native git

 Notes:
-  When doing role development it may be useful to set ``ANSIBLE_ROLE_FETCH_MODE``
-  to *git-clone*. This will provide you the ability to develop roles within the
-  environment by modifying, patching, or committing changes using an intact
-  git tree while the *galaxy* option scrubs the ``.git`` directory when
-  it resolves a dependency.
+  When doing role development it may be useful to set
+  ``ANSIBLE_ROLE_FETCH_MODE`` to *git-clone*. This will provide you the
+  ability to develop roles within the environment by modifying, patching, or
+  committing changes using an intact git tree while the *galaxy* option scrubs
+  the ``.git`` directory when it resolves a dependency.

 .. code-block:: bash

@@ -145,8 +145,8 @@ for the OpenStack Deployment. This preparation is completed by executing:

    # scripts/bootstrap-aio.sh

-If you wish to add any additional configuration entries for the OpenStack configuration
-then this can be done now by editing
+If you wish to add any additional configuration entries for the OpenStack
+configuration then this can be done now by editing
 ``/etc/openstack_deploy/user_variables.yml``. Please see the `Install Guide`_
 for more details.

@@ -173,9 +173,9 @@ general estimates:
 * Virtual machines with SSD storage: ~ 45-60 minutes
 * Systems with traditional hard disks: ~ 90-120 minutes

-Once the playbooks have fully executed, it is possible to experiment with various
-settings changes in ``/etc/openstack_deploy/user_variables.yml`` and only
-run individual playbooks. For example, to run the playbook for the
+Once the playbooks have fully executed, it is possible to experiment with
+various settings changes in ``/etc/openstack_deploy/user_variables.yml`` and
+only run individual playbooks. For example, to run the playbook for the
 Keystone service, execute:

 .. code-block:: bash
@@ -247,10 +247,11 @@ AIOs are best run inside of some form of virtual machine or cloud guest.
 Quick AIO build on Rackspace Cloud
 ----------------------------------

-You can automate the AIO build process with a virtual machine from the Rackspace Cloud.
+You can automate the AIO build process with a virtual machine from the
+Rackspace Cloud.

-First, we will need a cloud-config file that will allow us to run the build as soon as the
-instance starts. Save this file as ``user_data.yml``:
+First, we will need a cloud-config file that will allow us to run the build as
+soon as the instance starts. Save this file as ``user_data.yml``:

 .. code-block:: yaml

@@ -51,7 +51,8 @@ On the deployment host, copy the Nuage user variables file from

    # cp /opt/nuage-openstack-ansible/etc/user_nuage_vars.yml /etc/openstack_deploy/

-Also modify the following parameters in this file as per your Nuage VCS environment:
+Also modify the following parameters in this file as per your Nuage VCS
+environment:

 #. Replace *VSD Enterprise Name* parameter with user desired name of VSD
    Enterprise:
@@ -56,8 +56,8 @@ parameters.
 PLUMgrid configurations
 ~~~~~~~~~~~~~~~~~~~~~~~

-On the deployment host, create a PLUMgrid user variables file using the sample in
-``/opt/plumgrid-ansible/etc/user_pg_vars.yml.example`` and copy it to
+On the deployment host, create a PLUMgrid user variables file using the sample
+in ``/opt/plumgrid-ansible/etc/user_pg_vars.yml.example`` and copy it to
 ``/etc/openstack_deploy/user_pg_vars.yml``. You must configure the
 following parameters.

@@ -119,8 +119,8 @@ to as ``gateway_devs`` in the configuration files.
 Replace ``*_IP_ADDRESS`` with the IP address of the ``br-mgmt`` container management
 bridge on each Gateway host.

-#. Add a ``gateway_hosts`` section to the end of the PLUMgrid ``user_pg_vars.yml``
-   file:
+#. Add a ``gateway_hosts`` section to the end of the PLUMgrid
+   ``user_pg_vars.yml`` file:

    .. note::

@@ -11,10 +11,10 @@ swift backend or some form of shared storage.
 Configuring default and additional stores
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

-OpenStack-Ansible provides two configurations for controlling where glance stores
-files: the default store and additional stores. glance stores images in file-based
-storage by default. Two additional stores, ``http`` and ``cinder`` (Block Storage),
-are also enabled by default.
+OpenStack-Ansible provides two configurations for controlling where glance
+stores files: the default store and additional stores. glance stores images in
+file-based storage by default. Two additional stores, ``http`` and ``cinder``
+(Block Storage), are also enabled by default.

 You can choose alternative default stores and alternative additional stores.
 For example, a deployer that uses Ceph may configure the following Ansible
@@ -165,8 +165,8 @@ Special considerations

 If the swift password or key contains a dollar sign (``$``), it must
 be escaped with an additional dollar sign (``$$``). For example, a password of
-``super$ecure`` would need to be entered as ``super$$ecure``. This is necessary
-due to the way `oslo.config formats strings`_.
+``super$ecure`` would need to be entered as ``super$$ecure``. This is
+necessary due to the way `oslo.config formats strings`_.

 .. _oslo.config formats strings: https://bugs.launchpad.net/oslo-incubator/+bug/1259729

@ -10,6 +10,12 @@ for Ansible. Start by getting those files into the correct places:
``/opt/openstack-ansible/etc/openstack_deploy`` directory to the
``/etc/openstack_deploy`` directory.

.. note::

As of Newton, the ``env.d`` directory has been moved from this source
directory to ``playbooks/inventory/``. See `Appendix H`_ for more
details on this change.

#. Change to the ``/etc/openstack_deploy`` directory.

#. Copy the ``openstack_user_config.yml.example`` file to
@ -40,7 +46,8 @@ OpenStack-Ansible's dynamic inventory generation has a concept called
`affinity`. This determines how many containers of a similar type are deployed
onto a single physical host.

Using `shared-infra_hosts` as an example, consider this ``openstack_user_config.yml``:
Using `shared-infra_hosts` as an example, consider this
``openstack_user_config.yml``:

.. code-block:: yaml

@ -123,6 +130,7 @@ fine-tune certain security configurations.
.. _openstack-ansible-security: http://docs.openstack.org/developer/openstack-ansible-security/
.. _Security Technical Implementation Guide (STIG): https://en.wikipedia.org/wiki/Security_Technical_Implementation_Guide
.. _Configuration: http://docs.openstack.org/developer/openstack-ansible-security/configuration.html
.. _Appendix H: ../install-guide/app-custom-layouts.html

--------------
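As an illustrative sketch of the affinity concept described in the hunk above (host name and IP are placeholders, and the container-type key under ``affinity`` is an assumption based on the inventory's container group names, not taken from this commit):

```yaml
# Hypothetical openstack_user_config.yml fragment. "infra1" and the IP
# are placeholders; "galera_container" assumes the documented container
# type name for the Galera service.
shared-infra_hosts:
  infra1:
    # Deploy a single Galera container on this host instead of relying
    # on the default affinity of one container per service per host.
    affinity:
      galera_container: 1
    ip: 172.29.236.101
```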
@ -26,8 +26,8 @@ Overriding .conf files
~~~~~~~~~~~~~~~~~~~~~~

The most common use-case for implementing overrides are for the
``<service>.conf`` files (for example, ``nova.conf``). These files use a standard
``INI`` file format.
``<service>.conf`` files (for example, ``nova.conf``). These files use a
standard ``INI`` file format.

For example, if you add the following parameters to ``nova.conf``:
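A minimal sketch of the override mechanism this hunk refers to, assuming the ``<role>_<file>_overrides`` variable naming convention used by OpenStack-Ansible (the option and value shown are examples only, not part of this commit):

```yaml
# Hypothetical user_variables.yml fragment. Sections and options map
# directly onto the INI structure of the target nova.conf file.
nova_nova_conf_overrides:
  DEFAULT:
    remove_unused_original_minimum_age_seconds: 43200
```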
@ -31,8 +31,8 @@ many services as possible.
Self-signed certificates
~~~~~~~~~~~~~~~~~~~~~~~~

Self-signed certificates ensure you are able to start quickly and you are able to
encrypt data in transit. However, they do not provide a high level of trust
Self-signed certificates ensure you are able to start quickly and you are able
to encrypt data in transit. However, they do not provide a high level of trust
for highly secure environments. The use of self-signed certificates is
currently the default in OpenStack-Ansible. When self-signed certificates are
being used, certificate verification must be disabled using the following
@ -82,9 +82,9 @@ To force a self-signed certificate to regenerate, you can pass the variable to

To force a self-signed certificate to regenerate with every playbook run,
set the appropriate regeneration option to ``true``. For example, if
you have already run the ``os-horizon`` playbook, but you want to regenerate the
self-signed certificate, set the ``horizon_ssl_self_signed_regen`` variable to
``true`` in ``/etc/openstack_deploy/user_variables.yml``:
you have already run the ``os-horizon`` playbook, but you want to regenerate
the self-signed certificate, set the ``horizon_ssl_self_signed_regen`` variable
to ``true`` in ``/etc/openstack_deploy/user_variables.yml``:

.. code-block:: yaml
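The ``user_variables.yml`` entry that the paragraph above describes would be:

```yaml
# /etc/openstack_deploy/user_variables.yml: regenerate the horizon
# self-signed certificate on every playbook run.
horizon_ssl_self_signed_regen: true
```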
@ -41,9 +41,10 @@ Install additional software packages and configure NTP.
Configuring the network
~~~~~~~~~~~~~~~~~~~~~~~

Ansible deployments fail if the deployment server is unable to SSH to the containers.
Configure the deployment host to be on the same network designated for container management.
This configuration reduces the rate of failure due to connectivity issues.
Ansible deployments fail if the deployment server is unable to SSH to the
containers. Configure the deployment host to be on the same network designated
for container management. This configuration reduces the rate of failure due
to connectivity issues.

The following network information is used as an example:
@ -89,8 +89,9 @@ Production environment
~~~~~~~~~~~~~~~~~~~~~~

The layout for a production environment involves seven target
hosts in total: three control plane and infrastructure hosts, two compute hosts,
one storage host and one log aggregation host. It also has the following features:
hosts in total: three control plane and infrastructure hosts, two compute
hosts, one storage host and one log aggregation host. It also has the
following features:

- Bonded NICs
- NFS/Ceph-backed storage for nova, glance, and cinder
@ -7,7 +7,8 @@ Network architecture
====================

For a production environment, some components are mandatory, such as bridges
described below. We recommend other components such as a bonded network interface.
described below. We recommend other components such as a bonded network
interface.

.. important::

@ -23,11 +24,11 @@ particular environment.
Bonded network interfaces
~~~~~~~~~~~~~~~~~~~~~~~~~

The reference architecture for a production environment includes bonded network
interfaces, which use multiple physical network interfaces for better redundancy
and throughput. Avoid using two ports on the same multi-port network card for the
same bonded interface since a network card failure affects both physical network
interfaces used by the bond.
The reference architecture for a production environment includes bonded
network interfaces, which use multiple physical network interfaces for better
redundancy and throughput. Avoid using two ports on the same multi-port
network card for the same bonded interface since a network card failure
affects both physical network interfaces used by the bond.

The ``bond0`` interface carries traffic from the containers
running your OpenStack infrastructure. Configure a static IP address on the
@ -64,15 +64,15 @@ Logging hosts
logs on the logging hosts.

Hosts that provide Block Storage (cinder) volumes must have logical volume
manager (LVM) support. Ensure those hosts have a ``cinder-volumes`` volume group
that OpenStack-Ansible can configure for use with cinder.
manager (LVM) support. Ensure those hosts have a ``cinder-volumes`` volume
group that OpenStack-Ansible can configure for use with cinder.

Each control plane host runs services inside LXC containers. The container
filesystems are deployed by default onto the root filesystem of each control
plane hosts. You have the option to deploy those container filesystems
into logical volumes by creating a volume group called ``lxc``. OpenStack-Ansible
creates a 5GB logical volume for the filesystem of each container running
on the host.
into logical volumes by creating a volume group called ``lxc``.
OpenStack-Ansible creates a 5GB logical volume for the filesystem of each
container running on the host.

Network requirements
~~~~~~~~~~~~~~~~~~~~
@ -83,8 +83,9 @@ Network requirements
network interface. This works for small environments, but it can cause
problems when your environment grows.

For the best performance, reliability and scalability in a production environment,
deployers should consider a network configuration that contains the following features:
For the best performance, reliability and scalability in a production
environment, deployers should consider a network configuration that contains
the following features:

* Bonded network interfaces: Increases performance and/or reliability
(dependent on bonding architecture).
@ -45,10 +45,10 @@ and how to configure SSL certificates, see
Host security hardening
~~~~~~~~~~~~~~~~~~~~~~~

Security hardening is applied by default to OpenStack infrastructure and compute
hosts using the ``openstack-ansible-security`` role. The purpose of the role is to
apply as many security configurations as possible without disrupting the
operation of an OpenStack deployment.
Security hardening is applied by default to OpenStack infrastructure and
compute hosts using the ``openstack-ansible-security`` role. The purpose of
the role is to apply as many security configurations as possible without
disrupting the operation of an OpenStack deployment.

Refer to the documentation on :ref:`security_hardening` for more information
on the role and how to enable it in OpenStack-Ansible.
@ -102,19 +102,19 @@ The resources within an OpenStack environment can be divided into two groups:
* MariaDB
* RabbitMQ

Configure firewalls to limit network access to all services that users must access
directly.
Configure firewalls to limit network access to all services that users must
access directly.

Other services, such as MariaDB and RabbitMQ, must be segmented away from
direct user access. Configure a firewall to only allow connectivity to
these services within the OpenStack environment itself. This
reduces an attacker's ability to query or manipulate data in OpenStack's
critical database and queuing services, especially if one of these services has
a known vulnerability.
critical database and queuing services, especially if one of these services
has a known vulnerability.

For more details on recommended network policies for OpenStack clouds, refer to
the `API endpoint process isolation and policy`_ section from the `OpenStack
Security Guide`_
For more details on recommended network policies for OpenStack clouds, refer
to the `API endpoint process isolation and policy`_ section from the
`OpenStack Security Guide`_

.. _API endpoint process isolation and policy: http://docs.openstack.org/security-guide/api-endpoints/api-endpoint-configuration-recommendations.html#network-policy
.. _OpenStack Security Guide: http://docs.openstack.org/security-guide
@ -8,10 +8,8 @@ Overview
~~~~~~~~

This example uses the following parameters to configure networking on a
single target host. See `Figure 3.2, "Target hosts for infrastructure,
networking, compute, and storage
services" <targethosts-networkexample.html#fig_hosts-target-network-containerexample>`_
for a visual representation of these parameters in the architecture.
single target host. See `Figure 3.2`_ for a visual representation of these
parameters in the architecture.

- VLANs:

@ -47,6 +45,7 @@ for a visual representation of these parameters in the architecture.

- Storage: 172.29.244.11

.. _Figure 3.2: targethosts-networkexample.html#fig_hosts-target-network-containerexample

**Figure 3.2. Target host for infrastructure, networking, compute, and
storage services**
@ -9,8 +9,8 @@ Overview

This example allows you to use your own parameters for the deployment.

The following is a table of the bridges that are be configured on hosts, if you followed the
previously proposed design.
The following is a table of the bridges that are be configured on hosts, if
you followed the previously proposed design.

+-------------+-----------------------+-------------------------------------+
| Bridge name | Best configured on    | With a static IP                    |
@ -188,8 +188,10 @@ Example for 3 controller nodes and 2 compute nodes
- Host management gateway: 10.240.0.1
- DNS servers: 69.20.0.164 69.20.0.196
- Container management: 172.29.236.11 - 172.29.236.13
- Tunnel: no IP (because IP exist in the containers, when the components aren't deployed directly on metal)
- Storage: no IP (because IP exist in the containers, when the components aren't deployed directly on metal)
- Tunnel: no IP (because IP exist in the containers, when the components
aren't deployed directly on metal)
- Storage: no IP (because IP exist in the containers, when the components
aren't deployed directly on metal)

- Addresses for the compute nodes:
@ -79,10 +79,11 @@ practices, refer to `GitHub's documentation on generating SSH keys`_.
Configuring LVM
~~~~~~~~~~~~~~~

`Logical Volume Manager (LVM)`_ allows a single device to be split into multiple
logical volumes which appear as a physical storage device to the operating
system. The Block Storage (cinder) service, as well as the LXC containers that
run the OpenStack infrastructure, can optionally use LVM for their data storage.
`Logical Volume Manager (LVM)`_ allows a single device to be split into
multiple logical volumes which appear as a physical storage device to the
operating system. The Block Storage (cinder) service, as well as the LXC
containers that run the OpenStack infrastructure, can optionally use LVM for
their data storage.

.. note::
@ -135,9 +135,9 @@ Omit a service or component from the deployment
To omit a component from a deployment, several options exist.

- You could remove the ``physical_skel`` link between the container group and
the host group. The simplest way to do this is to simply copy the relevant file
to the ``/etc/openstack_deploy/env.d/`` directory, and set the following
information:
the host group. The simplest way to do this is to simply copy the relevant
file to the ``/etc/openstack_deploy/env.d/`` directory, and set the
following information:

.. code-block:: yaml
@ -50,7 +50,8 @@ On the deployment host, copy the Nuage user variables file from

# cp /opt/nuage-openstack-ansible/etc/user_nuage_vars.yml /etc/openstack_deploy/

Also modify the following parameters in this file as per your Nuage VCS environment:
Also modify the following parameters in this file as per your Nuage VCS
environment:

#. Replace *VSD Enterprise Name* parameter with user desired name of VSD
Enterprise:
@ -56,8 +56,8 @@ parameters.
PLUMgrid configurations
~~~~~~~~~~~~~~~~~~~~~~~

On the deployment host, create a PLUMgrid user variables file using the sample in
``/opt/plumgrid-ansible/etc/user_pg_vars.yml.example`` and copy it to
On the deployment host, create a PLUMgrid user variables file using the sample
in ``/opt/plumgrid-ansible/etc/user_pg_vars.yml.example`` and copy it to
``/etc/openstack_deploy/user_pg_vars.yml``. You must configure the
following parameters.

@ -116,11 +116,11 @@ to as ``gateway_devs`` in the configuration files.
gateway2:
ip: GW02_IP_ADDRESS

Replace ``*_IP_ADDRESS`` with the IP address of the ``br-mgmt`` container management
bridge on each Gateway host.
Replace ``*_IP_ADDRESS`` with the IP address of the ``br-mgmt`` container
management bridge on each Gateway host.

#. Add a ``gateway_hosts`` section to the end of the PLUMgrid ``user_pg_vars.yml``
file:
#. Add a ``gateway_hosts`` section to the end of the PLUMgrid
``user_pg_vars.yml`` file:

.. note::
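Based on the context lines in the hunk above, the ``gateway_hosts`` section being described might look like the following sketch (only ``gateway2``/``GW02_IP_ADDRESS`` appear in this commit; the other entry and the indentation are assumptions):

```yaml
# Hypothetical user_pg_vars.yml fragment; *_IP_ADDRESS values are the
# placeholders used in the surrounding text, to be replaced with the
# br-mgmt address of each Gateway host.
gateway_hosts:
  gateway1:
    ip: GW01_IP_ADDRESS
  gateway2:
    ip: GW02_IP_ADDRESS
```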
@ -7,7 +7,8 @@ The Telemetry (ceilometer) alarming services perform the following functions:

- Creates an API endpoint for controlling alarms.

- Allows you to set alarms based on threshold evaluation for a collection of samples.
- Allows you to set alarms based on threshold evaluation for a collection of
samples.

@ -7,9 +7,11 @@ The Telemetry module (ceilometer) performs the following functions:

- Efficiently polls metering data related to OpenStack services.

- Collects event and metering data by monitoring notifications sent from services.
- Collects event and metering data by monitoring notifications sent from
services.

- Publishes collected data to various targets including data stores and message queues.
- Publishes collected data to various targets including data stores and
message queues.

.. note::

@ -32,8 +34,8 @@ Setting up a MongoDB database for ceilometer

# apt-get install mongodb-server mongodb-clients python-pymongo

2. Edit the ``/etc/mongodb.conf`` file and change the ``bind_i`` to the management
interface:
2. Edit the ``/etc/mongodb.conf`` file and change the ``bind_i`` to the
management interface:

.. code-block:: ini
@ -57,11 +57,11 @@ Ceph configuration:
auth_service_required = cephx
auth_client_required = cephx

The use of the ``ceph_conf_file`` variable is optional. By default, OpenStack-Ansible
obtains a copy of ``ceph.conf`` from one of your Ceph monitors. This
transfer of ``ceph.conf`` requires the OpenStack-Ansible deployment host public key
to be deployed to all of the Ceph monitors. More details are available
here: `Deploying SSH Keys`_.
The use of the ``ceph_conf_file`` variable is optional. By default,
OpenStack-Ansible obtains a copy of ``ceph.conf`` from one of your Ceph
monitors. This transfer of ``ceph.conf`` requires the OpenStack-Ansible
deployment host public key to be deployed to all of the Ceph monitors. More
details are available here: `Deploying SSH Keys`_.

The following minimal example configuration sets nova and glance
to use ceph pools: ``ephemeral-vms`` and ``images`` respectively.
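The "minimal example configuration" mentioned above is not shown in this hunk; a plausible sketch, assuming variable names from the OpenStack-Ansible Ceph documentation (verify them against the deployed release before use), could be:

```yaml
# Assumed user_variables.yml fragment for Ceph-backed nova and glance;
# the pool names come from the surrounding text, the variable names are
# assumptions, not part of this commit.
nova_libvirt_images_rbd_pool: ephemeral-vms
glance_default_store: rbd
glance_rbd_store_pool: images
```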
@ -3,8 +3,8 @@
Configuring the Block (cinder) storage service (optional)
=========================================================

By default, the Block (cinder) storage service installs on the host itself using
the LVM backend.
By default, the Block (cinder) storage service installs on the host itself
using the LVM backend.

.. note::
@ -181,7 +181,8 @@ By default, no horizon configuration is set.
Configuring cinder to use LVM
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

#. List the ``container_vars`` that contain the storage options for the target host.
#. List the ``container_vars`` that contain the storage options for the target
host.

.. note::
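The ``container_vars`` for an LVM backend referred to in the step above might look like this sketch (host name, IP, and backend name are placeholders; the key layout assumes the ``cinder_backends`` structure from the OpenStack-Ansible install guide):

```yaml
# Hypothetical openstack_user_config.yml fragment for a storage host
# using the LVM backend with the cinder-volumes volume group.
storage_hosts:
  storage1:
    ip: 172.29.236.100
    container_vars:
      cinder_backends:
        lvm:
          volume_backend_name: LVM_iSCSI
          volume_driver: cinder.volume.drivers.lvm.LVMVolumeDriver
          volume_group: cinder-volumes
```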
@ -1,9 +1,9 @@
`Home <index.html>`__ OpenStack-Ansible Installation Guide

Configuring Active Directory Federation Services (ADFS) 3.0 as an identity provider
===================================================================================
Configuring ADFS 3.0 as an identity provider
============================================

To install ADFS:
To install Active Directory Federation Services (ADFS):

* `Prerequisites for ADFS from Microsoft Technet <https://technet.microsoft.com/library/bf7f9cf4-6170-40e8-83dd-e636cb4f9ecb>`_
* `ADFS installation procedure from Microsoft Technet <https://technet.microsoft.com/en-us/library/dn303423>`_
@ -35,9 +35,9 @@ Configuring ADFS
References
~~~~~~~~~~

* `http://blogs.technet.com/b/rmilne/archive/2014/04/28/how-to-install-adfs-2012-r2-for-office-365.aspx`_
* `http://blog.kloud.com.au/2013/08/14/powershell-deployment-of-web-application-proxy-and-adfs-in-under-10-minutes/`_
* `https://ethernuno.wordpress.com/2014/04/20/install-adds-on-windows-server-2012-r2-with-powershell/`_
* http://blogs.technet.com/b/rmilne/archive/2014/04/28/how-to-install-adfs-2012-r2-for-office-365.aspx
* http://blog.kloud.com.au/2013/08/14/powershell-deployment-of-web-application-proxy-and-adfs-in-under-10-minutes/
* https://ethernuno.wordpress.com/2014/04/20/install-adds-on-windows-server-2012-r2-with-powershell/

--------------
@ -47,13 +47,14 @@ The following list is a reference of allowed settings:
this IdP.

* ``regen_cert`` by default is set to ``False``. When set to ``True``, the
next Ansible run replaces the existing signing certificate with a new one. This
setting is added as a convenience mechanism to renew a certificate when it
is close to its expiration date.
next Ansible run replaces the existing signing certificate with a new one.
This setting is added as a convenience mechanism to renew a certificate when
it is close to its expiration date.

* ``idp_entity_id`` is the entity ID. The service providers
use this as a unique identifier for each IdP. ``<keystone-public-endpoint>/OS-FEDERATION/saml2/idp``
is the value we recommend for this setting.
use this as a unique identifier for each IdP.
``<keystone-public-endpoint>/OS-FEDERATION/saml2/idp`` is the value we
recommend for this setting.

* ``idp_sso_endpoint`` is the single sign-on endpoint for this IdP.
``<keystone-public-endpoint>/OS-FEDERATION/saml2/sso>`` is the value
@ -72,7 +73,8 @@ The following list is a reference of allowed settings:
* ``organization_name``, ``organization_display_name``, ``organization_url``,
``contact_company``, ``contact_name``, ``contact_surname``,
``contact_email``, ``contact_telephone`` and ``contact_type`` are
settings that describe the identity provider. These settings are all optional.
settings that describe the identity provider. These settings are all
optional.

--------------
@ -54,9 +54,9 @@ containers:

References
----------
* `http://docs.openstack.org/developer/keystone/configure_federation.html`_
* `http://docs.openstack.org/developer/keystone/extensions/shibboleth.html`_
* `https://wiki.shibboleth.net/confluence/display/SHIB2/NativeSPConfiguration`_
* http://docs.openstack.org/developer/keystone/configure_federation.html
* http://docs.openstack.org/developer/keystone/extensions/shibboleth.html
* https://wiki.shibboleth.net/confluence/display/SHIB2/NativeSPConfiguration

--------------
@ -104,8 +104,9 @@ The following settings must be set to configure a service provider (SP):
#. ``metadata_reload`` is the number of seconds between metadata
refresh polls.

#. ``federated_identities`` is a mapping list of domain, project, group, and users.
See `Configure Identity Service (keystone) Domain-Project-Group-Role mappings <configure-federation-mapping.html>`_
#. ``federated_identities`` is a mapping list of domain, project, group, and
users. See
`Configure Identity Service (keystone) Domain-Project-Group-Role mappings`_
for more information.

#. ``protocols`` is a list of protocols supported for the IdP and the set
@ -113,7 +114,11 @@ The following settings must be set to configure a service provider (SP):
with the name ``saml2``.

#. ``mapping`` is the local to remote mapping configuration for federated
users. For more information, see `Configure Identity Service (keystone) Domain-Project-Group-Role mappings. <configure-federation-mapping.html>`_
users. See
`Configure Identity Service (keystone) Domain-Project-Group-Role mappings`_
for more information.

.. _Configure Identity Service (keystone) Domain-Project-Group-Role mappings: configure-federation-mapping.html

--------------
@ -48,8 +48,8 @@ is "cloud2".
Keystone service provider (SP) configuration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The configuration for keystone SP needs to define the remote-to-local user mappings.
The following is the complete configuration:
The configuration for keystone SP needs to define the remote-to-local user
mappings. The following is the complete configuration:

.. code::
@ -142,9 +142,9 @@ user based on the attributes exposed by the IdP in the SAML2 assertion. The
use case for this scenario calls for mapping users in "Group A" and "Group B",
but the group or groups a user belongs to are not exported in the SAML2
assertion. To make the example work, the groups A and B in the use case are
projects. Export projects A and B in the assertion under the ``openstack_project`` attribute.
The two rules above select the corresponding project using the ``any_one_of``
selector.
projects. Export projects A and B in the assertion under the
``openstack_project`` attribute. The two rules above select the corresponding
project using the ``any_one_of`` selector.

The ``local`` part of the mapping rule specifies how keystone represents
the remote user in the local SP cloud. Configuring the two federated identities
@ -157,10 +157,10 @@ role.
Keystone creates a ephemeral user in the specified group as
you cannot specify user names.

The IdP exports the final setting of the configuration defines the SAML2 ``attributes``.
For a keystone IdP, these are the five attributes
shown above. Configure the attributes above into the Shibboleth service. This
ensures they are available to use in the mappings.
The IdP exports the final setting of the configuration defines the SAML2
``attributes``. For a keystone IdP, these are the five attributes shown above.
Configure the attributes above into the Shibboleth service. This ensures they
are available to use in the mappings.

Reviewing or modifying the configuration with the OpenStack client
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@ -48,8 +48,8 @@ The following steps above involve manually sending API requests.
The infrastructure for the command line utilities that performs these steps
for the user does not exist.

To obtain access to a SP cloud, OpenStack-Ansible provides a script that wraps the
above steps. The script is called ``federated-login.sh`` and is
To obtain access to a SP cloud, OpenStack-Ansible provides a script that wraps
the above steps. The script is called ``federated-login.sh`` and is
used as follows:

.. code::
@ -11,10 +11,10 @@ swift backend or some form of shared storage.
Configuring default and additional stores
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

OpenStack-Ansible provides two configurations for controlling where glance stores
files: the default store and additional stores. glance stores images in file-based
storage by default. Two additional stores, ``http`` and ``cinder`` (Block Storage),
are also enabled by default.
OpenStack-Ansible provides two configurations for controlling where glance
stores files: the default store and additional stores. glance stores images in
file-based storage by default. Two additional stores, ``http`` and ``cinder``
(Block Storage), are also enabled by default.

You can choose alternative default stores and alternative additional stores.
For example, a deployer that uses Ceph may configure the following Ansible
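The Ansible variables mentioned at the end of the paragraph above are cut off by the hunk boundary; a hedged sketch of what such a configuration might look like (variable names assume the glance role's documented defaults and are not taken from this commit):

```yaml
# Assumed user_variables.yml fragment: make rbd the default store while
# keeping the two default additional stores enabled.
glance_default_store: rbd
glance_additional_stores:
  - http
  - cinder
```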
@ -165,8 +165,8 @@ Special considerations

If the swift password or key contains a dollar sign (``$``), it must
be escaped with an additional dollar sign (``$$``). For example, a password of
``super$ecure`` would need to be entered as ``super$$ecure``. This is necessary
due to the way `oslo.config formats strings`_.
``super$ecure`` would need to be entered as ``super$$ecure``. This is
necessary due to the way `oslo.config formats strings`_.

.. _oslo.config formats strings: https://bugs.launchpad.net/oslo-incubator/+bug/1259729
@ -3,13 +3,14 @@
Configuring the Dashboard (horizon) (optional)
==============================================

Customize your horizon deployment in ``/etc/openstack_deploy/user_variables.yml``.
Customize your horizon deployment in
``/etc/openstack_deploy/user_variables.yml``.

Securing horizon communication with SSL certificates
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The OpenStack-Ansible project provides the ability to secure Dashboard (horizon)
communications with self-signed or user-provided SSL certificates.
The OpenStack-Ansible project provides the ability to secure Dashboard
(horizon) communications with self-signed or user-provided SSL certificates.

Refer to `Securing services with SSL certificates`_ for available configuration
options.
@ -20,8 +21,8 @@ Configuring a horizon customization module
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Openstack-Ansible supports deployment of a horizon `customization module`_.
After building your customization module, configure the ``horizon_customization_module`` variable
with a path to your module.
After building your customization module, configure the
``horizon_customization_module`` variable with a path to your module.

.. code-block:: yaml
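A sketch of the variable described above (the file path is a placeholder for your own module, not part of this commit):

```yaml
# user_variables.yml: point the horizon role at a locally built
# customization module; the path shown is hypothetical.
horizon_customization_module: /etc/openstack_deploy/horizon_overrides.py
```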
@ -46,7 +46,8 @@ OpenStack-Ansible's dynamic inventory generation has a concept called
`affinity`. This determines how many containers of a similar type are deployed
onto a single physical host.

Using `shared-infra_hosts` as an example, consider this ``openstack_user_config.yml``:
Using `shared-infra_hosts` as an example, consider this
``openstack_user_config.yml``:

.. code-block:: yaml
|
@ -5,14 +5,14 @@ Configuring the Bare Metal (ironic) service (optional)
|
||||
|
||||
.. note::
|
||||
|
||||
This feature is experimental at this time and it has not been fully production
|
||||
tested yet. These implementation instructions assume that ironic is being deployed
|
||||
as the sole hypervisor for the region.
|
||||
This feature is experimental at this time and it has not been fully
|
||||
production tested yet. These implementation instructions assume that
|
||||
ironic is being deployed as the sole hypervisor for the region.
|
||||
|
||||
Ironic is an OpenStack project which provisions bare metal (as opposed to virtual)
|
||||
machines by leveraging common technologies such as PXE boot and IPMI to cover a wide
|
||||
range of hardware, while supporting pluggable drivers to allow vendor-specific
|
||||
functionality to be added.
|
||||
Ironic is an OpenStack project which provisions bare metal (as opposed to
|
||||
virtual) machines by leveraging common technologies such as PXE boot and IPMI
|
||||
to cover a wide range of hardware, while supporting pluggable drivers to allow
|
||||
vendor-specific functionality to be added.
|
||||
|
||||
OpenStack's ironic project makes physical servers as easy to provision as
|
||||
virtual machines in a cloud.
|
||||
@ -140,8 +140,8 @@ Creating an ironic flavor

After successfully deploying the ironic node on subsequent boots, the instance
boots from your local disk as first preference. This speeds up the deployed
node's boot time. Alternatively, if this is not set, the ironic node PXE boots first and
allows for operator-initiated image updates and other operations.
node's boot time. Alternatively, if this is not set, the ironic node PXE boots
first and allows for operator-initiated image updates and other operations.

.. note::

@ -151,7 +151,8 @@ allows for operator-initiated image updates and other operations.
Enroll ironic nodes
-------------------

#. From the utility container, enroll a new baremetal node by executing the following:
#. From the utility container, enroll a new baremetal node by executing the
following:

.. code-block:: bash

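The bash block above (collapsed by the viewer) is an ``ironic node-create``
call; a hedged sketch, where the driver choice and all IPMI details are
illustrative placeholders rather than content from this commit:

.. code-block:: console

   # ironic node-create -d agent_ipmitool \
       -i ipmi_address=10.240.0.20 \
       -i ipmi_username=admin \
       -i ipmi_password=examplepassword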
@ -3,7 +3,8 @@
Configuring the Identity service (keystone) (optional)
======================================================

Customize your keystone deployment in ``/etc/openstack_deploy/user_variables.yml``.
Customize your keystone deployment in
``/etc/openstack_deploy/user_variables.yml``.


Securing keystone communication with SSL certificates
@ -94,31 +94,16 @@ Deploying LBaaS v2
Ensure that ``neutron_plugin_base`` includes all of the plugins that you
want to deploy with neutron in addition to the LBaaS plugin.

#. Run the neutron playbook to deploy the LBaaS v2 agent:
#. Run the neutron and horizon playbooks to deploy the LBaaS v2 agent and
enable the LBaaS v2 panels in horizon:

.. code-block:: console

# cd /opt/openstack-ansible/playbooks
# openstack-ansible os-neutron-install.yml

Enabling Horizon panels for LBaaS v2
------------------------------------

#. Set the ``horizon_enable_neutron_lbaas`` variable to ``True`` in
``/etc/openstack_deploy/user_variables.yml``:

.. code-block:: yaml

horizon_enable_neutron_lbaas: True

#. Run the Horizon playbook to activate the panel:

.. code-block:: console

# cd /opt/openstack-ansible/playbooks
# openstack-ansible os-horizon-install.yml

.. _lbaas-special-notes
.. _lbaas-special-notes:

Special notes about LBaaS
-------------------------
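For context, a ``neutron_plugin_base`` list with the LBaaS v2 plugin enabled
usually looks like the following in ``user_variables.yml``; the exact plugin
list is an assumption here, not part of this diff:

.. code-block:: yaml

   neutron_plugin_base:
     - router
     - metering
     - neutron_lbaas.services.loadbalancer.plugin.LoadBalancerPluginv2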
@ -170,8 +155,8 @@ The following procedure describes how to modify the
- { name: "ah4", pattern: "CONFIG_INET_AH=", group: "network_hosts" }
- { name: "ipcomp", pattern: "CONFIG_INET_IPCOMP=", group: "network_hosts" }

#. Execute the openstack hosts setup in order to load the kernel modules at boot
and runtime in the network hosts
#. Execute the openstack hosts setup in order to load the kernel modules at
boot and runtime in the network hosts

.. code-block:: shell-session

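The shell-session block above (collapsed by the viewer) runs the hosts setup
play; the playbook name is assumed from the standard OpenStack-Ansible layout:

.. code-block:: console

   # cd /opt/openstack-ansible/playbooks
   # openstack-ansible openstack-hosts-setup.yml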
@ -52,8 +52,8 @@ You can disable discard by setting ``nova_libvirt_hw_disk_discard`` to
string to disable ``network=writeback``.

The following minimal example configuration sets nova to use the
``ephemeral-vms`` Ceph pool. The following example uses cephx authentication, and
requires an existing ``cinder`` account for the ``ephemeral-vms`` pool:
``ephemeral-vms`` Ceph pool. The following example uses cephx authentication,
and requires an existing ``cinder`` account for the ``ephemeral-vms`` pool:

.. code-block:: console

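The minimal example above (collapsed by the viewer) amounts to two
``user_variables.yml`` entries; the variable names below follow the nova and
cinder role conventions and should be verified against the deployed release:

.. code-block:: yaml

   nova_libvirt_images_rbd_pool: ephemeral-vms
   cinder_ceph_client: cinder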
@ -70,7 +70,8 @@ If you have a different Ceph username for the pool, use it as:

cinder_ceph_client: <ceph-username>

* The `Ceph documentation for OpenStack`_ has additional information about these settings.
* The `Ceph documentation for OpenStack`_ has additional information about
these settings.
* `OpenStack-Ansible and Ceph Working Example`_


@ -26,8 +26,8 @@ Overriding .conf files
~~~~~~~~~~~~~~~~~~~~~~

The most common use-case for implementing overrides are for the
``<service>.conf`` files (for example, ``nova.conf``). These files use a standard
``INI`` file format.
``<service>.conf`` files (for example, ``nova.conf``). These files use a
standard ``INI`` file format.

For example, if you add the following parameters to ``nova.conf``:

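The override mechanism described above maps ``INI`` sections to nested YAML
keys in a ``<service>_<file>_overrides`` variable; a sketch for ``nova.conf``,
with an illustrative option and value:

.. code-block:: yaml

   nova_nova_conf_overrides:
     DEFAULT:
       remove_unused_original_minimum_age_seconds: 43200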
@ -7,7 +7,8 @@ RabbitMQ provides the messaging broker for various OpenStack services. The
OpenStack-Ansible project configures a plaintext listener on port 5672 and
a SSL/TLS encrypted listener on port 5671.

Customize your RabbitMQ deployment in ``/etc/openstack_deploy/user_variables.yml``.
Customize your RabbitMQ deployment in
``/etc/openstack_deploy/user_variables.yml``.

Add a TLS encrypted listener to RabbitMQ
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
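A user-provided certificate for the TLS listener is typically configured with
variables along these lines; the names follow the rabbitmq_server role
conventions and the paths are placeholders:

.. code-block:: yaml

   rabbitmq_user_ssl_cert: /etc/openstack_deploy/ssl/rabbitmq.crt
   rabbitmq_user_ssl_key: /etc/openstack_deploy/ssl/rabbitmq.key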
@ -31,8 +31,8 @@ many services as possible.
Self-signed certificates
~~~~~~~~~~~~~~~~~~~~~~~~

Self-signed certificates ensure you are able to start quickly and you are able to
encrypt data in transit. However, they do not provide a high level of trust
Self-signed certificates ensure you are able to start quickly and you are able
to encrypt data in transit. However, they do not provide a high level of trust
for highly secure environments. The use of self-signed certificates is
currently the default in OpenStack-Ansible. When self-signed certificates are
being used, certificate verification must be disabled using the following
@ -82,9 +82,9 @@ To force a self-signed certificate to regenerate, you can pass the variable to

To force a self-signed certificate to regenerate with every playbook run,
set the appropriate regeneration option to ``true``. For example, if
you have already run the ``os-horizon`` playbook, but you want to regenerate the
self-signed certificate, set the ``horizon_ssl_self_signed_regen`` variable to
``true`` in ``/etc/openstack_deploy/user_variables.yml``:
you have already run the ``os-horizon`` playbook, but you want to regenerate
the self-signed certificate, set the ``horizon_ssl_self_signed_regen`` variable
to ``true`` in ``/etc/openstack_deploy/user_variables.yml``:

.. code-block:: yaml

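The YAML block above (collapsed by the viewer) is the single line the prose
describes:

.. code-block:: yaml

   horizon_ssl_self_signed_regen: true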
@ -41,9 +41,10 @@ Install additional software packages and configure NTP.
Configuring the network
~~~~~~~~~~~~~~~~~~~~~~~

Ansible deployments fail if the deployment server is unable to SSH to the containers.
Configure the deployment host to be on the same network designated for container management.
This configuration reduces the rate of failure due to connectivity issues.
Ansible deployments fail if the deployment server is unable to SSH to the
containers. Configure the deployment host to be on the same network designated
for container management. This configuration reduces the rate of failure due
to connectivity issues.

The following network information is used as an example:

@ -64,15 +64,15 @@ Logging hosts


Hosts that provide Block Storage (cinder) volumes must have logical volume
manager (LVM) support. Ensure those hosts have a ``cinder-volumes`` volume group
that OpenStack-Ansible can configure for use with cinder.
manager (LVM) support. Ensure those hosts have a ``cinder-volumes`` volume
group that OpenStack-Ansible can configure for use with cinder.

Each control plane host runs services inside LXC containers. The container
filesystems are deployed by default onto the root filesystem of each control
plane hosts. You have the option to deploy those container filesystems
into logical volumes by creating a volume group called ``lxc``. OpenStack-Ansible
creates a 5GB logical volume for the filesystem of each container running
on the host.
into logical volumes by creating a volume group called ``lxc``.
OpenStack-Ansible creates a 5GB logical volume for the filesystem of each
container running on the host.

Network requirements
~~~~~~~~~~~~~~~~~~~~

@ -47,8 +47,8 @@ Host security hardening
~~~~~~~~~~~~~~~~~~~~~~~

Deployers can apply security hardening to OpenStack infrastructure and compute
hosts using the ``openstack-ansible-security`` role. The purpose of the role is to
apply as many security configurations as possible without disrupting the
hosts using the ``openstack-ansible-security`` role. The purpose of the role
is to apply as many security configurations as possible without disrupting the
operation of an OpenStack deployment.

Refer to the documentation on :ref:`security_hardening` for more information
@ -103,8 +103,8 @@ The resources within an OpenStack environment can be divided into two groups:
* MariaDB
* RabbitMQ

To manage instances, you are able to access certain public API endpoints, such as
the Nova or Neutron API. Configure firewalls to limit network access to
To manage instances, you are able to access certain public API endpoints, such
as the Nova or Neutron API. Configure firewalls to limit network access to
these services.

Other services, such as MariaDB and RabbitMQ, must be segmented away from
@ -8,10 +8,8 @@ Overview
~~~~~~~~

This example uses the following parameters to configure networking on a
single target host. See `Figure 3.2, "Target hosts for infrastructure,
networking, compute, and storage
services" <targethosts-networkexample.html#fig_hosts-target-network-containerexample>`_
for a visual representation of these parameters in the architecture.
single target host. See `Figure 3.2`_ for a visual representation of these
parameters in the architecture.

- VLANs:

@ -47,6 +45,7 @@ for a visual representation of these parameters in the architecture.

- Storage: 172.29.244.11

.. _Figure 3.2: targethosts-networkexample.html#fig_hosts-target-network-containerexample

**Figure 3.2. Target host for infrastructure, networking, compute, and
storage services**
@ -9,8 +9,8 @@ Overview

This example allows you to use your own parameters for the deployment.

The following is a table of the bridges that are be configured on hosts, if you followed the
previously proposed design.
The following is a table of the bridges that are configured on hosts, if
you followed the previously proposed design.

+-------------+-----------------------+-------------------------------------+
| Bridge name | Best configured on | With a static IP |
@ -188,8 +188,10 @@ Example for 3 controller nodes and 2 compute nodes
- Host management gateway: 10.240.0.1
- DNS servers: 69.20.0.164 69.20.0.196
- Container management: 172.29.236.11 - 172.29.236.13
- Tunnel: no IP (because IP exist in the containers, when the components aren't deployed directly on metal)
- Storage: no IP (because IP exist in the containers, when the components aren't deployed directly on metal)
- Tunnel: no IP (because IP exist in the containers, when the components
aren't deployed directly on metal)
- Storage: no IP (because IP exist in the containers, when the components
aren't deployed directly on metal)

- Addresses for the compute nodes:

@ -79,10 +79,11 @@ practices, refer to `GitHub's documentation on generating SSH keys`_.
Configuring LVM
~~~~~~~~~~~~~~~

`Logical Volume Manager (LVM)`_ allows a single device to be split into multiple
logical volumes which appear as a physical storage device to the operating
system. The Block Storage (cinder) service, as well as the LXC containers that
run the OpenStack infrastructure, can optionally use LVM for their data storage.
`Logical Volume Manager (LVM)`_ allows a single device to be split into
multiple logical volumes which appear as a physical storage device to the
operating system. The Block Storage (cinder) service, as well as the LXC
containers that run the OpenStack infrastructure, can optionally use LVM for
their data storage.

.. note::

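Creating the ``cinder-volumes`` and ``lxc`` volume groups mentioned elsewhere
in this change is conventional LVM work; a sketch, assuming ``/dev/sdb`` and
``/dev/sdc`` are spare block devices on the host:

.. code-block:: console

   # pvcreate --metadatasize 2048 /dev/sdb
   # vgcreate cinder-volumes /dev/sdb
   # pvcreate --metadatasize 2048 /dev/sdc
   # vgcreate lxc /dev/sdc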
@ -15,7 +15,8 @@ script. Any of these steps can safely be run multiple times.
Check out the Newton release
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Ensure your OpenStack-Ansible code is on the latest Newton release tag (14.x.x).
Ensure your OpenStack-Ansible code is on the latest Newton release tag
(14.x.x).

.. code-block:: console

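The console block above (collapsed by the viewer) is a tag checkout; a sketch,
where ``14.x.x`` stands in for the latest Newton tag:

.. code-block:: console

   # cd /opt/openstack-ansible
   # git fetch --all
   # git checkout 14.x.x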
@ -19,7 +19,8 @@ A minor upgrade typically requires the following steps:

# cd /opt/openstack-ansible

#. Ensure your OpenStack-Ansible code is on the latest Newton release tag (14.x.x):
#. Ensure your OpenStack-Ansible code is on the latest Newton release tag
(14.x.x):

.. code-block:: console

@ -114,7 +115,8 @@ tasks will execute.

# cd /opt/openstack-ansible/playbooks

#. See the hosts in the ``nova_compute`` group which a playbook executes against:
#. See the hosts in the ``nova_compute`` group which a playbook executes
against:

.. code-block:: console

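The console block above (collapsed by the viewer) lists the group members with
Ansible's built-in flag:

.. code-block:: console

   # ansible nova_compute --list-hosts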
@ -35,23 +35,23 @@ variable override files matching the pattern
Variable names within comments are updated.

This script creates files of the form
``/etc/openstack_deploy.NEWTON/VARS_MIGRATED_file``. For example, once the script has
processed the file ``/etc/openstack_deploy/user_variables.yml``, it creates
``/etc/openstack_deploy.NEWTON/VARS_MIGRATED_user_variables``. This indicates to
OpenStack-Ansible to skip this step on successive runs. The script itself does
not check for this file.
``/etc/openstack_deploy.NEWTON/VARS_MIGRATED_file``. For example, once the
script has processed the file ``/etc/openstack_deploy/user_variables.yml``, it
creates ``/etc/openstack_deploy.NEWTON/VARS_MIGRATED_user_variables``. This
indicates to OpenStack-Ansible to skip this step on successive runs. The script
itself does not check for this file.

The variable changes are shown in the following table.

.. This table was made with the output of
``scripts/upgrade-utilities/scripts/make_rst_table.py``. Insertion needs to be
done manually since the OpenStack publish jobs do not use `make` and there
is not yet a Sphinx extension that runs an abitrary script on build.
``scripts/upgrade-utilities/scripts/make_rst_table.py``. Insertion needs to
be done manually since the OpenStack publish jobs do not use `make` and
there is not yet a Sphinx extension that runs an arbitrary script on build.

+------------------------------------------+------------------------------------------+
| Old Value | New Value |
+==========================================+==========================================+
+------------------------------------------+------------------------------------------+
+--------------------------------------+--------------------------------------+
| Old Value | New Value |
+======================================+======================================+
+--------------------------------------+--------------------------------------+

Called by :ref:`config-change-playbook`

@ -27,7 +27,8 @@ the cleanup.
``ansible_fact_cleanup.yml``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

This calls a script to removes files in ``/etc/openstack_deploy/ansible_facts/``
This calls a script to remove files in
``/etc/openstack_deploy/ansible_facts/``

.. _config-change-playbook:

@ -44,31 +45,32 @@ changing the configuration.
``user-secrets-adjustment.yml``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

This playbook ensures that the user secrets file is updated based on the example
file in the main repository, making it possible to guarantee all secrets move
into the upgraded environment and generate appropriately.
This adds only new secrets, such as those necessary for new services or new settings
added to existing services. Values set previously are not changed.
This playbook ensures that the user secrets file is updated based on the
example file in the main repository, making it possible to guarantee all
secrets move into the upgraded environment and generate appropriately.
This adds only new secrets, such as those necessary for new services or new
settings added to existing services. Values set previously are not changed.

.. _repo-server-pip-conf-removal:

``repo-server-pip-conf-removal.yml``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The presence of ``pip.conf`` locks down all Python installations to packages on the
repo server. If ``pip.conf`` exists on the repo server, it creates a circular
dependency, causing build failures.
The presence of ``pip.conf`` locks down all Python installations to packages
on the repo server. If ``pip.conf`` exists on the repo server, it creates a
circular dependency, causing build failures.

.. _old-hostname-compatibility:

``old-hostname-compatibility.yml``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

This playbook ensures an alias is created for old hostnames that may not be RFC
1034 or 1035 compatible. Using a hostname alias allows agents to continue working
in cases where the hostname is also the registered agent name. This playbook is
only needed for upgrades of in-place upgrades of existing nodes or if a node is replaced or
rebuilt it will be brought into the cluster using a compliant hostname.
This playbook ensures an alias is created for old hostnames that may not be
RFC 1034 or 1035 compatible. Using a hostname alias allows agents to continue
working in cases where the hostname is also the registered agent name. This
playbook is only needed for in-place upgrades of existing nodes; if a node is
replaced or rebuilt, it is brought into the cluster using a compliant
hostname.

.. _setup-infra-playbook:

@ -79,7 +81,8 @@ The ``playbooks`` directory contains the ``setup-infrastructure.yml`` playbook.
The ``run-upgrade.sh`` script calls ``setup-infrastructure.yml`` with specific
arguments to upgrade MariaDB and RabbitMQ.

For example, to run an upgrade for both components at once, run the following commands:
For example, to run an upgrade for both components at once, run the following
commands:

.. code-block:: console

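The console block above (collapsed by the viewer) passes the upgrade flags on
the command line; a sketch consistent with the surrounding prose:

.. code-block:: console

   # cd /opt/openstack-ansible/playbooks
   # openstack-ansible setup-infrastructure.yml \
       -e 'galera_upgrade=true' -e 'rabbitmq_upgrade=true'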
@ -95,8 +98,8 @@ upgrade RabbitMQ.
controls upgrading the major or minor versions.

Upgrading RabbitMQ in the Newton release is optional. The
``run-upgrade.sh`` script does not automatically upgrade it. To upgrade RabbitMQ,
insert the ``rabbitmq_upgrade: true``
``run-upgrade.sh`` script does not automatically upgrade it. To upgrade
RabbitMQ, insert the ``rabbitmq_upgrade: true``
line into a file, such as: ``/etc/openstack_deploy/user_variables.yml``.

The ``galera_upgrade`` variable tells the ``galera_server`` role to remove the