[Docs] Restructure inventory documentation

The inventory documentation had started to become spread out, and
was also too massive to fit in a single page.

This moves it to the inventory section of the reference and, to
improve readability, splits the previous content into sub-pages.

Change-Id: If2c6b83abafdc66879d818df4c9690142389a965
This commit is contained in:
Jean-Philippe Evrard 2018-03-12 16:29:32 +00:00
parent 80905d232d
commit 68e3f202b4
13 changed files with 260 additions and 217 deletions

View File

@ -30,7 +30,6 @@ Contents:
code-rules
testing
periodic-work
inventory-and-vars
scripts
core-reviewers
distributions

View File

@ -1,51 +0,0 @@
=======================
Inventory and variables
=======================
Our dynamic Inventory
^^^^^^^^^^^^^^^^^^^^^
OpenStack-Ansible ships with its own dynamic inventory. You can
find more explanations on the `inventory here`_.
Variable precedence
^^^^^^^^^^^^^^^^^^^
Role defaults
-------------
Every role has a file, ``defaults/main.yml`` which holds the
usual variables overridable by a deployer, like a regular Ansible
role. This defaults are the closest possible to OpenStack standards.
Group vars and host vars
------------------------
OpenStack-Ansible provides safe defaults for deployers in its
group_vars folder. They take care of the wiring between different
roles, like for example storing information on how to reach
RabbitMQ from nova role.
You can override the existing group vars (and host vars) by creating
your own folder in /etc/openstack_deploy/group_vars (and
/etc/openstack_deploy/host_vars respectively).
If you want to change the location of the override folder, you
can adapt your openstack-ansible.rc file, or export
``GROUP_VARS_PATH`` and ``HOST_VARS_PATH`` during your shell session.
Role vars
---------
Because OpenStack-Ansible is following Ansible precedence, every role
``vars/`` will take precedence over group vars. This is intentional.
You should avoid overriding these variables.
User variables
--------------
If you want to override a playbook or a role variable, you can define
the variable you want to override in a
``/etc/openstack_deploy/user_*.yml`` file.
.. _inventory here: ../reference/index.html

View File

@ -14,8 +14,11 @@ the interfaces for instance traffic, please see the
.. _OpenStack Networking Guide: http://docs.openstack.org/networking-guide/
Bonded network interfaces
~~~~~~~~~~~~~~~~~~~~~~~~~
For details on the configuration of networking for your
environment, see :ref:`openstack-user-config-reference`.
Physical host interfaces
~~~~~~~~~~~~~~~~~~~~~~~~
In a typical production environment, physical network interfaces are combined
in bonded pairs for better redundancy and throughput. Avoid using two ports on
@ -101,17 +104,3 @@ The following diagram shows how virtual machines connect to the ``br-vlan`` and
``br-vxlan`` bridges and send traffic to the network outside the host:
.. image:: ../figures/networking-compute.png
.. _openstack-user-config-reference:
Reference for openstack_user_config settings
--------------------------------------------
The ``openstack_user_config.yml.example`` file is heavily commented with the
details of how to do more advanced container networking configuration. The
contents of the file are shown here for reference.
.. literalinclude:: ../../../../etc/openstack_deploy/openstack_user_config.yml.example
:language: yaml
:start-after: under the License.

View File

@ -14,4 +14,5 @@ installation of OpenStack that may include their own software.
:maxdepth: 1
using-overrides
extra-python-software
extending-osa

View File

@ -1,5 +1,5 @@
Extending OpenStack-Ansible
===========================
Using OpenStack-Ansible within your project
===========================================
Including OpenStack-Ansible in your project
-------------------------------------------

View File

@ -0,0 +1,37 @@
Adding extra python software
============================
The system will allow you to install and build any package that is
pip-installable. The repository infrastructure will look for and create any
Git-based or PyPI-installable package. When the package is built, the
repo-build role creates the sources as Python wheels to extend the base
system and requirements.
While the pre-built packages in the repository infrastructure are
comprehensive, you may need to change the source locations and versions of
packages to suit different deployment needs. Adding additional repositories
as overrides is as simple as listing entries within the variable file of
your choice. Any ``user_*.yml`` file within the ``/etc/openstack_deploy``
directory will work to facilitate the addition of new packages.

.. code-block:: yaml

   swift_git_repo: https://private-git.example.org/example-org/swift
   swift_git_install_branch: master

Additional lists of python packages can also be overridden using a
``user_*.yml`` variable file.

.. code-block:: yaml

   swift_requires_pip_packages:
     - virtualenv
     - python-keystoneclient
     - NEW-SPECIAL-PACKAGE

Once the variables are set, run the ``repo-build.yml`` play to build all of
the wheels within the repository infrastructure. When ready, run the target
plays to deploy your overridden source code.
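
On the deployment host, that sequence could look like the following sketch
(the swift play name is an assumption based on the usual
``os-<service>-install.yml`` naming pattern):

.. code-block:: shell

   # Rebuild the wheels using the overridden sources
   openstack-ansible repo-build.yml

   # Deploy the service that consumes the overridden code
   openstack-ansible os-swift-install.yml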

View File

@ -3,8 +3,54 @@
Overriding default configuration
================================
user_*.yml files
~~~~~~~~~~~~~~~~
Variable precedence
~~~~~~~~~~~~~~~~~~~
Role defaults
-------------
Every role has a file, ``defaults/main.yml``, which holds the
usual variables overridable by a deployer, like a regular Ansible
role. These defaults are as close as possible to OpenStack standards.
They can be overridden at multiple levels.
Group vars and host vars
------------------------
OpenStack-Ansible provides safe defaults for deployers in its
``group_vars`` folder. They take care of the wiring between different
roles, for example storing information on how to reach
RabbitMQ from the nova role.
You can override the existing group vars (and host vars) by creating
your own folder in ``/etc/openstack_deploy/group_vars`` (and
``/etc/openstack_deploy/host_vars``, respectively).
If you want to change the location of the override folders, you
can adapt your ``openstack-ansible.rc`` file, or export
``GROUP_VARS_PATH`` and ``HOST_VARS_PATH`` during your shell session.
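
A shell session overriding the defaults might look like this sketch (the
folder locations are illustrative):

.. code-block:: shell

   # Point OpenStack-Ansible at custom override folders for this session
   export GROUP_VARS_PATH="/opt/custom-vars/group_vars"
   export HOST_VARS_PATH="/opt/custom-vars/host_vars"
   openstack-ansible setup-hosts.yml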
Role vars
---------
Every role makes use of additional variables in ``vars/`` which take
precedence over group vars.
These variables are typically internal to the role and are not
designed to be overridden. However, deployers may choose to override
them using extra-vars by placing the overrides into the user variables
file.
User variables
--------------
If you want to override a variable globally, you can define
the variable you want to override in a
``/etc/openstack_deploy/user_*.yml`` file. It will apply to all hosts.
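
For example, a global override could be placed in a
``/etc/openstack_deploy/user_variables.yml`` file as follows (the variable
shown is only an illustration):

.. code-block:: yaml

   # Applies to all hosts, because user_*.yml files are loaded as extra-vars
   debug: true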
user_*.yml files in more details
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Files in ``/etc/openstack_deploy`` beginning with ``user_`` will be
automatically sourced in any ``openstack-ansible`` command. Alternatively,
@ -19,53 +65,12 @@ of these files more arduous. Rather, recommended practice is to place your own
variables in files named following the ``user_*.yml`` pattern so they will be
sourced alongside those used exclusively by OpenStack-Ansible.
Ordering and precedence
^^^^^^^^^^^^^^^^^^^^^^^
``user_*.yml`` files contain YAML variables which are applied as extra-vars
when executing ``openstack-ansible`` to run playbooks. They will be sourced
in alphanumeric order by ``openstack-ansible``. If duplicate variables occur
in the ``user_*.yml`` files, the variable in the last file read will take
precedence.
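
As a minimal sketch of that ordering (file and variable names are
hypothetical):

.. code-block:: yaml

   # /etc/openstack_deploy/user_aaa.yml -- read first
   my_option: first

   # /etc/openstack_deploy/user_zzz.yml -- read last, so this value wins
   my_option: second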
Adding extra python packages
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The system will allow you to install and build any package that is a python
installable. The repository infrastructure will look for and create any
git based or PyPi installable package. When the package is built the repo-build
role will create the sources as Python wheels to extend the base system and
requirements.
While the packages pre-built in the repository-infrastructure are
comprehensive, it may be needed to change the source locations and versions of
packages to suit different deployment needs. Adding additional repositories as
overrides is as simple as listing entries within the variable file of your
choice. Any ``user_.*.yml`` file within the "/etc/openstack_deployment"
directory will work to facilitate the addition of a new packages.
.. code-block:: yaml
swift_git_repo: https://private-git.example.org/example-org/swift
swift_git_install_branch: master
Additional lists of python packages can also be overridden using a
``user_.*.yml`` variable file.
.. code-block:: yaml
swift_requires_pip_packages:
- virtualenv
- python-keystoneclient
- NEW-SPECIAL-PACKAGE
Once the variables are set call the play ``repo-build.yml`` to build all of the
wheels within the repository infrastructure. When ready run the target plays to
deploy your overridden source code.
Setting overrides in configuration files with config_template
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@ -84,7 +89,7 @@ will never be.
.. _PR2: https://github.com/ansible/ansible/pull/35453
config_template documentation
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-----------------------------
These are the options available as found within the virtual module
documentation section.
@ -133,7 +138,7 @@ documentation section.
Example task using the config_template module
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
---------------------------------------------
In this task the ``test.ini.j2`` file is a template which will be rendered and
written to disk at ``/tmp/test.ini``. The **config_overrides** entry is a
@ -181,7 +186,7 @@ this:
Discovering available overrides
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-------------------------------
All of these options can be specified in any way that suits your deployment.
In terms of ease of use and flexibility it's recommended that you define your
@ -297,7 +302,7 @@ configuration entries in the ``/etc/openstack_deploy/user_variables.yml``.
.. _OpenStack Configuration Reference: http://docs.openstack.org/draft/config-reference/
Overriding .conf files
^^^^^^^^^^^^^^^^^^^^^^
----------------------
Most often, overrides are implemented for the ``<service>.conf`` files
(for example, ``nova.conf``). These files use a standard INI file format.
@ -364,7 +369,7 @@ Use this method for any files with the ``INI`` format for in OpenStack projects
deployed in OpenStack-Ansible.
Overriding .json files
^^^^^^^^^^^^^^^^^^^^^^
----------------------
To implement access controls that are different from the ones in a standard
OpenStack environment, you can adjust the default policies applied by services.
@ -404,7 +409,7 @@ overrides, the general format for the variable name is
``<service>_policy_overrides``.
Overriding .yml files
^^^^^^^^^^^^^^^^^^^^^
---------------------
You can override ``.yml`` file values by supplying replacement YAML content.

View File

@ -3,6 +3,12 @@
Configuring the inventory
=========================
In this chapter, you can find information on how to configure
the OpenStack-Ansible dynamic inventory to meet your needs.
Introduction
~~~~~~~~~~~~
Common OpenStack services and their configuration are defined by
OpenStack-Ansible in the
``/etc/openstack_deploy/openstack_user_config.yml`` settings file.
@ -12,7 +18,6 @@ Additional services should be defined with a YAML file in
The ``/etc/openstack_deploy/env.d`` directory sources all YAML files into the
deployed environment, allowing a deployer to define additional group mappings.
This directory is used to extend the environment skeleton, or modify the
defaults defined in the ``inventory/env.d`` directory.
@ -51,11 +56,8 @@ which the container resides is added to the ``lxc_hosts`` inventory group.
Using this name for a group in the configuration will result in a runtime
error.
Customizing existing components
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Deploying directly on hosts
---------------------------
~~~~~~~~~~~~~~~~~~~~~~~~~~~
To deploy a component directly on the host instead of within a container, set
the ``is_metal`` property to ``true`` for the container group in the
@ -69,27 +71,8 @@ is the same for a service deployed directly onto the host.
The ``cinder-volume`` component is deployed directly on the host by
default. See the ``env.d/cinder.yml`` file for this example.
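
An abridged sketch of such an entry, based on the keys used in the
``env.d/cinder.yml`` example:

.. code-block:: yaml

   container_skel:
     cinder_volumes_container:
       properties:
         is_metal: true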
Omit a service or component from the deployment
-----------------------------------------------
To omit a component from a deployment, you can use one of several options:
- Remove the ``physical_skel`` link between the container group and
the host group by deleting the related file located in the ``env.d/``
directory.
- Do not run the playbook that installs the component.
Unless you specify the component to run directly on a host by using the
``is_metal`` property, a container is created for this component.
- Adjust the :ref:`affinity`
to 0 for the host group. Similar to the second option listed here, Unless
you specify the component to run directly on a host by using the ``is_metal``
property, a container is created for this component.
Deploy existing components on dedicated hosts
---------------------------------------------
To deploy a ``shared-infra`` component to dedicated hosts, modify the
files that specify the host groups and container groups for the component.
Example: Running galera on dedicated hosts
------------------------------------------
For example, to run Galera directly on dedicated hosts, you would perform the
following steps:
@ -147,27 +130,108 @@ following steps:
and ``db_hosts``) are arbitrary. Choose your own group names,
but ensure the references are consistent among all relevant files.
.. _affinity:
Checking inventory configuration for errors
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Deploying 0 (or more than one) of a component type per host
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Using the ``--check`` flag when running ``dynamic_inventory.py`` will run the
inventory build process and look for known errors, but not write any files to
disk.
When OpenStack-Ansible generates its dynamic inventory, the affinity
setting determines how many containers of a similar type are deployed on a
single physical host.
If any groups defined in the ``openstack_user_config.yml`` or ``conf.d`` files
are not found in the environment, a warning will be raised.
Using ``shared-infra_hosts`` as an example, consider this
``openstack_user_config.yml`` configuration:
This check does not do YAML syntax validation, though it will fail if there
are unparseable errors.
.. code-block:: yaml
Writing debug logs
~~~~~~~~~~~~~~~~~~~

   shared-infra_hosts:
     infra1:
       ip: 172.29.236.101
     infra2:
       ip: 172.29.236.102
     infra3:
       ip: 172.29.236.103

The ``--debug/-d`` parameter allows writing of a detailed log file for
debugging the inventory script's behavior. The output is written to
``inventory.log`` in the current working directory.
Three hosts are assigned to the ``shared-infra_hosts`` group, so
OpenStack-Ansible ensures that each host runs a single database container,
a single Memcached container, and a single RabbitMQ container. Each host has
an affinity of 1 by default, which means that each host runs one of each
container type.
The ``inventory.log`` file is appended to, not overwritten.
If you are deploying a stand-alone Object Storage (swift) environment,
you can skip the deployment of RabbitMQ. If you use this configuration,
your ``openstack_user_config.yml`` file would look as follows:
Like ``--check``, this flag is not invoked when running from ansible.
.. code-block:: yaml

   shared-infra_hosts:
     infra1:
       affinity:
         rabbit_mq_container: 0
       ip: 172.29.236.101
     infra2:
       affinity:
         rabbit_mq_container: 0
       ip: 172.29.236.102
     infra3:
       affinity:
         rabbit_mq_container: 0
       ip: 172.29.236.103

This configuration deploys a Memcached container and a database container
on each host, but no RabbitMQ containers.
Omit a service or component from the deployment
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
To omit a component from a deployment, you can use one of several options:
- Remove the ``physical_skel`` link between the container group and
the host group by deleting the related file located in the ``env.d/``
directory.
- Do not run the playbook that installs the component.
Unless you specify the component to run directly on a host by using the
``is_metal`` property, a container is created for this component.
- Adjust the :ref:`affinity`
  to 0 for the host group. Similar to the second option listed here,
  unless you specify the component to run directly on a host by using the
  ``is_metal`` property, a container is created for this component.
Deploying using a different container technology
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. note::

   While nspawn is an available containerization technology, it should be
   considered experimental at this time. Even though this subsystem is not
   yet recommended for production, it is stable enough to introduce to the
   community, and we would like feedback on it as we improve it over the
   next cycle.

OpenStack-Ansible presently supports two container technologies: LXC and
nspawn. These can be used separately or together within the same cluster,
but each host is limited to a single setting.
Using ``shared-infra_hosts`` as an example, consider this
``openstack_user_config.yml`` configuration:
.. code-block:: yaml

   shared-infra_hosts:
     infra1:
       ip: 172.29.236.101
       container_vars:
         container_tech: lxc
     infra2:
       ip: 172.29.236.102
       container_vars:
         container_tech: nspawn
     infra3:
       ip: 172.29.236.103

In this example the three hosts are assigned to the ``shared-infra_hosts``
group, and will deploy containerized workloads using ``lxc`` on **infra1**,
``nspawn`` on **infra2**, and ``lxc`` on **infra3**. Notice that **infra3**
does not define the ``container_tech`` option because it is not required.
If this option is undefined, the value is automatically set to ``lxc``
within the generated inventory. The two supported values for the
``container_tech`` configuration variable are ``lxc`` and ``nspawn``.

View File

@ -4,11 +4,15 @@ Generating the Inventory
The script that creates the inventory is located at
``inventory/dynamic_inventory.py``.
Executing the dynamic_inventory.py script
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This section explains how Ansible runs the inventory, and how
you can run it manually to examine its behavior.
Executing the dynamic_inventory.py script manually
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
When running an Ansible command (such as ``ansible``, ``ansible-playbook`` or
``openstack-ansible``) Ansible executes the ``dynamic_inventory.py`` script
``openstack-ansible``) Ansible automatically executes the
``dynamic_inventory.py`` script
and uses its output as the inventory.
Run the following command:
@ -97,3 +101,27 @@ source of truth for repeated runs.
The same JSON structure is printed to stdout, which is consumed by Ansible as
the inventory for the playbooks.
Checking inventory configuration for errors
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Using the ``--check`` flag when running ``dynamic_inventory.py`` will run the
inventory build process and look for known errors, but not write any files to
disk.
If any groups defined in the ``openstack_user_config.yml`` or ``conf.d`` files
are not found in the environment, a warning will be raised.
This check does not do YAML syntax validation, though it will fail if the
file cannot be parsed.
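
Run from the repository checkout, a check might look like this sketch (no
flags beyond the ``--check`` flag described above are assumed):

.. code-block:: shell

   # Build the inventory and report known errors without writing files
   inventory/dynamic_inventory.py --check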
Writing debug logs
~~~~~~~~~~~~~~~~~~~
The ``--debug/-d`` parameter allows writing of a detailed log file for
debugging the inventory script's behavior. The output is written to
``inventory.log`` in the current working directory.
The ``inventory.log`` file is appended to, not overwritten.
Like ``--check``, this flag is not invoked when running from Ansible.
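
A debug run might look like the following sketch:

.. code-block:: shell

   # Write detailed debug output; inventory.log is appended to on each run
   inventory/dynamic_inventory.py --debug
   tail inventory.log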

View File

@ -15,5 +15,6 @@ for OpenStack-Ansible.
generate-inventory
configure-inventory
understanding-inventory
openstack-user-config-reference
manage-inventory
advanced-topics

View File

@ -1,5 +1,14 @@
Inspecting and managing the inventory
=====================================
Inspecting and manipulating the inventory
=========================================
.. warning::

   Never edit or delete the files
   ``/etc/openstack_deploy/openstack_inventory.json`` or
   ``/etc/openstack_deploy/openstack_hostnames_ips.yml``. Doing so can
   lead to file corruption and problems with the inventory: hosts and
   containers could disappear and new ones would appear, breaking your
   existing deployment.
The file ``scripts/inventory-manage.py`` is used to produce human-readable
output based on the ``/etc/openstack_deploy/openstack_inventory.json`` file.

View File

@ -0,0 +1,12 @@
.. _openstack-user-config-reference:
openstack_user_config settings reference
========================================
The ``openstack_user_config.yml.example`` file is heavily commented with the
details of how to do more advanced container networking configuration. The
contents of the file are shown here for reference.
.. literalinclude:: ../../../../etc/openstack_deploy/openstack_user_config.yml.example
:language: yaml
:start-after: under the License.

View File

@ -97,54 +97,3 @@ playbook, you see that the playbook applies to hosts in the ``memcached``
group. Other services might have more complex deployment needs. They define and
consume inventory container groups differently. Mapping components to several
groups in this way allows flexible targeting of roles and tasks.
.. _affinity:
Affinity
~~~~~~~~
When OpenStack-Ansible generates its dynamic inventory, the affinity
setting determines how many containers of a similar type are deployed on a
single physical host.
Using ``shared-infra_hosts`` as an example, consider this
``openstack_user_config.yml`` configuration:
.. code-block:: yaml
shared-infra_hosts:
infra1:
ip: 172.29.236.101
infra2:
ip: 172.29.236.102
infra3:
ip: 172.29.236.103
Three hosts are assigned to the `shared-infra_hosts` group,
OpenStack-Ansible ensures that each host runs a single database container,
a single Memcached container, and a single RabbitMQ container. Each host has
an affinity of 1 by default, which means that each host runs one of each
container type.
If you are deploying a stand-alone Object Storage (swift) environment,
you can skip the deployment of RabbitMQ. If you use this configuration,
your ``openstack_user_config.yml`` file would look as follows:
.. code-block:: yaml
shared-infra_hosts:
infra1:
affinity:
rabbit_mq_container: 0
ip: 172.29.236.101
infra2:
affinity:
rabbit_mq_container: 0
ip: 172.29.236.102
infra3:
affinity:
rabbit_mq_container: 0
ip: 172.29.236.103
This configuration deploys a Memcached container and a database container
on each host, but no RabbitMQ containers.