[docs] Add draft install guide directory

Add draft directory but hide the link on the
index.rst so a preview build is available for patch
reviews

Change-Id: I40fdd4e6c0301c05641fc63643b40f409d5361c3
Implements: blueprint osa-install-guide-overhaul
daz 2016-06-09 15:07:41 +10:00
parent da5da7b773
commit 49c5d1daf9
85 changed files with 8007 additions and 0 deletions


@ -18,6 +18,11 @@ review.
upgrade-guide/index
developer-docs/index
.. toctree::
:hidden:
install-guide-revised-draft/index
.. _Newton Series Timeline: https://launchpad.net/openstack-ansible/trunk
.. _Newton Series Release Notes: http://docs.openstack.org/releasenotes/openstack-ansible/unreleased.html


@ -0,0 +1,24 @@
`Home <index.html>`_ OpenStack-Ansible Installation Guide
===============================
Appendix A: Configuration files
===============================
`openstack_user_config.yml
<https://raw.githubusercontent.com/openstack/openstack-ansible/master/etc/openstack_deploy/openstack_user_config.yml.example>`_
`user_variables.yml
<https://raw.githubusercontent.com/openstack/openstack-ansible/master/etc/openstack_deploy/user_variables.yml>`_
`user_secrets.yml
<https://raw.githubusercontent.com/openstack/openstack-ansible/master/etc/openstack_deploy/user_secrets.yml>`_
`swift.yml
<https://raw.githubusercontent.com/openstack/openstack-ansible/master/etc/openstack_deploy/conf.d/swift.yml.example>`_
`extra_container.yml
<https://raw.githubusercontent.com/openstack/openstack-ansible/master/etc/openstack_deploy/env.d/extra_container.yml.example>`_
--------------
.. include:: navigation.txt


@ -0,0 +1,195 @@
`Home <index.html>`_ OpenStack-Ansible Installation Guide
================================================
Appendix H: Customizing host and service layouts
================================================
Understanding the default layout
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The default layout of containers and services in OpenStack-Ansible is driven
by the ``/etc/openstack_deploy/openstack_user_config.yml`` file and the
contents of both the ``/etc/openstack_deploy/conf.d/`` and
``/etc/openstack_deploy/env.d/`` directories. Use these sources to define the
group mappings used by the playbooks to target hosts and containers for roles
used in the deploy.
Conceptually, these can be thought of as mapping from two directions. You
define host groups, which gather the target hosts into inventory groups,
through the ``/etc/openstack_deploy/openstack_user_config.yml`` file and the
contents of the ``/etc/openstack_deploy/conf.d/`` directory. You define
container groups, which can map from the service components to be deployed up
to host groups, through files in the ``/etc/openstack_deploy/env.d/``
directory.
To customize the layout of components for your deployment, modify the
host groups and container groups appropriately to represent the layout you
desire before running the installation playbooks.
Understanding host groups
-------------------------
As part of initial configuration, each target host appears in either the
``/etc/openstack_deploy/openstack_user_config.yml`` file or in files within
the ``/etc/openstack_deploy/conf.d/`` directory. We use a format for files in
``conf.d/`` which is identical to the syntax used in the
``openstack_user_config.yml`` file. These hosts are listed under one or more
headings such as ``shared-infra_hosts`` or ``storage_hosts`` which serve as
Ansible group mappings. We treat these groupings as mappings to the physical
hosts.
The example file ``haproxy.yml.example`` in the ``conf.d/`` directory provides
a simple example of defining a host group (``haproxy_hosts``) with two hosts
(``infra1`` and ``infra2``).
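For reference, a condensed sketch of that host group definition follows; the
IP addresses shown are illustrative assumptions.
.. code-block:: yaml
   haproxy_hosts:
     infra1:
       ip: 172.29.236.101
     infra2:
       ip: 172.29.236.102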
A more complex example file is ``swift.yml.example``. Here, in addition, we
specify host variables for a target host using the ``container_vars`` key.
OpenStack-Ansible applies all entries under this key as host-specific
variables to any component containers on the specific host.
.. note::
Our current recommendation is for new inventory groups, particularly for new
services, to be defined using a new file in the ``conf.d/`` directory in
order to manage file size.
Understanding container groups
------------------------------
Additional group mappings can be found within files in the
``/etc/openstack_deploy/env.d/`` directory. These groupings are treated as
virtual mappings from the host groups (described above) onto the container
groups which define where each service deploys. By reviewing files within the
``env.d/`` directory, you can begin to see the nesting of groups represented
in the default layout.
We begin our review with ``shared-infra.yml``. In this file we define a
new container group (``shared-infra_containers``) as a subset of the
``all_containers`` group. This new container group is mapped to a new
(``shared-infra_hosts``) host group. This means you deploy all service
components under the new (``shared-infra_containers``) container group to each
target host in the host group (``shared-infra_hosts``).
Within a ``physical_skel`` segment, the OpenStack-Ansible dynamic inventory
expects to find a pair of keys. The first key maps to items in the
``container_skel`` and the second key maps to the target host groups
(described above) which are responsible for hosting the service component.
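For reference, a condensed sketch of the ``physical_skel`` section of
``shared-infra.yml`` described above; it follows the same pattern as the
``env.d/galera.yml`` example later in this appendix.
.. code-block:: yaml
   physical_skel:
     shared-infra_containers:
       belongs_to:
         - all_containers
     shared-infra_hosts:
       belongs_to:
         - hosts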
Next, we review ``memcache.yml``. Here, we define the new group
``memcache_container``. In this case we identify the new group as a
subset of the ``shared-infra_containers`` group, which is itself a subset of
the ``all_containers`` inventory group.
.. note::
The ``all_containers`` group is automatically defined by OpenStack-Ansible.
Any service component managed by OpenStack-Ansible maps to a subset of the
``all_containers`` inventory group, whether directly or indirectly through
another intermediate container group.
The default layout does not rely exclusively on groups being subsets of other
groups. The ``memcache`` component group is part of the ``memcache_container``
group, as well as the ``memcache_all`` group and also contains a ``memcached``
component group. If you review the ``playbooks/memcached-install.yml``
playbook you see that the playbook applies to hosts in the ``memcached``
group. Other services may have more complex deployment needs. They define and
consume inventory container groups differently. Mapping components to several
groups in this way allows flexible targeting of roles and tasks.
Customizing existing components
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Numerous customization scenarios are possible, but three popular ones are
presented here as starting points and also as common recipes.
Deploying directly on hosts
---------------------------
To deploy a component directly on the host instead of within a container, set
the ``is_metal`` property to ``true`` for the container group under the
``container_skel`` in the appropriate file.
The use of ``container_vars`` and mapping from container groups to host groups
is the same for a service deployed directly onto the host.
.. note::
The ``cinder_volume`` component is also deployed directly on the host by
default. See the ``env.d/cinder.yml`` file for this example.
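As a condensed sketch of that ``env.d/cinder.yml`` example, the relevant
``container_skel`` entry looks similar to the following (other keys omitted
for brevity):
.. code-block:: yaml
   container_skel:
     cinder_volumes_container:
       properties:
         is_metal: true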
Omit a service or component from the deployment
-----------------------------------------------
To omit a component from a deployment, several options exist.
- You could remove the ``physical_skel`` link between the container group and
the host group. The simplest way to do this is to delete the related
file located in the ``env.d/`` directory.
- You could choose not to run the playbook which installs the component.
Unless you specify the component to run directly on a host using
``is_metal``, a container is still created for this component.
- You could set the ``affinity`` to 0 for the host group, as shown in the
sketch after this list. Unless you specify the component to run directly on
a host using ``is_metal``, a container is still created for this component.
`Affinity`_ is discussed in the initial environment configuration section of
the install guide.
.. _Affinity: configure-initial.html#affinity
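The following is a minimal sketch of the ``affinity`` approach in
``openstack_user_config.yml``; the host name, IP address, and the choice of
``memcache_container`` as the group to skip are illustrative assumptions.
.. code-block:: yaml
   shared-infra_hosts:
     infra1:
       # Do not create a memcached container on this host (illustrative).
       affinity:
         memcache_container: 0
       ip: 172.29.236.101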
Deploying existing components on dedicated hosts
------------------------------------------------
To deploy a shared-infra component onto dedicated hosts, modify both the
files specifying the host groups and container groups for the component.
For example, to run Galera directly on dedicated hosts the ``container_skel``
segment of the ``env.d/galera.yml`` file might look like:
.. code-block:: yaml
   container_skel:
     galera_container:
       belongs_to:
         - db_containers
       contains:
         - galera
       properties:
         log_directory: mysql_logs
         service_name: galera
         is_metal: true
.. note::
If you want to deploy within containers on these dedicated hosts, omit the
``is_metal: true`` property. We include it here as a recipe for the more
commonly requested layout.
Since we define the new container group (``db_containers`` above) we must
assign that container group to a host group. To assign the new container
group to a new host group, provide a ``physical_skel`` for the new host group
(in a new or existing file, such as ``env.d/galera.yml``) like the following:
.. code-block:: yaml
   physical_skel:
     db_containers:
       belongs_to:
         - all_containers
     db_hosts:
       belongs_to:
         - hosts
Lastly, define the host group (``db_hosts`` above) in a ``conf.d/`` file (such
as ``galera.yml``):
.. code-block:: yaml
   db_hosts:
     db-host1:
       ip: 172.39.123.11
     db-host2:
       ip: 172.39.123.12
     db-host3:
       ip: 172.39.123.13
.. note::
Each of the custom group names in this example (``db_containers``
and ``db_hosts``) is arbitrary. Choose your own group names,
but ensure the references are consistent among all relevant files.


@ -0,0 +1,140 @@
`Home <index.html>`__ OpenStack-Ansible Installation Guide
==========================
Appendix C: Minor upgrades
==========================
Upgrades between minor versions of OpenStack-Ansible are handled by
updating the repository clone to the latest tag, then executing playbooks
against the target hosts.
.. note:: To avoid issues and simplify troubleshooting during the
upgrade, disable the security hardening role before running the
following steps by setting the ``apply_security_hardening``
variable to ``False``.
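For example, a minimal override in ``/etc/openstack_deploy/user_variables.yml``:
.. code-block:: yaml
   # Disable the security hardening role for the duration of the upgrade.
   apply_security_hardening: False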
A minor upgrade typically requires the execution of the following:
#. Change directory into the repository clone root directory:
.. code-block:: shell-session
# cd /opt/openstack-ansible
#. Update the git remotes:
.. code-block:: shell-session
# git fetch --all
#. Checkout the latest tag (the below tag is an example):
.. code-block:: shell-session
# git checkout 13.0.1
#. Update all the dependent roles to the latest versions:
.. code-block:: shell-session
# ./scripts/bootstrap-ansible.sh
#. Change into the playbooks directory:
.. code-block:: shell-session
# cd playbooks
#. Update the hosts:
.. code-block:: shell-session
# openstack-ansible setup-hosts.yml
#. Update the infrastructure:
.. code-block:: shell-session
# openstack-ansible -e rabbitmq_upgrade=true \
setup-infrastructure.yml
#. Update all OpenStack services:
.. code-block:: shell-session
# openstack-ansible setup-openstack.yml
.. note::
Scope upgrades to specific OpenStack components by
executing each of the component playbooks using groups.
For example:
#. Update only the Compute hosts:
.. code-block:: shell-session
# openstack-ansible os-nova-install.yml --limit nova_compute
#. Update only a single Compute host:
.. note::
Skipping the ``nova-key`` tag is necessary as the keys on
all Compute hosts will not be gathered.
.. code-block:: shell-session
# openstack-ansible os-nova-install.yml --limit <node-name> \
--skip-tags 'nova-key'
To see which hosts belong to which groups, use the
``inventory-manage.py`` script to show all groups and their hosts.
For example:
#. Change directory into the repository clone root directory:
.. code-block:: shell-session
# cd /opt/openstack-ansible
#. Show all groups and which hosts belong to them:
.. code-block:: shell-session
# ./scripts/inventory-manage.py -G
#. Show all hosts and the groups they belong to:
.. code-block:: shell-session
# ./scripts/inventory-manage.py -g
To see which hosts a playbook will execute against, and which tasks will
be executed on those hosts:
#. Change directory into the repository clone playbooks directory:
.. code-block:: shell-session
# cd /opt/openstack-ansible/playbooks
#. See the hosts in the ``nova_compute`` group which a playbook executes against:
.. code-block:: shell-session
# openstack-ansible os-nova-install.yml --limit nova_compute \
--list-hosts
#. See the tasks which will be executed on hosts in the ``nova_compute`` group:
.. code-block:: shell-session
# openstack-ansible os-nova-install.yml --limit nova_compute \
--skip-tags 'nova-key' \
--list-tasks
--------------
.. include:: navigation.txt


@ -0,0 +1,165 @@
`Home <index.html>`__ OpenStack-Ansible Installation Guide
===========================================================
Appendix G. Installation on hosts with limited connectivity
===========================================================
Introduction
~~~~~~~~~~~~
Many playbooks and roles in OpenStack-Ansible retrieve dependencies from the
public Internet by default. Many deployers block direct outbound connectivity
to the Internet when implementing network security measures. We recommend a
set of practices and configuration overrides deployers can use when running
OpenStack-Ansible in network environments that block Internet connectivity.
The options below are not mutually exclusive and may be combined if desired.
Example internet dependencies
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
- Software packages
- LXC container images
- Source code repositories
- GPG keys for package validation
Practice A: Mirror internet resources locally
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
You may choose to operate and maintain mirrors of OpenStack-Ansible and
OpenStack dependencies. Mirrors often provide a great deal of risk mitigation
by reducing dependencies on resources and systems outside of your direct
control. Mirrors can also provide greater stability, performance and security.
Software package repositories
-----------------------------
Many packages used to run OpenStack are installed using `pip`. We advise
mirroring the PyPI package index used by `pip`.
Many software packages are installed on the target hosts using `.deb`
packages. We advise mirroring the repositories that host these packages.
Ubuntu repositories to mirror:
- trusty
- trusty-updates
- trusty-backports
Galera-related repositories to mirror:
- https://mirror.rackspace.com/mariadb/repo/10.0/ubuntu
- https://repo.percona.com/apt
These lists are intentionally not exhaustive. Consult the OpenStack-Ansible
playbooks and role documentation for further repositories and the variables
that may be used to override the repository location.
LXC container images
--------------------
OpenStack-Ansible relies upon community built LXC images when building
containers for OpenStack services. Deployers may choose to create, maintain,
and host their own container images. Consult the
``openstack-ansible-lxc_container_create`` role for details on configuration
overrides for this scenario.
Source code repositories
------------------------
OpenStack-Ansible relies upon Ansible Galaxy to download Ansible roles when
bootstrapping a deployment host. Deployers may wish to mirror the dependencies
that are downloaded by the ``bootstrap-ansible.sh`` script.
Deployers can configure the script to source Ansible from an alternate Git
repository by setting the environment variable ``ANSIBLE_GIT_REPO``.
Deployers can configure the script to source Ansible role dependencies from
alternate locations by providing a custom role requirements file and specifying
the path to that file using the environment variable ``ANSIBLE_ROLE_FILE``.
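As a sketch, assuming a local Git mirror and a custom role requirements file
(the URL and path shown are illustrative), export the overrides before running
the bootstrap script:
.. code-block:: shell-session
   # export ANSIBLE_GIT_REPO="https://git.example.com/mirrors/ansible"
   # export ANSIBLE_ROLE_FILE="/opt/custom-ansible-role-requirements.yml"
   # ./scripts/bootstrap-ansible.sh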
Practice B: Proxy access to internet resources
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Configure target and deployment hosts to reach public internet resources via
HTTP or SOCKS proxy server(s). OpenStack-Ansible may be used to configure
target hosts to use the proxy server(s). OpenStack-Ansible does not provide
automation for creating the proxy server(s).
Basic proxy configuration
-------------------------
The following configuration makes most network clients on the target
hosts connect via the specified proxy. For example, these settings
affect:
- Most Python network modules
- `curl`
- `wget`
- `openstack`
Use the ``no_proxy`` environment variable to specify hosts that you cannot
reach through the proxy. These often are the hosts in the management network.
In the example below, ``no_proxy`` is set to localhost only, but the default
configuration file suggests using variables to list all the hosts/containers'
management addresses as well as the load balancer internal/external addresses.
Configuration changes are made in ``/etc/openstack_deploy/user_variables.yml``.
.. code-block:: yaml
   # Used to populate /etc/environment
   global_environment_variables:
     HTTP_PROXY: "http://proxy.example.com:3128"
     HTTPS_PROXY: "http://proxy.example.com:3128"
     NO_PROXY: "localhost,127.0.0.1"
     http_proxy: "http://proxy.example.com:3128"
     https_proxy: "http://proxy.example.com:3128"
     no_proxy: "localhost,127.0.0.1"
``apt-get`` proxy configuration
-------------------------------
See `Setting up apt-get to use a http-proxy`_
.. _Setting up apt-get to use a http-proxy: https://help.ubuntu.com/community/AptGet/Howto#Setting_up_apt-get_to_use_a_http-proxy
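A minimal sketch of the apt proxy configuration follows; the configuration
file name is an example, and the proxy URL matches the illustrative value used
above.
.. code-block:: shell-session
   # echo 'Acquire::http::Proxy "http://proxy.example.com:3128";' > /etc/apt/apt.conf.d/95proxy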
Deployment host proxy configuration for bootstrapping Ansible
-------------------------------------------------------------
Configure the ``bootstrap-ansible.sh`` script used to install Ansible and
Ansible role dependencies on the deployment host to use a proxy by setting the
environment variables ``HTTPS_PROXY`` or ``HTTP_PROXY``.
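For example (the proxy URL is the same illustrative value used earlier):
.. code-block:: shell-session
   # export HTTPS_PROXY="http://proxy.example.com:3128"
   # ./scripts/bootstrap-ansible.sh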
Considerations when proxying TLS traffic
----------------------------------------
Proxying TLS traffic often interferes with the client's ability to perform
successful validation of the certificate chain. Various configuration
variables exist within the OpenStack-Ansible playbooks and roles that allow a
deployer to ignore these validation failures. Find an example
``/etc/openstack_deploy/user_variables.yml`` configuration below:
.. code-block:: yaml
pip_validate_certs: false
galera_package_download_validate_certs: false
The list above is intentionally not exhaustive. Additional variables may exist
within the project and will be named using the ``*_validate_certs`` pattern.
Disable certificate chain validation on a case by case basis and only after
encountering failures that are known to only be caused by the proxy server(s).
Ansible support for proxy servers
---------------------------------
The `get_url` and `uri` modules in Ansible 1.9.x have inconsistent and buggy
behavior when used in concert with many popular proxy servers and
configurations. An example Launchpad bug can be found `here
<https://bugs.launchpad.net/openstack-ansible/+bug/1556975/>`_. The comments
contain a workaround that has been effective for some deployers.
--------------
.. include:: navigation.txt


@ -0,0 +1,146 @@
`Home <index.html>`__ OpenStack-Ansible Installation Guide
Appendix F. Using Nuage Neutron Plugin
--------------------------------------
Introduction
============
This document describes the steps required to deploy Nuage Networks VCS
with OpenStack-Ansible (OSA). These steps include:
- Install prerequisites.
- Configure Neutron to use the Nuage Networks Neutron plugin.
- Configure the Nuage Networks Neutron plugin.
- Download Nuage Networks VCS components and playbooks.
- Execute the playbooks.
Prerequisites
=============
#. The deployment environment has been configured according to OSA
best-practices. This includes cloning OSA software and bootstrapping
Ansible. See `OpenStack-Ansible Install Guide <index.html>`_
#. VCS stand-alone components, VSD and VSC, have been configured and
deployed. (See Nuage Networks VSD and VSC Install Guides.)
#. Nuage VRS playbooks have been cloned to the deployment host from
`https://github.com/nuagenetworks/nuage-openstack-ansible <https://github.com/nuagenetworks/nuage-openstack-ansible>`_.
This guide assumes a deployment host path of
/opt/nuage-openstack-ansible
Configure Nuage Neutron Plugin
==============================
Configuring the Neutron plugin requires creating or editing parameters
in the following two files:
- ``/etc/openstack_deploy/user_nuage_vars.yml``
- ``/etc/openstack_deploy/user_variables.yml``
On the deployment host, copy the Nuage user variables file from
``/opt/nuage-openstack-ansible/etc/user_nuage_vars.yml`` to
the ``/etc/openstack_deploy/`` directory.
.. code-block:: shell-session
# cp /opt/nuage-openstack-ansible/etc/user_nuage_vars.yml /etc/openstack_deploy/
Then modify the following parameters in this file to match your Nuage VCS environment:
#. Replace the *VSD Enterprise Name* parameter with the desired name of the
VSD Enterprise:
.. code-block:: yaml
nuage_net_partition_name: "<VSD Enterprise Name>"
#. Replace the *VSD IP* and *VSD GUI Port* parameters according to your VSD
configuration:
.. code-block:: yaml
nuage_vsd_ip: "<VSD IP>:<VSD GUI Port>"
#. Replace *VSD Username*, *VSD Password*, and *VSD Organization Name* with
the login credentials for the VSD GUI in your environment:
.. code-block:: yaml
nuage_vsd_username: "<VSD Username>"
nuage_vsd_password: "<VSD Password>"
nuage_vsd_organization: "<VSD Organization Name>"
#. Replace *Nuage VSP Version* with the Nuage VSP release you plan to use
for the integration. For example, if you use Nuage VSP release 3.2, this
value would be *v3\_2*:
.. code-block:: yaml
nuage_base_uri_version: "<Nuage VSP Version>"
#. Replace *Nuage VSD CMS Id* with the CMS-Id generated by VSD to manage
your OpenStack cluster:
.. code-block:: yaml
nuage_cms_id: "<Nuage VSD CMS Id>"
#. Replace *Active VSC-IP* with the IP address of your active VSC node
and *Standby VSC-IP* with the IP address of your standby VSC node.
.. code-block:: yaml
active_controller: "<Active VSC-IP>"
standby_controller: "<Standby VSC-IP>"
#. Replace *Local Package Repository* with the URL of your local
repository hosting the Nuage VRS packages, for example ``http://192.0.2.10/debs/3.2/vrs/``:
.. code-block:: yaml
nuage_vrs_debs_repo: "deb <Local Package Repository>"
#. On the deployment host, add the following lines to your
``/etc/openstack_deploy/user_variables.yml`` file, replacing
*Local PyPi Mirror URL* with the URL of the PyPI server hosting your
Nuage OpenStack Python packages in ``.whl`` format.
.. code-block:: yaml
neutron_plugin_type: "nuage"
nova_network_type: "nuage"
pip_links:
- { name: "openstack_release", link: "{{ openstack_repo_url }}/os-releases/{{ openstack_release }}/" }
- { name: "nuage_repo", link: "<Local PyPi Mirror URL>" }
Installation
============
#. After the multi-node OpenStack cluster is set up as detailed above, start
the OpenStack deployment as described in the OpenStack-Ansible Install Guide
by running all playbooks in sequence on the deployment host.
#. After the OpenStack deployment is complete, run the Nuage VRS playbooks
in ``/opt/nuage-openstack-ansible/nuage_playbooks`` on
your deployment host to deploy Nuage VRS on all compute target hosts in
the OpenStack cluster:
.. code-block:: shell-session
# cd /opt/nuage-openstack-ansible/nuage_playbooks
# openstack-ansible nuage_all.yml
.. note::
For Nuage Networks VSP software packages, user documentation, and licenses,
contact info@nuagenetworks.net.
--------------
.. include:: navigation.txt


@ -0,0 +1,157 @@
`Home <index.html>`__ OpenStack-Ansible Installation Guide
=========================================
Appendix E: Using PLUMgrid Neutron plugin
=========================================
Installing source and host networking
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#. Clone the PLUMgrid ansible repository under the ``/opt/`` directory:
.. code-block:: shell-session
# git clone -b TAG https://github.com/plumgrid/plumgrid-ansible.git /opt/plumgrid-ansible
Replace ``TAG`` with the current stable release tag.
#. PLUMgrid will take over networking for the entire cluster. The
bridges ``br-vxlan`` and ``br-vlan`` only need to be present to prevent
the relevant containers on infra hosts from erroring out. They do not
need to be attached to any host interface or a valid network.
#. PLUMgrid requires two networks: a `Management` and a `Fabric` network.
Management is typically shared via the standard ``br-mgmt`` and Fabric
must be specified in the PLUMgrid configuration file described below.
The Fabric interface must be untagged and unbridged.
Neutron configurations
~~~~~~~~~~~~~~~~~~~~~~
To set up the neutron configuration to install PLUMgrid as the
core neutron plugin, create a user space variable file
``/etc/openstack_deploy/user_pg_neutron.yml`` and insert the following
parameters.
#. Set the ``neutron_plugin_type`` parameter to ``plumgrid``:
.. code-block:: yaml
# Neutron Plugins
neutron_plugin_type: plumgrid
#. In the same file, disable the installation of unnecessary ``neutron-agents``
in the ``neutron_services`` dictionary, by setting their ``service_en``
parameters to ``False``:
.. code-block:: yaml
neutron_metering: False
neutron_l3: False
neutron_lbaas: False
neutron_lbaasv2: False
neutron_vpnaas: False
PLUMgrid configurations
~~~~~~~~~~~~~~~~~~~~~~~
On the deployment host, create a PLUMgrid user variables file using the sample in
``/opt/plumgrid-ansible/etc/user_pg_vars.yml.example`` and copy it to
``/etc/openstack_deploy/user_pg_vars.yml``. You must configure the
following parameters.
#. Replace ``PG_REPO_HOST`` with a valid repo URL hosting PLUMgrid
packages:
.. code-block:: yaml
plumgrid_repo: PG_REPO_HOST
#. Replace ``INFRA_IPs`` with comma-separated infrastructure node IP addresses
and ``PG_VIP`` with an unallocated IP address on the management network. This
address is used to access the PLUMgrid UI:
.. code-block:: yaml
plumgrid_ip: INFRA_IPs
pg_vip: PG_VIP
#. Replace ``FABRIC_IFC`` with the name of the interface that will be used
for PLUMgrid Fabric.
.. note::
PLUMgrid Fabric must be an untagged unbridged raw interface such as ``eth0``.
.. code-block:: yaml
fabric_interface: FABRIC_IFC
#. Fill in the ``fabric_ifc_override`` and ``mgmt_override`` dicts with
node ``hostname: interface_name`` pairs to override the default interface
names (see the sketch after this list).
#. Obtain a PLUMgrid license file, rename it to ``pg_license``, and place it
under ``/var/lib/plumgrid/pg_license`` on the deployment host.
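The following is a sketch of the override dictionaries mentioned above; the
host names and interface names are illustrative assumptions.
.. code-block:: yaml
   fabric_ifc_override:
     compute1: eth1
     compute2: eth1
   mgmt_override:
     compute1: eth2
     compute2: eth2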
Gateway Hosts
~~~~~~~~~~~~~
PLUMgrid-enabled OpenStack clusters contain one or more gateway nodes
that are used for providing connectivity with external resources, such as
external networks, bare-metal servers, or network service
appliances. In addition to the `Management` and `Fabric` networks required
by PLUMgrid nodes, gateways require dedicated external interfaces referred
to as ``gateway_devs`` in the configuration files.
#. Add a ``gateway_hosts`` section to
``/etc/openstack_deploy/openstack_user_config.yml``:
.. code-block:: yaml
   gateway_hosts:
     gateway1:
       ip: GW01_IP_ADDRESS
     gateway2:
       ip: GW02_IP_ADDRESS
Replace ``*_IP_ADDRESS`` with the IP address of the ``br-mgmt`` container management
bridge on each Gateway host.
#. Add a ``gateway_hosts`` section to the end of the PLUMgrid ``user_pg_vars.yml``
file:
.. note::
This must contain hostnames and ``gateway_dev`` names for each
gateway in the cluster.
.. code-block:: yaml
   gateway_hosts:
     - hostname: gateway1
       gateway_devs:
         - eth3
         - eth4
Installation
~~~~~~~~~~~~
#. Run the PLUMgrid playbooks (do this before running the
``setup-openstack.yml`` playbook):
.. code-block:: shell-session
# cd /opt/plumgrid-ansible/plumgrid_playbooks
# openstack-ansible plumgrid_all.yml
.. note::
Contact PLUMgrid for an Installation Pack: info@plumgrid.com
This includes a full trial commercial license, packages, deployment documentation,
and automation scripts for the entire workflow described above.
--------------
.. include:: navigation.txt


@ -0,0 +1,33 @@
`Home <index.html>`_ OpenStack-Ansible Installation Guide
================================
Appendix B: Additional resources
================================
The following Ansible resources are useful to reference:
- `Ansible Documentation
<http://docs.ansible.com/ansible/>`_
- `Ansible Best Practices
<http://docs.ansible.com/ansible/playbooks_best_practices.html>`_
- `Ansible Configuration
<http://docs.ansible.com/ansible/intro_configuration.html>`_
The following OpenStack resources are useful to reference:
- `OpenStack Documentation <http://docs.openstack.org>`_
- `OpenStack SDK, CLI and API Documentation
<http://developer.openstack.org/>`_
- `OpenStack API Guide
<http://developer.openstack.org/api-guide/quick-start>`_
- `OpenStack Project Developer Documentation
<http://docs.openstack.org/developer/>`_
--------------
.. include:: navigation.txt


@ -0,0 +1,44 @@
`Home <index.html>`__ OpenStack-Ansible Installation Guide
===========================
Appendix D: Tips and tricks
===========================
Ansible forks
~~~~~~~~~~~~~
The default MaxSessions setting for the OpenSSH Daemon is 10. Each Ansible
fork makes use of a Session. By default, Ansible sets the number of forks to
5. However, you can increase the number of forks used in order to improve
deployment performance in large environments.
Note that a number of forks larger than 10 will cause issues for any playbooks
which make use of ``delegate_to`` or ``local_action`` in their tasks. It is
recommended that the number of forks is not raised when executing against the
control plane, as this is where delegation is most often used.
The number of forks used may be changed on a permanent basis by setting the
``ANSIBLE_FORKS`` environment variable in your ``.bashrc`` file, as shown in
the sketch after the following example. Alternatively, it can be changed for
a particular playbook execution by using the ``--forks`` CLI parameter. For
example, the following executes the nova playbook against the control plane
with 10 forks, then against the compute nodes with 50 forks.
.. code-block:: shell-session
# openstack-ansible --forks 10 os-nova-install.yml --limit compute_containers
# openstack-ansible --forks 50 os-nova-install.yml --limit compute_hosts
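To make the larger fork count permanent instead, a minimal sketch (the fork
count shown is an example):
.. code-block:: shell-session
   # echo 'export ANSIBLE_FORKS=20' >> ~/.bashrc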
For more information about forks, please see the following references:
* OpenStack-Ansible `Bug 1479812`_
* Ansible `forks`_ entry for ansible.cfg
* `Ansible Performance Tuning`_
.. _Bug 1479812: https://bugs.launchpad.net/openstack-ansible/+bug/1479812
.. _forks: http://docs.ansible.com/ansible/intro_configuration.html#forks
.. _Ansible Performance Tuning: https://www.ansible.com/blog/ansible-performance-tuning
--------------
.. include:: navigation.txt


@ -0,0 +1,142 @@
`Home <index.html>`_ OpenStack-Ansible Installation Guide
Configuring the Aodh service (optional)
=======================================
The Telemetry (ceilometer) alarming services perform the following functions:
- Create an API endpoint for controlling alarms.
- Allow you to set alarms based on threshold evaluation for a collection of
samples.
Aodh on OpenStack-Ansible requires a configured MongoDB backend prior to running
the Aodh playbooks. To specify the connection data, edit the
``user_variables.yml`` file (see section `Configuring the user data`_
below).
Setting up a MongoDB database for Aodh
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1. Install the MongoDB package:
.. code-block:: console
# apt-get install mongodb-server mongodb-clients python-pymongo
2. Edit the ``/etc/mongodb.conf`` file and change ``bind_ip`` to the
management interface of the node running Aodh:
.. code-block:: ini
bind_ip = 10.0.0.11
3. Edit the ``/etc/mongodb.conf`` file and enable ``smallfiles``:
.. code-block:: ini
smallfiles = true
4. Restart the MongoDB service:
.. code-block:: console
# service mongodb restart
5. Create the Aodh database:
.. code-block:: console
# mongo --host controller --eval 'db = db.getSiblingDB("aodh"); db.addUser({user: "aodh", pwd: "AODH_DBPASS", roles: [ "readWrite", "dbAdmin" ]});'
This returns:
.. code-block:: console
MongoDB shell version: 2.4.x
connecting to: controller:27017/test
{
"user" : "aodh",
"pwd" : "72f25aeee7ad4be52437d7cd3fc60f6f",
"roles" : [
"readWrite",
"dbAdmin"
],
"_id" : ObjectId("5489c22270d7fad1ba631dc3")
}
.. note::
Ensure ``AODH_DBPASS`` matches the
``aodh_container_db_password`` in the
``/etc/openstack_deploy/user_secrets.yml`` file. This
allows Ansible to configure the connection string within
the Aodh configuration files.
Configuring the hosts
~~~~~~~~~~~~~~~~~~~~~
Configure Aodh by specifying the ``metering-alarm_hosts`` directive in
the ``/etc/openstack_deploy/conf.d/aodh.yml`` file. The following shows
the example included in the
``etc/openstack_deploy/conf.d/aodh.yml.example`` file:
.. code-block:: yaml
# The infra nodes that the Aodh services run on.
metering-alarm_hosts:
infra1:
ip: 172.20.236.111
infra2:
ip: 172.20.236.112
infra3:
ip: 172.20.236.113
The ``metering-alarm_hosts`` group provides several services:
- An API server (``aodh-api``): Runs on one or more central management
servers to provide access to the alarm information in the
data store.
- An alarm evaluator (``aodh-evaluator``): Runs on one or more central
management servers to determine when alarms fire due to the
associated statistic trend crossing a threshold over a sliding
time window.
- A notification listener (``aodh-listener``): Runs on a central
management server and fires alarms based on defined rules against
events captured by ceilometer's notification agents.
- An alarm notifier (``aodh-notifier``): Runs on one or more central
management servers to allow alarms to be set based on the
threshold evaluation for a collection of samples.
These services communicate by using the OpenStack messaging bus. Only
the API server has access to the data store.
Configuring the user data
~~~~~~~~~~~~~~~~~~~~~~~~~
Specify the following options in
``/etc/openstack_deploy/user_variables.yml`` (a consolidated sketch follows
this list):
- The type of database backend Aodh uses. Currently, only MongoDB
is supported: ``aodh_db_type: mongodb``
- The IP address of the MongoDB host: ``aodh_db_ip: localhost``
- The port of the MongoDB service: ``aodh_db_port: 27017``
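A consolidated sketch of these options in
``/etc/openstack_deploy/user_variables.yml`` follows; the IP address is an
illustrative assumption and should point at your MongoDB host.
.. code-block:: yaml
   aodh_db_type: mongodb
   aodh_db_ip: 10.0.0.11
   aodh_db_port: 27017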
Run the ``os-aodh-install.yml`` playbook. If deploying a new OpenStack
(instead of only Aodh), run ``setup-openstack.yml``.
The Aodh playbooks run as part of this playbook.
--------------
.. include:: navigation.txt


@ -0,0 +1,222 @@
`Home <index.html>`_ OpenStack-Ansible Installation Guide
Configuring the Telemetry (ceilometer) service (optional)
=========================================================
The Telemetry module (ceilometer) performs the following functions:
- Efficiently polls metering data related to OpenStack services.
- Collects event and metering data by monitoring notifications sent from services.
- Publishes collected data to various targets including data stores and message queues.
.. note::
As of Liberty, the alarming functionality is in a separate component.
The metering-alarm containers handle the functionality through aodh
services. For configuring these services, see the aodh docs:
http://docs.openstack.org/developer/aodh/
Configure a MongoDB backend prior to running the ceilometer playbooks.
The connection data is in the ``user_variables.yml`` file
(see section `Configuring the user data`_ below).
Setting up a MongoDB database for ceilometer
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1. Install the MongoDB package:
.. code-block:: console
# apt-get install mongodb-server mongodb-clients python-pymongo
2. Edit the ``/etc/mongodb.conf`` file and change ``bind_ip`` to the management
interface:
.. code-block:: ini
bind_ip = 10.0.0.11
3. Edit the ``/etc/mongodb.conf`` file and enable ``smallfiles``:
.. code-block:: ini
smallfiles = true
4. Restart the MongoDB service:
.. code-block:: console
# service mongodb restart
5. Create the ceilometer database:
.. code-block:: console
# mongo --host controller --eval 'db = db.getSiblingDB("ceilometer"); db.addUser({user: "ceilometer", pwd: "CEILOMETER_DBPASS", roles: [ "readWrite", "dbAdmin" ]})'
This returns:
.. code-block:: console
MongoDB shell version: 2.4.x
connecting to: controller:27017/test
{
"user" : "ceilometer",
"pwd" : "72f25aeee7ad4be52437d7cd3fc60f6f",
"roles" : [
"readWrite",
"dbAdmin"
],
"_id" : ObjectId("5489c22270d7fad1ba631dc3")
}
.. note::
Ensure ``CEILOMETER_DBPASS`` matches the
``ceilometer_container_db_password`` in the
``/etc/openstack_deploy/user_secrets.yml`` file. This is
how Ansible knows how to configure the connection string
within the ceilometer configuration files.
Configuring the hosts
~~~~~~~~~~~~~~~~~~~~~
Configure ceilometer by specifying the ``metering-compute_hosts`` and
``metering-infra_hosts`` directives in the
``/etc/openstack_deploy/conf.d/ceilometer.yml`` file. Below is the
example included in the
``etc/openstack_deploy/conf.d/ceilometer.yml.example`` file:
.. code-block:: yaml
   # The compute host that the ceilometer compute agent runs on
   metering-compute_hosts:
     compute1:
       ip: 172.20.236.110
   # The infra node that the central agents run on
   metering-infra_hosts:
     infra1:
       ip: 172.20.236.111
     # Adding more than one host requires further configuration for ceilometer
     # to work properly.
     infra2:
       ip: 172.20.236.112
     infra3:
       ip: 172.20.236.113
The ``metering-compute_hosts`` houses the ``ceilometer-agent-compute``
service. It runs on each compute node and polls for resource
utilization statistics. The ``metering-infra_hosts`` houses several
services:
- A central agent (ceilometer-agent-central): Runs on a central
management server to poll for resource utilization statistics for
resources not tied to instances or compute nodes. Multiple agents
can be started to enable workload partitioning (See HA section
below).
- A notification agent (ceilometer-agent-notification): Runs on a
central management server(s) and consumes messages from the
message queue(s) to build event and metering data. Multiple
notification agents can be started to enable workload partitioning
(See HA section below).
- A collector (ceilometer-collector): Runs on central management
server(s) and dispatches data to a data store
or external consumer without modification.
- An API server (ceilometer-api): Runs on one or more central
management servers to provide data access from the data store.
Configuring the hosts for an HA deployment
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Ceilometer supports running the polling and notification agents in an
HA deployment.
The Tooz library provides the coordination within the groups of service
instances. Tooz can be used with several backends. At the time of this
writing, the following backends are supported:
- Zookeeper: Recommended solution by the Tooz project.
- Redis: Recommended solution by the Tooz project.
- Memcached: Recommended for testing.
.. important::
The OpenStack-Ansible project does not deploy these backends.
The backends must already exist before deploying the ceilometer service.
Achieve HA by configuring the proper directives in ``ceilometer.conf`` using
``ceilometer_ceilometer_conf_overrides`` in the ``user_variables.yml`` file.
The ceilometer admin guide[1] details the
options used in ``ceilometer.conf`` for HA deployment. The following is an
example of ``ceilometer_ceilometer_conf_overrides``:
.. code-block:: yaml
   ceilometer_ceilometer_conf_overrides:
     coordination:
       backend_url: "zookeeper://172.20.1.110:2181"
     notification:
       workload_partitioning: True
Configuring the user data
~~~~~~~~~~~~~~~~~~~~~~~~~
Specify the following configurations in the
``/etc/openstack_deploy/user_variables.yml`` file (a consolidated sketch
follows this list):
- The type of database backend ceilometer uses. Currently only
MongoDB is supported: ``ceilometer_db_type: mongodb``
- The IP address of the MongoDB host: ``ceilometer_db_ip:
localhost``
- The port of the MongoDB service: ``ceilometer_db_port: 27017``
- This configures swift to send notifications to the message bus:
``swift_ceilometer_enabled: False``
- This configures heat to send notifications to the message bus:
``heat_ceilometer_enabled: False``
- This configures cinder to send notifications to the message bus:
``cinder_ceilometer_enabled: False``
- This configures glance to send notifications to the message bus:
``glance_ceilometer_enabled: False``
- This configures nova to send notifications to the message bus:
``nova_ceilometer_enabled: False``
- This configures neutron to send notifications to the message bus:
``neutron_ceilometer_enabled: False``
- This configures keystone to send notifications to the message bus:
``keystone_ceilometer_enabled: False``
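A consolidated sketch of these options in the
``/etc/openstack_deploy/user_variables.yml`` file follows; the IP address is
an illustrative assumption, and the notification flags are shown with the
values listed above.
.. code-block:: yaml
   ceilometer_db_type: mongodb
   ceilometer_db_ip: 10.0.0.11
   ceilometer_db_port: 27017
   swift_ceilometer_enabled: False
   heat_ceilometer_enabled: False
   cinder_ceilometer_enabled: False
   glance_ceilometer_enabled: False
   nova_ceilometer_enabled: False
   neutron_ceilometer_enabled: False
   keystone_ceilometer_enabled: False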
Run the ``os-ceilometer-install.yml`` playbook. If deploying a new OpenStack
(instead of only ceilometer), run ``setup-openstack.yml``. The
ceilometer playbooks run as part of this playbook.
References
~~~~~~~~~~
[1] `Ceilometer Admin Guide`_
.. _Ceilometer Admin Guide: http://docs.openstack.org/admin-guide/telemetry-data-collection.html
--------------
.. include:: navigation.txt


@ -0,0 +1,97 @@
`Home <index.html>`_ OpenStack-Ansible Installation Guide
Configuring the Ceph client (optional)
======================================
Ceph is a massively scalable, open source, distributed storage system.
These links provide details on how to use Ceph with OpenStack:
* `Ceph Block Devices and OpenStack`_
* `Ceph - The De Facto Storage Backend for OpenStack`_ *(Hong Kong Summit
talk)*
* `OpenStack Config Reference - Ceph RADOS Block Device (RBD)`_
* `OpenStack-Ansible and Ceph Working Example`_
.. _Ceph Block Devices and OpenStack: http://docs.ceph.com/docs/master/rbd/rbd-openstack/
.. _Ceph - The De Facto Storage Backend for OpenStack: https://www.openstack.org/summit/openstack-summit-hong-kong-2013/session-videos/presentation/ceph-the-de-facto-storage-backend-for-openstack
.. _OpenStack Config Reference - Ceph RADOS Block Device (RBD): http://docs.openstack.org/liberty/config-reference/content/ceph-rados.html
.. _OpenStack-Ansible and Ceph Working Example: https://www.openstackfaq.com/openstack-ansible-ceph/
.. note::
Configuring Ceph storage servers is outside the scope of this documentation.
Authentication
~~~~~~~~~~~~~~
We recommend the ``cephx`` authentication method described in the `Ceph
config reference`_. OpenStack-Ansible enables ``cephx`` by default for
the Ceph client. You can choose to override this setting by using the
``cephx`` Ansible variable:
.. code-block:: yaml
cephx: False
Deploy Ceph on a trusted network if disabling ``cephx``.
.. _Ceph config reference: http://docs.ceph.com/docs/master/rados/configuration/auth-config-ref/
Configuration file overrides
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
OpenStack-Ansible provides the ``ceph_conf_file`` variable. This allows
you to specify configuration file options to override the default
Ceph configuration:
.. code-block:: yaml
   ceph_conf_file: |
     [global]
     fsid = 4037aa5f-abde-4378-9470-f73dbd6ceaba
     mon_initial_members = mon1.example.local,mon2.example.local,mon3.example.local
     mon_host = 172.29.244.151,172.29.244.152,172.29.244.153
     auth_cluster_required = cephx
     auth_service_required = cephx
     auth_client_required = cephx
The use of the ``ceph_conf_file`` variable is optional. By default, OpenStack-Ansible
obtains a copy of ``ceph.conf`` from one of your Ceph monitors. This
transfer of ``ceph.conf`` requires the OpenStack-Ansible deployment host public key
to be deployed to all of the Ceph monitors. More details are available
here: `Deploying SSH Keys`_.
The following minimal example configuration sets nova and glance
to use ceph pools: ``ephemeral-vms`` and ``images`` respectively.
The example uses ``cephx`` authentication, and requires existing ``glance`` and
``cinder`` accounts for ``images`` and ``ephemeral-vms`` pools.
.. code-block:: yaml
   glance_default_store: rbd
   nova_libvirt_images_rbd_pool: ephemeral-vms
.. _Deploying SSH Keys: targethosts-prepare.html#deploying-ssh-keys
Monitors
~~~~~~~~
The `Ceph Monitor`_ maintains a master copy of the cluster map.
OpenStack-Ansible provides the ``ceph_mons`` variable and expects a list of
IP addresses for the Ceph Monitor servers in the deployment:
.. code-block:: yaml
ceph_mons:
- 172.29.244.151
- 172.29.244.152
- 172.29.244.153
.. _Ceph Monitor: http://docs.ceph.com/docs/master/rados/configuration/mon-config-ref/
--------------
.. include:: navigation.txt


@ -0,0 +1,458 @@
`Home <index.html>`_ OpenStack-Ansible Installation Guide
Configuring the Block (cinder) storage service (optional)
=========================================================
By default, the Block (cinder) storage service installs on the host itself using
the LVM backend.
.. note::
While this is the default for cinder, using the LVM backend results in a
Single Point of Failure. As a result of the volume service being deployed
directly to the host, ``is_metal`` is ``true`` when using LVM.
NFS backend
~~~~~~~~~~~~
Edit ``/etc/openstack_deploy/openstack_user_config.yml`` and configure
the NFS client on each storage node if the NetApp backend is configured to use
an NFS storage protocol.
#. Add the ``cinder_backends`` stanza (which includes
``cinder_nfs_client``) under the ``container_vars`` stanza for
each storage node:
.. code-block:: yaml
   container_vars:
     cinder_backends:
       cinder_nfs_client:
#. Configure the location of the file that lists shares available to the
block storage service. This configuration file must include
``nfs_shares_config``:
.. code-block:: yaml
nfs_shares_config: SHARE_CONFIG
Replace ``SHARE_CONFIG`` with the location of the share
configuration file. For example, ``/etc/cinder/nfs_shares``.
#. Configure one or more NFS shares:
.. code-block:: yaml
shares:
- { ip: "NFS_HOST", share: "NFS_SHARE" }
Replace ``NFS_HOST`` with the IP address or hostname of the NFS
server, and the ``NFS_SHARE`` with the absolute path to an existing
and accessible NFS share.
Backup
~~~~~~
You can configure cinder to backup volumes to Object
Storage (swift). Enable the default
configuration to back up volumes to a swift installation
accessible within your environment. Alternatively, you can set
``cinder_service_backup_swift_url`` and other variables to
back up to an external swift installation.
#. Add or edit the following line in the
``/etc/openstack_deploy/user_variables.yml`` file and set the value
to ``True``:
.. code-block:: yaml
cinder_service_backup_program_enabled: True
#. By default, cinder uses the access credentials of the user
initiating the backup. Default values are set in the
``/opt/openstack-ansible/playbooks/roles/os_cinder/defaults/main.yml``
file. You can override those defaults by setting variables in
``/etc/openstack_deploy/user_variables.yml`` to change how cinder
performs backups. Add and edit any of the
following variables to the
``/etc/openstack_deploy/user_variables.yml`` file:
.. code-block:: yaml
...
cinder_service_backup_swift_auth: per_user
# Options include 'per_user' or 'single_user'. We default to
# 'per_user' so that backups are saved to a user's swift
# account.
cinder_service_backup_swift_url:
# This is your swift storage url when using 'per_user', or keystone
# endpoint when using 'single_user'. When using 'per_user', you
# can leave this as empty or as None to allow cinder-backup to
# obtain a storage url from environment.
cinder_service_backup_swift_auth_version: 2
cinder_service_backup_swift_user:
cinder_service_backup_swift_tenant:
cinder_service_backup_swift_key:
cinder_service_backup_swift_container: volumebackups
cinder_service_backup_swift_object_size: 52428800
cinder_service_backup_swift_retry_attempts: 3
cinder_service_backup_swift_retry_backoff: 2
cinder_service_backup_compression_algorithm: zlib
cinder_service_backup_metadata_version: 2
During installation of cinder, the backup service is configured.
Using Ceph for cinder backups
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
You can deploy Ceph to hold cinder volume backups.
To get started, set the ``cinder_service_backup_driver`` Ansible
variable:
.. code-block:: yaml
cinder_service_backup_driver: cinder.backup.drivers.ceph
Configure the Ceph user and the pool to use for backups. The defaults
are shown here:
.. code-block:: yaml
cinder_service_backup_ceph_user: cinder-backup
cinder_service_backup_ceph_pool: backups
Availability zones
~~~~~~~~~~~~~~~~~~
Create multiple availability zones to manage cinder storage hosts. Edit the
``/etc/openstack_deploy/openstack_user_config.yml`` and
``/etc/openstack_deploy/user_variables.yml`` files to set up
availability zones.
#. For each cinder storage host, configure the availability zone under
the ``container_vars`` stanza:
.. code-block:: yaml
cinder_storage_availability_zone: CINDERAZ
Replace ``CINDERAZ`` with a suitable name. For example
``cinderAZ_2``.
#. If more than one availability zone is created, configure the default
availability zone for all hosts by setting
``cinder_default_availability_zone`` in your
``/etc/openstack_deploy/user_variables.yml`` file:
.. code-block:: yaml
cinder_default_availability_zone: CINDERAZ_DEFAULT
Replace ``CINDERAZ_DEFAULT`` with a suitable name. For example,
``cinderAZ_1``. The default availability zone should be the same
for all cinder hosts.
OpenStack Dashboard (horizon) configuration for cinder
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
You can configure variables to set the behavior for cinder
volume management in OpenStack Dashboard (horizon).
By default, no horizon configuration is set.
#. The default destination availability zone is ``nova`` if you use
multiple availability zones and ``cinder_default_availability_zone``
has no definition. Volume creation with
horizon might fail if there is no availability zone named ``nova``.
Set ``cinder_default_availability_zone`` to an appropriate
availability zone name so that :guilabel:`Any availability zone`
works in horizon.
#. horizon does not populate the volume type by default. On the new
volume page, a request for the creation of a volume with the
default parameters fails. Set ``cinder_default_volume_type`` so
that a volume creation request without an explicit volume type
succeeds (see the sketch after this list).
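A minimal sketch of these two settings in
``/etc/openstack_deploy/user_variables.yml``; the availability zone name
matches the earlier examples, and the ``lvm`` volume type is an illustrative
assumption (the backend name defined under ``cinder_backends`` becomes the
volume type).
.. code-block:: yaml
   cinder_default_availability_zone: cinderAZ_1
   cinder_default_volume_type: lvm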
Configuring cinder to use LVM
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#. List the ``container_vars`` that contain the storage options for the target host.
.. note::
The vars related to the cinder availability zone and the
``limit_container_types`` are optional.
To configure an LVM backend, use the following example:
.. code-block:: yaml
   storage_hosts:
     Infra01:
       ip: 172.29.236.16
       container_vars:
         cinder_storage_availability_zone: cinderAZ_1
         cinder_default_availability_zone: cinderAZ_1
         cinder_backends:
           lvm:
             volume_backend_name: LVM_iSCSI
             volume_driver: cinder.volume.drivers.lvm.LVMVolumeDriver
             volume_group: cinder-volumes
             iscsi_ip_address: "{{ storage_address }}"
           limit_container_types: cinder_volume
To use another backend in a
container instead of bare metal, edit
the ``/etc/openstack_deploy/env.d/cinder.yml`` and remove the
``is_metal: true`` stanza under the ``cinder_volumes_container`` properties.
Configuring cinder to use Ceph
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In order for cinder to use Ceph, it is necessary to configure both
the API and the backend. When using any form of network storage
(iSCSI, NFS, Ceph) for cinder, the API containers can be considered
as backend servers. A separate storage host is not required.
In ``env.d/cinder.yml``, remove ``is_metal: true``.
#. List the target hosts on which to deploy the cinder API. We recommend
a minimum of three target hosts for this service:
.. code-block:: yaml
   storage-infra_hosts:
     infra1:
       ip: 172.29.236.101
     infra2:
       ip: 172.29.236.102
     infra3:
       ip: 172.29.236.103
To configure an RBD backend, utilize the following example:
.. code-block:: yaml
   container_vars:
     cinder_storage_availability_zone: cinderAZ_3
     cinder_default_availability_zone: cinderAZ_1
     cinder_backends:
       limit_container_types: cinder_volume
       volumes_hdd:
         volume_driver: cinder.volume.drivers.rbd.RBDDriver
         rbd_pool: volumes_hdd
         rbd_ceph_conf: /etc/ceph/ceph.conf
         rbd_flatten_volume_from_snapshot: 'false'
         rbd_max_clone_depth: 5
         rbd_store_chunk_size: 4
         rados_connect_timeout: -1
         volume_backend_name: volumes_hdd
         rbd_user: "{{ cinder_ceph_client }}"
         rbd_secret_uuid: "{{ cinder_ceph_client_uuid }}"
The following example sets cinder to use the ``cinder_volumes`` pool.
The example uses cephx authentication and requires an existing ``cinder``
account for the ``cinder_volumes`` pool.
In ``user_variables.yml``:
.. code-block:: yaml
ceph_mons:
- 172.29.244.151
- 172.29.244.152
- 172.29.244.153
In ``openstack_user_config.yml``:
.. code-block:: yaml
   storage_hosts:
     infra1:
       ip: 172.29.236.101
       container_vars:
         cinder_backends:
           limit_container_types: cinder_volume
           rbd:
             volume_group: cinder-volumes
             volume_driver: cinder.volume.drivers.rbd.RBDDriver
             volume_backend_name: rbd
             rbd_pool: cinder-volumes
             rbd_ceph_conf: /etc/ceph/ceph.conf
             rbd_user: cinder
     infra2:
       ip: 172.29.236.102
       container_vars:
         cinder_backends:
           limit_container_types: cinder_volume
           rbd:
             volume_group: cinder-volumes
             volume_driver: cinder.volume.drivers.rbd.RBDDriver
             volume_backend_name: rbd
             rbd_pool: cinder-volumes
             rbd_ceph_conf: /etc/ceph/ceph.conf
             rbd_user: cinder
     infra3:
       ip: 172.29.236.103
       container_vars:
         cinder_backends:
           limit_container_types: cinder_volume
           rbd:
             volume_group: cinder-volumes
             volume_driver: cinder.volume.drivers.rbd.RBDDriver
             volume_backend_name: rbd
             rbd_pool: cinder-volumes
             rbd_ceph_conf: /etc/ceph/ceph.conf
             rbd_user: cinder
This link provides a complete working example of Ceph setup and
integration with cinder (nova and glance included):
* `OpenStack-Ansible and Ceph Working Example`_
.. _OpenStack-Ansible and Ceph Working Example: https://www.openstackfaq.com/openstack-ansible-ceph/
Configuring cinder to use a NetApp appliance
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
To use a NetApp storage appliance back end, edit the
``/etc/openstack_deploy/openstack_user_config.yml`` file and configure
each storage node that will use it.
.. note::
Ensure that the NAS Team enables ``httpd.admin.access``.
#. Add the ``netapp`` stanza under the ``cinder_backends`` stanza for
each storage node:
.. code-block:: yaml
cinder_backends:
netapp:
The options in subsequent steps fit under the ``netapp`` stanza.
The backend name is arbitrary and becomes a volume type within cinder.
#. Configure the storage family:
.. code-block:: yaml
netapp_storage_family: STORAGE_FAMILY
Replace ``STORAGE_FAMILY`` with ``ontap_7mode`` for Data ONTAP
operating in 7-mode or ``ontap_cluster`` for Data ONTAP operating as
a cluster.
#. Configure the storage protocol:
.. code-block:: yaml
netapp_storage_protocol: STORAGE_PROTOCOL
Replace ``STORAGE_PROTOCOL`` with ``iscsi`` for iSCSI or ``nfs``
for NFS.
For the NFS protocol, specify the location of the
configuration file that lists the shares available to cinder:
.. code-block:: yaml
nfs_shares_config: SHARE_CONFIG
Replace ``SHARE_CONFIG`` with the location of the share
configuration file. For example, ``/etc/cinder/nfs_shares``.
#. Configure the server:
.. code-block:: yaml
netapp_server_hostname: SERVER_HOSTNAME
Replace ``SERVER_HOSTNAME`` with the hostnames of the NetApp
controllers.
#. Configure the server API port:
.. code-block:: yaml
netapp_server_port: PORT_NUMBER
Replace ``PORT_NUMBER`` with 80 for HTTP or 443 for HTTPS.
#. Configure the server credentials:
.. code-block:: yaml
netapp_login: USER_NAME
netapp_password: PASSWORD
Replace ``USER_NAME`` and ``PASSWORD`` with the appropriate
values.
#. Select the NetApp driver:
.. code-block:: yaml
volume_driver: cinder.volume.drivers.netapp.common.NetAppDriver
#. Configure the volume back end name:
.. code-block:: yaml
volume_backend_name: BACKEND_NAME
Replace ``BACKEND_NAME`` with a value that provides a hint
for the cinder scheduler. For example, ``NETAPP_iSCSI``.
#. Ensure the ``openstack_user_config.yml`` configuration is
accurate:
.. code-block:: yaml
   storage_hosts:
     Infra01:
       ip: 172.29.236.16
       container_vars:
         cinder_backends:
           limit_container_types: cinder_volume
           netapp:
             netapp_storage_family: ontap_7mode
             netapp_storage_protocol: nfs
             netapp_server_hostname: 111.222.333.444
             netapp_server_port: 80
             netapp_login: openstack_cinder
             netapp_password: password
             volume_driver: cinder.volume.drivers.netapp.common.NetAppDriver
             volume_backend_name: NETAPP_NFS
For ``netapp_server_hostname``, specify the IP address of the Data
ONTAP server. Include ``iscsi`` or ``nfs`` for
``netapp_storage_protocol`` depending on the configuration. Set
``netapp_server_port`` to 80 if using HTTP or 443 if using HTTPS.
The ``cinder-volume.yml`` playbook will automatically install the
``nfs-common`` package across the hosts when transitioning from an LVM to a
NetApp back end.
--------------
.. include:: navigation.txt


@ -0,0 +1,30 @@
`Home <index.html>`_ OpenStack-Ansible Installation Guide
Checking the integrity of your configuration files
==================================================
Execute the following steps before running any playbook:
#. Ensure all files edited in ``/etc/`` comply with Ansible
   YAML syntax. Guidelines can be found here:
`<http://docs.ansible.com/ansible/YAMLSyntax.html>`_
#. Check the integrity of your YAML files:
.. note:: Here is an online linter: `<http://www.yamllint.com/>`_
#. Run your command with ``syntax-check``:
.. code-block:: shell-session
# openstack-ansible setup-infrastructure.yml --syntax-check
#. Recheck that all indentation is correct.
.. note::
The syntax of the configuration files can be correct
while not being meaningful for OpenStack-Ansible.
--------------
.. include:: navigation.txt

View File

@ -0,0 +1,40 @@
`Home <index.html>`_ OpenStack-Ansible Installation Guide
Configuring service credentials
===============================
Configure credentials for each service in the
``/etc/openstack_deploy/*_secrets.yml`` files. Consider using `Ansible
Vault <http://docs.ansible.com/playbooks_vault.html>`_ to increase
security by encrypting any files containing credentials.
Adjust permissions on these files to restrict access by non-privileged
users.
Note that the following option configures the password for the web
interfaces:
- ``keystone_auth_admin_password`` configures the ``admin`` tenant
password for both the OpenStack API and dashboard access.
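
For example, a minimal sketch of how this variable appears in
``/etc/openstack_deploy/user_secrets.yml`` (the value is illustrative only;
generate a real one with the script described below):

.. code-block:: yaml

   keystone_auth_admin_password: 41ceb0c230d7f2cc5cd1cf6a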
.. note::
We recommend using the ``pw-token-gen.py`` script to generate random
values for the variables in each file that contains service credentials:
.. code-block:: shell-session
# cd /opt/openstack-ansible/scripts
# python pw-token-gen.py --file /etc/openstack_deploy/user_secrets.yml
To regenerate existing passwords, add the ``--regen`` flag.
.. warning::
The playbooks do not currently manage changing passwords in an existing
environment. Changing passwords and re-running the playbooks will fail
and may break your OpenStack environment.
--------------
.. include:: navigation.txt

View File

@ -0,0 +1,44 @@
`Home <index.html>`__ OpenStack-Ansible Installation Guide
Configuring Active Directory Federation Services (ADFS) 3.0 as an identity provider
===================================================================================
To install ADFS:
* `Prerequisites for ADFS from Microsoft Technet <https://technet.microsoft.com/library/bf7f9cf4-6170-40e8-83dd-e636cb4f9ecb>`_
* `ADFS installation procedure from Microsoft Technet <https://technet.microsoft.com/en-us/library/dn303423>`_
Configuring ADFS
~~~~~~~~~~~~~~~~
#. Ensure the ADFS server trusts the service provider's (SP) keystone
   certificate. We recommend having the ADFS CA (or a
   public CA) sign a certificate request for the keystone service.
#. In the ADFS Management Console, choose ``Add Relying Party Trust``.
#. Select ``Import data about the relying party published online or on a
local network`` and enter the URL for the SP Metadata (
for example, ``https://<SP_IP_ADDRESS or DNS_NAME>:5000/Shibboleth.sso/Metadata``)
.. note::
ADFS may give a warning message stating that it skipped some of the
content gathered from the metadata because it is not supported by ADFS.
#. Continuing the wizard, select ``Permit all users to access this
relying party``.
#. In the ``Add Transform Claim Rule Wizard``, select ``Pass Through or
Filter an Incoming Claim``.
#. Name the rule (for example, ``Pass Through UPN``) and select the ``UPN``
Incoming claim type.
#. Click :guilabel:`OK` to apply the rule and finalize the setup.
References
~~~~~~~~~~
* http://blogs.technet.com/b/rmilne/archive/2014/04/28/how-to-install-adfs-2012-r2-for-office-365.aspx
* http://blog.kloud.com.au/2013/08/14/powershell-deployment-of-web-application-proxy-and-adfs-in-under-10-minutes/
* https://ethernuno.wordpress.com/2014/04/20/install-adds-on-windows-server-2012-r2-with-powershell/
--------------
.. include:: navigation.txt

View File

@ -0,0 +1,79 @@
`Home <index.html>`__ OpenStack-Ansible Installation Guide
Configure Identity service (keystone) as a federated identity provider
======================================================================
The Identity Provider (IdP) configuration for keystone provides a
dictionary attribute with the key ``keystone_idp``. The following is a
complete example:
.. code::
keystone_idp:
certfile: "/etc/keystone/ssl/idp_signing_cert.pem"
keyfile: "/etc/keystone/ssl/idp_signing_key.pem"
self_signed_cert_subject: "/C=US/ST=Texas/L=San Antonio/O=IT/CN={{ external_lb_vip_address }}"
regen_cert: false
idp_entity_id: "{{ keystone_service_publicurl_v3 }}/OS-FEDERATION/saml2/idp"
idp_sso_endpoint: "{{ keystone_service_publicurl_v3 }}/OS-FEDERATION/saml2/sso"
idp_metadata_path: /etc/keystone/saml2_idp_metadata.xml
service_providers:
- id: "sp_1"
auth_url: https://example.com:5000/v3/OS-FEDERATION/identity_providers/idp/protocols/saml2/auth
sp_url: https://example.com:5000/Shibboleth.sso/SAML2/ECP
organization_name: example_company
organization_display_name: Example Corp.
organization_url: example.com
contact_company: example_company
contact_name: John
contact_surname: Smith
contact_email: jsmith@example.com
contact_telephone: 555-55-5555
contact_type: technical
The following list is a reference of allowed settings:
* ``certfile`` defines the location and filename of the SSL certificate that
the IdP uses to sign assertions. This file must be in a location that is
accessible to the keystone system user.
* ``keyfile`` defines the location and filename of the SSL private key that
the IdP uses to sign assertions. This file must be in a location that is
accessible to the keystone system user.
* ``self_signed_cert_subject`` is the subject in the SSL signing
certificate. The common name of the certificate
must match the hostname configuration in the service provider(s) for
this IdP.
* ``regen_cert`` by default is set to ``False``. When set to ``True``, the
next Ansible run replaces the existing signing certificate with a new one. This
setting is added as a convenience mechanism to renew a certificate when it
is close to its expiration date.
* ``idp_entity_id`` is the entity ID. The service providers
use this as a unique identifier for each IdP. ``<keystone-public-endpoint>/OS-FEDERATION/saml2/idp``
is the value we recommend for this setting.
* ``idp_sso_endpoint`` is the single sign-on endpoint for this IdP.
``<keystone-public-endpoint>/OS-FEDERATION/saml2/sso`` is the value
we recommend for this setting.
* ``idp_metadata_path`` is the location and filename where the metadata for
this IdP is cached. The keystone system user must have access to this
location.
* ``service_providers`` is a list of the known service providers (SP) that
  use the keystone instance as an identity provider. For each SP, provide
  three values: ``id`` as a unique identifier,
  ``auth_url`` as the authentication endpoint of the SP, and ``sp_url`` as
  the endpoint for posting SAML2 assertions (see the sketch after this list
  for registering more than one SP).
* ``organization_name``, ``organization_display_name``, ``organization_url``,
  ``contact_company``, ``contact_name``, ``contact_surname``,
  ``contact_email``, ``contact_telephone``, and ``contact_type`` are
  settings that describe the identity provider. These settings are all optional.
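
To register more than one SP, append further entries to the
``service_providers`` list. The following is a hedged sketch with an
illustrative second provider:

.. code-block:: yaml

   service_providers:
     - id: "sp_1"
       auth_url: https://example.com:5000/v3/OS-FEDERATION/identity_providers/idp/protocols/saml2/auth
       sp_url: https://example.com:5000/Shibboleth.sso/SAML2/ECP
     - id: "sp_2"
       auth_url: https://sp2.example.org:5000/v3/OS-FEDERATION/identity_providers/idp/protocols/saml2/auth
       sp_url: https://sp2.example.org:5000/Shibboleth.sso/SAML2/ECP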
--------------
.. include:: navigation.txt

View File

@ -0,0 +1,164 @@
`Home <index.html>`__ OpenStack-Ansible Installation Guide
Configure Identity Service (keystone) Domain-Project-Group-Role mappings
========================================================================
The following is an example service provider (SP) mapping configuration
for an ADFS identity provider (IdP):
.. code-block:: yaml
federated_identities:
- domain: Default
project: fedproject
group: fedgroup
role: _member_
Each IdP trusted by an SP must have the following configuration:
#. ``project``: The project that federation users have access to.
   If the project does not already exist, Ansible creates it in the
   domain named by ``domain``.
#. ``group``: The keystone group that federation users belong to.
   If the group does not already exist, Ansible creates it in the
   domain named by ``domain``.
#. ``role``: The role that federation users use in that project.
   Ansible creates the role if it does not already exist.
#. ``domain``: The domain where the ``project`` lives and where
   roles are assigned. Ansible creates the domain if it does not already exist.
Ansible implements the equivalent of the following OpenStack CLI commands:
.. code-block:: shell-session
# if the domain does not already exist
openstack domain create Default
# if the group does not already exist
openstack group create fedgroup --domain Default
# if the role does not already exist
openstack role create _member_
# if the project does not already exist
openstack project create --domain Default fedproject
# map the role to the project and user group in the domain
openstack role add --project fedproject --group fedgroup _member_
To add more mappings, add options to the list.
For example:
.. code-block:: yaml
federated_identities:
- domain: Default
project: fedproject
group: fedgroup
role: _member_
- domain: Default
project: fedproject2
group: fedgroup2
role: _member_
Identity service federation attribute mapping
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Attribute mapping adds a set of rules that map federation attributes to
keystone users and groups. Each IdP has one mapping specified per protocol.
The same mapping object can be reused by multiple combinations of IdP and
protocol.
The details of how the mapping engine works, the schema, and various rule
examples are in the `keystone developer documentation <http://docs.openstack.org/developer/keystone/mapping_combinations.html>`_.
For example, SP attribute mapping configuration for an ADFS IdP:
.. code-block:: yaml
mapping:
name: adfs-IdP-mapping
rules:
- remote:
- type: upn
local:
- group:
name: fedgroup
domain:
name: Default
- user:
name: '{0}'
attributes:
- name: 'http://schemas.xmlsoap.org/ws/2005/05/identity/claims/upn'
id: upn
Each IdP for an SP needs to be set up with a mapping. This tells the SP how
to interpret the attributes provided to the SP from the IdP.
In this example, the IdP publishes the ``upn`` attribute. As this
is not in the standard Shibboleth attribute map (see
``/etc/shibboleth/attribute-map.xml`` in the keystone containers), the configuration
of the IdP has extra mapping through the ``attributes`` dictionary.
The ``mapping`` dictionary is a YAML representation similar to the
keystone mapping property, which Ansible uploads. The above mapping
produces the following in keystone:
.. code-block:: shell-session
root@aio1_keystone_container-783aa4c0:~# openstack mapping list
+------------------+
| ID |
+------------------+
| adfs-IdP-mapping |
+------------------+
root@aio1_keystone_container-783aa4c0:~# openstack mapping show adfs-IdP-mapping
+-------+---------------------------------------------------------------------------------------------------------------------------------------+
| Field | Value |
+-------+---------------------------------------------------------------------------------------------------------------------------------------+
| id | adfs-IdP-mapping |
| rules | [{"remote": [{"type": "upn"}], "local": [{"group": {"domain": {"name": "Default"}, "name": "fedgroup"}}, {"user": {"name": "{0}"}}]}] |
+-------+---------------------------------------------------------------------------------------------------------------------------------------+
root@aio1_keystone_container-783aa4c0:~# openstack mapping show adfs-IdP-mapping | awk -F\| '/rules/ {print $3}' | python -mjson.tool
[
{
"remote": [
{
"type": "upn"
}
],
"local": [
{
"group": {
"domain": {
"name": "Default"
},
"name": "fedgroup"
}
},
{
"user": {
"name": "{0}"
}
}
]
}
]
The interpretation of the above mapping rule is that any federation user
authenticated by the IdP maps to an ``ephemeral`` (non-existent) user in
keystone. The user is a member of a group named ``fedgroup``. This is
in a domain called ``Default``. The user's ID and Name (federation uses
the same value for both properties) for all OpenStack services is
the value of ``upn``.
--------------
.. include:: navigation.txt

View File

@ -0,0 +1,63 @@
`Home <index.html>`__ OpenStack-Ansible Installation Guide
Identity Service (keystone) service provider background
=======================================================
In OpenStack-Ansible, the Identity Service (keystone) is set up to
use Apache with ``mod_wsgi``. The additional configuration of
keystone as a federation service provider adds Apache ``mod_shib``
and configures it to respond to client requests for specific
locations.
.. note::
There are alternative methods of implementing
federation, but at this time only SAML2-based federation using
the Shibboleth SP is implemented in OpenStack-Ansible.
When requests are sent to those locations, Apache hands off the
request to the ``shibd`` service.
.. note::
Handing off happens only with requests pertaining to authentication.
Handle the ``shibd`` service configuration through
the following files in ``/etc/shibboleth/`` in the keystone
containers:
* ``sp-cert.pem``, ``sp-key.pem``: The ``os-keystone-install.yml`` playbook
uses these files generated on the first keystone container to replicate
them to the other keystone containers. The SP and the IdP use these files
as signing credentials in communications.
* ``shibboleth2.xml``: The ``os-keystone-install.yml`` playbook writes the
  file's contents based on the structure of the ``keystone_sp`` attribute
  in the ``/etc/openstack_deploy/user_variables.yml`` file. It contains
  the list of trusted IdPs, the entityID by which the SP is known,
  and other facilitating configuration.
* ``attribute-map.xml``: The ``os-keystone-install.yml`` playbook writes
  the file's contents based on the structure of the ``keystone_sp``
  attribute in the ``/etc/openstack_deploy/user_variables.yml`` file.
  It contains the default attribute mappings that work for any basic
  Shibboleth-type IdP setup, and also any additional attribute mappings
  set out in the structure of the ``keystone_sp`` attribute.
* ``shibd.logger``: This file is left alone by Ansible. It is useful
when troubleshooting issues with federated authentication, or
when discovering what attributes published by an IdP
are not currently being understood by your SP's attribute map.
To enable debug logging, change ``log4j.rootCategory=INFO`` to
``log4j.rootCategory=DEBUG`` at the top of the file. The
log file is output to ``/var/log/shibboleth/shibd.log``.
References
----------
* http://docs.openstack.org/developer/keystone/configure_federation.html
* http://docs.openstack.org/developer/keystone/extensions/shibboleth.html
* https://wiki.shibboleth.net/confluence/display/SHIB2/NativeSPConfiguration
--------------
.. include:: navigation.txt

View File

@ -0,0 +1,120 @@
`Home <index.html>`__ OpenStack-Ansible Installation Guide
Configure Identity Service (keystone) as a federated service provider
=====================================================================
The following settings must be set to configure a service provider (SP):
#. ``keystone_public_endpoint`` is automatically set by default
to the public endpoint's URI. This performs redirections and
ensures token references refer to the public endpoint.
#. ``horizon_keystone_endpoint`` is automatically set by default
to the public v3 API endpoint URL for keystone. Web-based single
sign-on for horizon requires the use of the keystone v3 API.
The value for this must use the same DNS name or IP address
registered in the SSL certificate used for the endpoint.
#. If the IdP is ADFS, an HTTPS public endpoint for keystone is required.
   Either keystone itself or an SSL-offloading load balancer provides the
   endpoint.
#. Set ``keystone_service_publicuri_proto`` to https.
   This ensures that keystone publishes https in its references
   and that Shibboleth expects SSL URLs in the assertions
   (otherwise it invalidates them). A consolidated sketch of these
   endpoint-related settings appears after this list.
#. ADFS requires that a trusted SP have a trusted certificate that
is not self-signed.
#. Ensure the endpoint URI and the certificate match when using SSL for the
keystone endpoint. For example, if the certificate does not have
the IP address of the endpoint, then the endpoint must be published with
the appropriate name registered on the certificate. When
using a DNS name for the keystone endpoint, both
``keystone_public_endpoint`` and ``horizon_keystone_endpoint`` must
be set to use the DNS name.
#. ``horizon_endpoint_type`` must be set to ``publicURL`` to ensure that
horizon uses the public endpoint for all its references and
queries.
#. ``keystone_sp`` is a dictionary attribute that contains various
   settings describing both the SP and the IdPs it trusts. For example:
.. code-block:: yaml
keystone_sp:
cert_duration_years: 5
trusted_dashboard_list:
- "https://{{ external_lb_vip_address }}/auth/websso/"
trusted_idp_list:
- name: 'testshib-idp'
entity_ids:
- 'https://idp.testshib.org/idp/shibboleth'
metadata_uri: 'http://www.testshib.org/metadata/testshib-providers.xml'
metadata_file: 'metadata-testshib-idp.xml'
metadata_reload: 1800
federated_identities:
- domain: Default
project: fedproject
group: fedgroup
role: _member_
protocols:
- name: saml2
mapping:
name: testshib-idp-mapping
rules:
- remote:
- type: eppn
local:
- group:
name: fedgroup
domain:
name: Default
- user:
name: '{0}'
#. ``cert_duration_years`` designates the valid duration for the SP's
signing certificate (for example, ``/etc/shibboleth/sp-key.pem``).
#. ``trusted_dashboard_list`` designates the list of trusted URLs that
keystone accepts redirects for web single sign-on. This
list contains all URLs on which horizon is presented,
suffixed by ``/auth/websso/``. This is the path for horizon's WebSSO
component.
#. ``trusted_idp_list`` is a dictionary attribute containing the list
of settings which pertain to each trusted IdP for the SP.
#. ``trusted_idp_list.name`` is the IdP's name. It is configured in
   keystone and listed in horizon's login selection.
#. ``entity_ids`` is a list of reference entity IDs. These specify the
   redirection of the login request to the SP when authenticating to
   the IdP.
#. ``metadata_uri`` is the location of the IdP's metadata. This provides
the SP with the signing key and all the IdP's supported endpoints.
#. ``metadata_file`` is the file name of the local cached version of
the metadata which will be stored in ``/var/cache/shibboleth/``.
#. ``metadata_reload`` is the number of seconds between metadata
refresh polls.
#. ``federated_identities`` is a mapping list of domain, project, group, and role.
See `Configure Identity Service (keystone) Domain-Project-Group-Role mappings <configure-federation-mapping.html>`_
for more information.
#. ``protocols`` is a list of protocols supported for the IdP and the set
   of mappings and attributes for each protocol. Only the ``saml2``
   protocol is currently supported.
#. ``mapping`` is the local to remote mapping configuration for federated
users. For more information, see `Configure Identity Service (keystone) Domain-Project-Group-Role mappings. <configure-federation-mapping.html>`_
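
The endpoint-related settings from the list above typically live together in
the ``/etc/openstack_deploy/user_variables.yml`` file. The following is a
hedged sketch that assumes a public DNS name of ``cloud.example.com``
registered on the SSL certificate:

.. code-block:: yaml

   keystone_public_endpoint: "https://cloud.example.com:5000"
   horizon_keystone_endpoint: "https://cloud.example.com:5000/v3"
   keystone_service_publicuri_proto: https
   horizon_endpoint_type: publicURL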
--------------
.. include:: navigation.txt

View File

@ -0,0 +1,326 @@
`Home <index.html>`__ OpenStack-Ansible Installation Guide
Identity Service to Identity Service federation example use-case
================================================================
The following are the configuration steps necessary to reproduce the
federation scenario described below:
* Federate Cloud 1 and Cloud 2.
* Create mappings between Cloud 1 Group A and Cloud 2 Project X and Role R.
* Create mappings between Cloud 1 Group B and Cloud 2 Project Y and Role S.
* Create User U in Cloud 1, assign to Group A.
* Authenticate with Cloud 2 and confirm scope to Role R in Project X.
* Assign User U to Group B, confirm scope to Role S in Project Y.
Keystone identity provider (IdP) configuration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The following is the configuration for the keystone IdP instance:
.. code::
keystone_idp:
certfile: "/etc/keystone/ssl/idp_signing_cert.pem"
keyfile: "/etc/keystone/ssl/idp_signing_key.pem"
self_signed_cert_subject: "/C=US/ST=Texas/L=San Antonio/O=IT/CN={{ external_lb_vip_address }}"
regen_cert: false
idp_entity_id: "{{ keystone_service_publicurl_v3 }}/OS-FEDERATION/saml2/idp"
idp_sso_endpoint: "{{ keystone_service_publicurl_v3 }}/OS-FEDERATION/saml2/sso"
idp_metadata_path: /etc/keystone/saml2_idp_metadata.xml
service_providers:
- id: "cloud2"
auth_url: https://cloud2.com:5000/v3/OS-FEDERATION/identity_providers/cloud1/protocols/saml2/auth
sp_url: https://cloud2.com:5000/Shibboleth.sso/SAML2/ECP
In this example, the last three lines are specific to a particular
installation, as they reference the service provider cloud (referred to as
"Cloud 2" in the original scenario). In the example, the
cloud is located at https://cloud2.com, and the unique ID for this cloud
is "cloud2".
.. note::
In the ``auth_url`` there is a reference to the IdP cloud (or
"Cloud 1"), as known by the service provider (SP). The ID used for the IdP
cloud in this example is "cloud1".
Keystone service provider (SP) configuration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The configuration for keystone SP needs to define the remote-to-local user mappings.
The following is the complete configuration:
.. code::
keystone_sp:
cert_duration_years: 5
trusted_dashboard_list:
- "https://{{ external_lb_vip_address }}/auth/websso/"
trusted_idp_list:
- name: "cloud1"
entity_ids:
- 'https://cloud1.com:5000/v3/OS-FEDERATION/saml2/idp'
metadata_uri: 'https://cloud1.com:5000/v3/OS-FEDERATION/saml2/metadata'
metadata_file: 'metadata-cloud1.xml'
metadata_reload: 1800
federated_identities:
- domain: Default
project: X
role: R
group: federated_group_1
- domain: Default
project: Y
role: S
group: federated_group_2
protocols:
- name: saml2
mapping:
name: cloud1-mapping
rules:
- remote:
- any_one_of:
- A
type: openstack_project
local:
- group:
name: federated_group_1
domain:
name: Default
- remote:
- any_one_of:
- B
type: openstack_project
local:
- group:
name: federated_group_2
domain:
name: Default
attributes:
- name: openstack_user
id: openstack_user
- name: openstack_roles
id: openstack_roles
- name: openstack_project
id: openstack_project
- name: openstack_user_domain
id: openstack_user_domain
- name: openstack_project_domain
id: openstack_project_domain
``cert_duration_years`` is for the self-signed certificate used by
Shibboleth. Only implement the ``trusted_dashboard_list`` if horizon SSO
login is necessary. When given, it works as a security measure,
as keystone will only redirect to these URLs.
Configure the IdPs known to SP in ``trusted_idp_list``. In
this example there is only one IdP, the "Cloud 1". Configure "Cloud 1" with
the ID "cloud1". This matches the reference in the IdP configuration shown in the
previous section.
The ``entity_ids`` is given the unique URL that represents the "Cloud 1" IdP.
For this example, it is hosted at: https://cloud1.com.
The ``metadata_file`` needs to be different for each IdP. This is
a filename in the keystone containers of the SP cloud that holds cached
metadata for each registered IdP.
The ``federated_identities`` list defines the sets of identities in use
for federated users. In this example there are two sets, Project X/Role R
and Project Y/Role S. A user group is created for each set.
The ``protocols`` section is where the federation protocols are specified.
The only supported protocol is ``saml2``.
The ``mapping`` dictionary is where the assignments of remote to local
users is defined. A keystone mapping is given a ``name`` and a set of
``rules`` that keystone applies to determine how to map a given user. Each
mapping rule has a ``remote`` and a ``local`` component.
The ``remote`` part of the mapping rule specifies the criteria for the remote
user based on the attributes exposed by the IdP in the SAML2 assertion. The
use case for this scenario calls for mapping users in "Group A" and "Group B",
but the group or groups a user belongs to are not exported in the SAML2
assertion. To make the example work, groups A and B from the use case are
treated as projects, which the IdP exports in the assertion under the
``openstack_project`` attribute.
The two rules above select the corresponding project using the ``any_one_of``
selector.
The ``local`` part of the mapping rule specifies how keystone represents
the remote user in the local SP cloud. Configuring the two federated identities
with their own user group maps the user to the
corresponding group. This exposes the correct domain, project, and
role.
.. note::
Keystone creates an ephemeral user in the specified group, as
you cannot specify user names.
The final setting of the configuration defines the SAML2 ``attributes`` that
the IdP exports. For a keystone IdP, these are the five attributes
shown above. Configuring these attributes in the Shibboleth service
ensures they are available to use in the mappings.
Reviewing or modifying the configuration with the OpenStack client
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Use the OpenStack command-line client to review or modify an
existing federation configuration. The following commands can be used with
the previous configuration.
Service providers on the identity provider
------------------------------------------
To see the list of known SPs:
.. code::
$ openstack service provider list
+--------+---------+-------------+-----------------------------------------------------------------------------------------+
| ID | Enabled | Description | Auth URL |
+--------+---------+-------------+-----------------------------------------------------------------------------------------+
| cloud2 | True | None | https://cloud2.com:5000/v3/OS-FEDERATION/identity_providers/cloud1/protocols/saml2/auth |
+--------+---------+-------------+-----------------------------------------------------------------------------------------+
To view the information for a specific SP:
.. code::
$ openstack service provider show cloud2
+--------------------+----------------------------------------------------------------------------------------------+
| Field | Value |
+--------------------+----------------------------------------------------------------------------------------------+
| auth_url | http://cloud2.com:5000/v3/OS-FEDERATION/identity_providers/keystone-idp/protocols/saml2/auth |
| description | None |
| enabled | True |
| id | cloud2 |
| relay_state_prefix | ss:mem: |
| sp_url | http://cloud2.com:5000/Shibboleth.sso/SAML2/ECP |
+--------------------+----------------------------------------------------------------------------------------------+
To make modifications, use the ``set`` command. The following are the available
options for this command:
.. code::
$ openstack service provider set
usage: openstack service provider set [-h] [--auth-url <auth-url>]
[--description <description>]
[--service-provider-url <sp-url>]
[--enable | --disable]
<service-provider>
Identity providers on the service provider
------------------------------------------
To see the list of known IdPs:
.. code::
$ openstack identity provider list
+----------------+---------+-------------+
| ID | Enabled | Description |
+----------------+---------+-------------+
| cloud1 | True | None |
+----------------+---------+-------------+
To view the information for a specific IdP:
.. code::
$ openstack identity provider show keystone-idp
+-------------+--------------------------------------------------------+
| Field | Value |
+-------------+--------------------------------------------------------+
| description | None |
| enabled | True |
| id | cloud1 |
| remote_ids | [u'http://cloud1.com:5000/v3/OS-FEDERATION/saml2/idp'] |
+-------------+--------------------------------------------------------+
To make modifications, use the ``set`` command. The following are the available
options for this command:
.. code::
$ openstack identity provider set
usage: openstack identity provider set [-h]
[--remote-id <remote-id> | --remote-id-file <file-name>]
[--enable | --disable]
<identity-provider>
Federated identities on the service provider
--------------------------------------------
You can use the OpenStack command-line client to view or modify
the domain, project, role, group, and user entities created for
federation, as these are regular keystone entities. For example:
.. code::
$ openstack domain list
$ openstack project list
$ openstack role list
$ openstack group list
$ openstack user list
Add the ``--domain`` option when using a domain other than the default.
Use the ``set`` option to modify these entities.
Federation mappings
-------------------
To view the list of mappings:
.. code::
$ openstack mapping list
+------------------+
| ID |
+------------------+
| cloud1-mapping |
+------------------+
To view a mapping in detail:
.. code::
$ openstack mapping show cloud1-mapping
+-------+--------------------------------------------------------------------------------------------------------------------------------------------------+
| Field | Value |
+-------+--------------------------------------------------------------------------------------------------------------------------------------------------+
| id | keystone-idp-mapping-2 |
| rules | [{"remote": [{"type": "openstack_project", "any_one_of": ["A"]}], "local": [{"group": {"domain": {"name": "Default"}, "name": |
| | "federated_group_1"}}]}, {"remote": [{"type": "openstack_project", "any_one_of": ["B"]}], "local": [{"group": {"domain": {"name": "Default"}, |
| | "name": "federated_group_2"}}]}] |
+-------+--------------------------------------------------------------------------------------------------------------------------------------------------+
To edit a mapping, use an auxiliary file. Save the JSON mapping shown above
and make the necessary modifications. Use the ``set`` command to trigger
an update. For example:
.. code::
$ openstack mapping show cloud1-mapping -c rules -f value | python -m json.tool > rules.json
$ vi rules.json # <--- make any necessary changes
$ openstack mapping set cloud1-mapping --rules rules.json
Federation protocols
--------------------
To view or change the association between a federation
protocol and a mapping, use the following command:
.. code::
$ openstack federation protocol list --identity-provider keystone-idp
+-------+----------------+
| id | mapping |
+-------+----------------+
| saml2 | cloud1-mapping |
+-------+----------------+
--------------
.. include:: navigation.txt

View File

@ -0,0 +1,93 @@
`Home <index.html>`__ OpenStack-Ansible Installation Guide
Using Identity service to Identity service federation
=====================================================
In Identity service (keystone) to Identity service (keystone)
federation (K2K) the identity provider (IdP) and service provider (SP)
keystone instances exchange information securely to enable a user on
the IdP cloud to access resources of the SP cloud.
.. important::
This section applies only to federation between keystone IdP
and keystone SP. It does not apply to non-keystone IdP.
.. note::
For the Kilo release of OpenStack, K2K is only partially supported.
It is possible to perform a federated login using command line clients and
scripting. However, horizon does not support this functionality.
The K2K authentication flow involves the following steps:
#. You log in to the IdP with your credentials.
#. You send a request to the IdP to generate an assertion for a given
SP. An assertion is a cryptographically signed XML document that identifies
the user to the SP.
#. You submit the assertion to the SP on the configured ``sp_url``
endpoint. The Shibboleth service running on the SP receives the assertion
and verifies it. If it is valid, a session with the client starts and
returns the session ID in a cookie.
#. You now connect to the SP on the configured ``auth_url`` endpoint,
providing the Shibboleth cookie with the session ID. The SP responds with
an unscoped token that you use to access the SP.
#. You connect to the keystone service on the SP with the unscoped
token, and the desired domain and project, and receive a scoped token
and the service catalog.
#. You, now in possession of a token, can make API requests to the
endpoints in the catalog.
Identity service to Identity service federation authentication wrapper
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The steps above involve manually sending API requests.
.. note::
The infrastructure for command-line utilities that perform these steps
for the user does not exist yet.
To obtain access to an SP cloud, OpenStack-Ansible provides a script that wraps the
above steps. The script is called ``federated-login.sh`` and is
used as follows:
.. code::
# ./scripts/federated-login.sh -p project [-d domain] sp_id
* ``project`` is the project in the SP cloud that you want to access.
* ``domain`` is the domain in which the project lives (the default domain is
used if this argument is not given).
* ``sp_id`` is the unique ID of the SP. This is given in the IdP configuration.
The script outputs the results of all the steps in the authentication flow to
the console. At the end, it prints the available endpoints from the catalog
and the scoped token provided by the SP.
Use the endpoints and token with the openstack command line client as follows:
.. code::
# openstack --os-token=<token> --os-url=<service-endpoint> [options]
Or, alternatively:
.. code::
# export OS_TOKEN=<token>
# export OS_URL=<service-endpoint>
# openstack [options]
Ensure you select the appropriate endpoint for your operation.
For example, if you want to work with servers, the ``OS_URL``
argument must be set to the compute endpoint.
.. note::
At this time, the OpenStack client is unable to find endpoints in
the service catalog when using a federated login.
--------------
.. include:: navigation.txt

View File

@ -0,0 +1,50 @@
`Home <index.html>`__ OpenStack-Ansible Installation Guide
Configuring Identity service (keystone) federation (optional)
=============================================================
.. toctree::
configure-federation-wrapper
configure-federation-sp-overview.rst
configure-federation-sp.rst
configure-federation-idp.rst
configure-federation-idp-adfs.rst
configure-federation-mapping.rst
configure-federation-use-case.rst
In keystone federation, the identity provider (IdP) and service
provider (SP) exchange information securely to enable a user on the IdP cloud
to access resources of the SP cloud.
.. note::
For the Kilo release of OpenStack, federation is only partially supported.
It is possible to perform a federated login using command line clients and
scripting, but Dashboard (horizon) does not support this functionality.
The following procedure describes how to set up federation.
#. `Configure Identity Service (keystone) service providers. <configure-federation-sp.html>`_
#. Configure the identity provider:
* `Configure Identity Service (keystone) as an identity provider. <configure-federation-idp.html>`_
* `Configure Active Directory Federation Services (ADFS) 3.0 as an identity provider. <configure-federation-idp-adfs.html>`_
#. Configure the service provider:
* `Configure Identity Service (keystone) as a federated service provider. <configure-federation-sp.html>`_
* `Configure Identity Service (keystone) Domain-Project-Group-Role mappings. <configure-federation-mapping.html>`_
#. `Run the authentication wrapper to use Identity Service to Identity Service federation. <configure-federation-wrapper.html>`_
For examples of how to set up keystone to keystone federation,
see the `Identity Service to Identity Service
federation example use-case. <configure-federation-use-case.html>`_
--------------
.. include:: navigation.txt

View File

@ -0,0 +1,175 @@
`Home <index.html>`_ OpenStack-Ansible Installation Guide
Configuring the Image (glance) service
======================================
In an all-in-one deployment with a single infrastructure node, the Image
(glance) service uses the local file system on the target host to store images.
When deploying production clouds, we recommend backing glance with a
swift backend or some form of shared storage.
Configuring default and additional stores
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
OpenStack-Ansible provides two configurations for controlling where glance stores
files: the default store and additional stores. glance stores images in file-based
storage by default. Two additional stores, ``http`` and ``cinder`` (Block Storage),
are also enabled by default.
You can choose alternative default stores and alternative additional stores.
For example, a deployer that uses Ceph may configure the following Ansible
variables:
.. code-block:: yaml
glance_default_store: rbd
glance_additional_stores:
- swift
- http
- cinder
The configuration above sets glance to use ``rbd`` (Ceph) by
default, while the ``glance_additional_stores`` list enables the ``swift``,
``http``, and ``cinder`` stores in the glance
configuration files.
The following example sets glance to use the ``images`` pool.
This example uses cephx authentication and requires an existing ``glance``
account for the ``images`` pool.
In ``user_variables.yml``:
.. code-block:: yaml
glance_default_store: rbd
ceph_mons:
- 172.29.244.151
- 172.29.244.152
- 172.29.244.153
You can use the following variables if you are not using the defaults:
.. code-block:: yaml
glance_ceph_client: <glance-username>
glance_rbd_store_pool: <glance-pool-name>
glance_rbd_store_chunk_size: <chunk-size>
Storing images in Cloud Files
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The following procedure describes how to modify the
``/etc/openstack_deploy/user_variables.yml`` file to enable Cloud Files
usage.
#. Change the default store to use Object Storage (swift), the
underlying architecture of Cloud Files:
.. code-block:: yaml
glance_default_store: swift
#. Set the appropriate authentication URL and version:
.. code-block:: yaml
glance_swift_store_auth_version: 2
glance_swift_store_auth_address: https://127.0.0.1/v2.0
#. Set the swift account credentials:
.. code-block:: yaml
# Replace these capitalized variables with actual data.
glance_swift_store_user: GLANCE_SWIFT_TENANT:GLANCE_SWIFT_USER
glance_swift_store_key: SWIFT_PASSWORD_OR_KEY
#. Change the ``glance_swift_store_endpoint_type`` from the default
   ``internalURL`` setting to ``publicURL`` if needed.
.. code-block:: yaml
glance_swift_store_endpoint_type: publicURL
#. Define the store name:
.. code-block:: yaml
glance_swift_store_container: STORE_NAME
Replace ``STORE_NAME`` with the container name in swift to be
used for storing images. If the container does not exist, it is
automatically created.
#. Define the store region:
.. code-block:: yaml
glance_swift_store_region: STORE_REGION
Replace ``STORE_REGION`` if needed.
#. (Optional) Set the paste deploy flavor:
.. code-block:: yaml
glance_flavor: GLANCE_FLAVOR
By default, glance uses caching and authenticates with the
Identity (keystone) service. The default maximum size of the image cache is 10GB.
The default glance container size is 12GB. In some
configurations, glance attempts to cache an image
which exceeds the available disk space. If necessary, you can disable
caching. For example, to use Identity without caching, replace
``GLANCE_FLAVOR`` with ``keystone``:
.. code-block:: yaml
glance_flavor: keystone
Or, to disable both authentication and caching, set
``GLANCE_FLAVOR`` to no value:
.. code-block:: yaml
glance_flavor:
This option is set by default to use authentication and cache
management in the ``playbooks/roles/os_glance/defaults/main.yml``
file. To override the default behavior, set ``glance_flavor`` to a
different value in ``/etc/openstack_deploy/user_variables.yml``.
The possible values for ``GLANCE_FLAVOR`` are:
- (Nothing)
- ``caching``
- ``cachemanagement``
- ``keystone``
- ``keystone+caching``
- ``keystone+cachemanagement`` (default)
- ``trusted-auth``
- ``trusted-auth+cachemanagement``
Special considerations
~~~~~~~~~~~~~~~~~~~~~~
If the swift password or key contains a dollar sign (``$``), it must
be escaped with an additional dollar sign (``$$``). For example, a password of
``super$ecure`` would need to be entered as ``super$$ecure``. This is necessary
due to the way `oslo.config formats strings`_.
.. _oslo.config formats strings: https://bugs.launchpad.net/oslo-incubator/+bug/1259729
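
For example, a sketch of how the escaped key from the paragraph above would
appear in ``/etc/openstack_deploy/user_variables.yml``:

.. code-block:: yaml

   # The real key is super$ecure; the dollar sign is doubled for oslo.config.
   glance_swift_store_key: super$$ecure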
--------------
.. include:: navigation.txt

View File

@ -0,0 +1,151 @@
`Home <index.html>`_ OpenStack-Ansible Installation Guide
Configuring HAProxy (optional)
==============================
HAProxy provides load balancing services and SSL termination when hardware
load balancers are not available for high availability architectures deployed
by OpenStack-Ansible. The default HAProxy configuration provides highly
available load balancing services via keepalived if there is more than one
host in the ``haproxy_hosts`` group.
.. important::
Ensure you review the services exposed by HAProxy and limit access
to these services to trusted users and networks only. For more details,
refer to the :ref:`least-access-openstack-services` section.
.. note::
For a successful installation, you require a load balancer. You may
prefer to make use of hardware load balancers instead of HAProxy. If hardware
load balancers are in use, then implement the load balancing configuration for
services prior to executing the deployment.
To deploy HAProxy within your OpenStack-Ansible environment, define target
hosts to run HAProxy:
.. code-block:: yaml
haproxy_hosts:
infra1:
ip: 172.29.236.101
infra2:
ip: 172.29.236.102
infra3:
ip: 172.29.236.103
There is an example configuration file already provided in
``/etc/openstack_deploy/conf.d/haproxy.yml.example``. Rename the file to
``haproxy.yml`` and configure it with the correct target hosts to use HAProxy
in an OpenStack-Ansible deployment.
Making HAProxy highly-available
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
If multiple hosts are found in the inventory, HAProxy is deployed in a
highly available manner by also installing keepalived.
To skip the deployment of keepalived alongside HAProxy when installing
HAProxy on multiple hosts, edit the
``/etc/openstack_deploy/user_variables.yml`` file and set the following:
.. code-block:: yaml
haproxy_use_keepalived: False
To make keepalived work, edit at least the following variables
in ``user_variables.yml``:
.. code-block:: yaml
haproxy_keepalived_external_vip_cidr: 192.168.0.4/25
haproxy_keepalived_internal_vip_cidr: 172.29.236.54/16
haproxy_keepalived_external_interface: br-flat
haproxy_keepalived_internal_interface: br-mgmt
- ``haproxy_keepalived_internal_interface`` and
  ``haproxy_keepalived_external_interface`` represent the interfaces on the
  deployed node where the keepalived nodes bind the internal and external
  VIPs. By default, ``br-mgmt`` is used.
- ``haproxy_keepalived_internal_vip_cidr`` and
  ``haproxy_keepalived_external_vip_cidr`` represent the internal and
  external (respectively) VIPs (with their prefix length) on the
  interfaces listed above.
- Set additional variables to adapt keepalived in your deployment.
Refer to the ``user_variables.yml`` for more descriptions.
To always deploy (or upgrade to) the latest stable version of keepalived,
edit the ``/etc/openstack_deploy/user_variables.yml`` file:
.. code-block:: yaml
keepalived_use_latest_stable: True
The HAProxy playbook reads the ``vars/configs/keepalived_haproxy.yml``
variable file and provides content to the keepalived role for
keepalived master and backup nodes.
Keepalived pings a public IP address to check its status. The default
address is ``193.0.14.129``. To change this default,
set the ``keepalived_ping_address`` variable in the
``user_variables.yml`` file.
.. note::
The keepalived test works with IPv4 addresses only.
You can define additional variables to adapt keepalived to your
deployment. Refer to the ``user_variables.yml`` file for
more information. Optionally, you can use your own variable file.
For example:
.. code-block:: yaml
haproxy_keepalived_vars_file: /path/to/myvariablefile.yml
Configuring keepalived ping checks
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
OpenStack-Ansible configures keepalived with a check script that pings an
external resource and uses that ping to determine if a node has lost network
connectivity. If the pings fail, keepalived fails over to another node and
HAProxy serves requests there.
The destination address, ping count and ping interval are configurable via
Ansible variables in ``/etc/openstack_deploy/user_variables.yml``:
.. code-block:: yaml
keepalived_ping_address: # IP address to ping
keepalived_ping_count: # ICMP packets to send (per interval)
keepalived_ping_interval: # How often ICMP packets are sent
By default, OpenStack-Ansible configures keepalived to ping one of the root
DNS servers operated by RIPE. You can change this IP address to a different
external address or another address on your internal network.
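
For example, a minimal sketch that pings an internal gateway instead (all
three values are illustrative):

.. code-block:: yaml

   keepalived_ping_address: 192.0.2.1
   keepalived_ping_count: 2
   keepalived_ping_interval: 5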
Securing HAProxy communication with SSL certificates
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The OpenStack-Ansible project provides the ability to secure HAProxy
communications with self-signed or user-provided SSL certificates. By default,
self-signed certificates are used with HAProxy. However, you can
provide your own certificates by using the following Ansible variables:
.. code-block:: yaml
haproxy_user_ssl_cert: # Path to certificate
haproxy_user_ssl_key: # Path to private key
haproxy_user_ssl_ca_cert: # Path to CA certificate
Refer to `Securing services with SSL certificates`_ for more information on
these configuration options and how you can provide your own
certificates and keys to use with HAProxy.
.. _Securing services with SSL certificates: configure-sslcertificates.html
--------------
.. include:: navigation.txt

View File

@ -0,0 +1,34 @@
`Home <index.html>`_ OpenStack-Ansible Installation Guide
Configuring the Dashboard (horizon) (optional)
==============================================
Customize your horizon deployment in ``/etc/openstack_deploy/user_variables.yml``.
Securing horizon communication with SSL certificates
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The OpenStack-Ansible project provides the ability to secure Dashboard (horizon)
communications with self-signed or user-provided SSL certificates.
Refer to `Securing services with SSL certificates`_ for available configuration
options.
.. _Securing services with SSL certificates: configure-sslcertificates.html
Configuring a horizon customization module
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
OpenStack-Ansible supports deployment of a horizon `customization module`_.
After building your customization module, configure the ``horizon_customization_module`` variable
with a path to your module.
.. code-block:: yaml
horizon_customization_module: /path/to/customization_module.py
.. _customization module: http://docs.openstack.org/developer/horizon/topics/customizing.html#horizon-customization-module-overrides
--------------
.. include:: navigation.txt

View File

@ -0,0 +1,132 @@
`Home <index.html>`_ OpenStack-Ansible Installation Guide
Configuring target hosts
========================
Modify the ``/etc/openstack_deploy/openstack_user_config.yml`` file to
configure the target hosts.
Do not assign the same IP address to different target hostnames.
Unexpected results may occur. Each IP address and hostname must be a
matching pair. To use the same host in multiple roles, for example
infrastructure and networking, specify the same hostname and IP in each
section.
Use short hostnames rather than fully-qualified domain names (FQDN) to
prevent length limitation issues with LXC and SSH. For example, a
suitable short hostname for a compute host might be:
``123456-Compute001``.
Unless otherwise stated, replace ``*_IP_ADDRESS`` with the IP address of
the ``br-mgmt`` container management bridge on each target host.
#. Configure a list containing at least three infrastructure target
hosts in the ``shared-infra_hosts`` section:
.. code-block:: yaml
shared-infra_hosts:
infra01:
ip: INFRA01_IP_ADDRESS
infra02:
ip: INFRA02_IP_ADDRESS
infra03:
ip: INFRA03_IP_ADDRESS
infra04: ...
#. Configure a list containing at least two infrastructure target
hosts in the ``os-infra_hosts`` section (you can reuse
previous hosts as long as their names and IPs are consistent):
.. code-block:: yaml
os-infra_hosts:
infra01:
ip: INFRA01_IP_ADDRESS
infra02:
ip: INFRA02_IP_ADDRESS
infra03:
ip: INFRA03_IP_ADDRESS
infra04: ...
#. Configure a list of at least one keystone target host in the
``identity_hosts`` section:
.. code-block:: yaml
identity_hosts:
infra1:
ip: IDENTITY01_IP_ADDRESS
infra2: ...
#. Configure a list containing at least one network target host in the
``network_hosts`` section:
.. code-block:: yaml
network_hosts:
network01:
ip: NETWORK01_IP_ADDRESS
network02: ...
Providing more than one network host in the ``network_hosts`` block will
enable `L3HA support using VRRP`_ in the ``neutron-agent`` containers.
.. _L3HA support using VRRP: http://docs.openstack.org/liberty/networking-guide/scenario_l3ha_lb.html
#. Configure a list containing at least one compute target host in the
``compute_hosts`` section:
.. code-block:: yaml
compute_hosts:
compute001:
ip: COMPUTE001_IP_ADDRESS
compute002: ...
#. Configure a list containing at least one logging target host in the
``log_hosts`` section:
.. code-block:: yaml
log_hosts:
logging01:
ip: LOGGER1_IP_ADDRESS
logging02: ...
#. Configure a list containing at least one repository target host in the
``repo-infra_hosts`` section:
.. code-block:: yaml
repo-infra_hosts:
repo01:
ip: REPO01_IP_ADDRESS
repo02:
ip: REPO02_IP_ADDRESS
repo03:
ip: REPO03_IP_ADDRESS
repo04: ...
The repository typically resides on one or more infrastructure hosts.
#. Configure a list containing at least one optional storage host in the
``storage_hosts`` section:
.. code-block:: yaml
storage_hosts:
storage01:
ip: STORAGE01_IP_ADDRESS
storage02: ...
Each storage host requires additional configuration to define the back end
driver.
The default configuration includes an optional storage host. To
install without storage hosts, comment out the stanza beginning with
the *storage_hosts:* line.
--------------
.. include:: navigation.txt

View File

@ -0,0 +1,18 @@
`Home <index.html>`_ OpenStack-Ansible Installation Guide
Configuring the hypervisor (optional)
=====================================
By default, the KVM hypervisor is used. If you are deploying to a host
that does not support KVM hardware acceleration extensions, select a
suitable hypervisor type such as ``qemu`` or ``lxc``. To change the
hypervisor type, uncomment and edit the following line in the
``/etc/openstack_deploy/user_variables.yml`` file:
.. code-block:: yaml
# nova_virt_type: kvm
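
For example, a minimal sketch that selects QEMU software emulation on hosts
without KVM acceleration:

.. code-block:: yaml

   nova_virt_type: qemu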
--------------
.. include:: navigation.txt

View File

@ -0,0 +1,129 @@
`Home <index.html>`_ OpenStack-Ansible Installation Guide
Initial environment configuration
=================================
OpenStack-Ansible depends on various files that are used to build an inventory
for Ansible. Start by getting those files into the correct places:
#. Copy the contents of the
``/opt/openstack-ansible/etc/openstack_deploy`` directory to the
``/etc/openstack_deploy`` directory.
#. Change to the ``/etc/openstack_deploy`` directory.
#. Copy the ``openstack_user_config.yml.example`` file to
``/etc/openstack_deploy/openstack_user_config.yml``.
You can review the ``openstack_user_config.yml`` file and make changes
to the deployment of your OpenStack environment.
.. note::
The file is heavily commented with details about the various options.
There are various types of physical hardware that are able to use containers
deployed by OpenStack-Ansible. For example, hosts listed in the
`shared-infra_hosts` group run containers for many of the shared services that
your OpenStack environment requires. Some of these services include databases,
memcached, and RabbitMQ. There are several other host types that contain
other types of containers and all of these are listed in
``openstack_user_config.yml``.
For details about how the inventory is generated from the environment
configuration, see :ref:`developer-inventory`.
Affinity
~~~~~~~~
OpenStack-Ansible's dynamic inventory generation has a concept called
`affinity`. This determines how many containers of a similar type are deployed
onto a single physical host.
Using `shared-infra_hosts` as an example, consider this ``openstack_user_config.yml``:
.. code-block:: yaml
shared-infra_hosts:
infra1:
ip: 172.29.236.101
infra2:
ip: 172.29.236.102
infra3:
ip: 172.29.236.103
Three hosts are assigned to the `shared-infra_hosts` group, and
OpenStack-Ansible ensures that each host runs a single database container,
a single memcached container, and a single RabbitMQ container. Each host has
an affinity of 1 by default, which means each host runs one of each
container type.
You can skip the deployment of RabbitMQ altogether. This is
helpful when deploying a standalone swift environment. If you need
this configuration, your ``openstack_user_config.yml`` would look like this:
.. code-block:: yaml
shared-infra_hosts:
infra1:
affinity:
rabbit_mq_container: 0
ip: 172.29.236.101
infra2:
affinity:
rabbit_mq_container: 0
ip: 172.29.236.102
infra3:
affinity:
rabbit_mq_container: 0
ip: 172.29.236.103
The configuration above deploys a memcached container and a database
container on each host, without the RabbitMQ containers.
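
Affinity values above 1 work the same way. The following is a sketch, assuming
you instead want two RabbitMQ containers on ``infra1`` while leaving the other
hosts at the default affinity of 1:

.. code-block:: yaml

   shared-infra_hosts:
     infra1:
       affinity:
         rabbit_mq_container: 2
       ip: 172.29.236.101
     infra2:
       ip: 172.29.236.102
     infra3:
       ip: 172.29.236.103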
.. _security_hardening:
Security hardening
~~~~~~~~~~~~~~~~~~
OpenStack-Ansible automatically applies host security hardening configurations
using the `openstack-ansible-security`_ role. The role uses a version of the
`Security Technical Implementation Guide (STIG)`_ that has been adapted for
Ubuntu 14.04 and OpenStack.
The role is applicable to physical hosts within an OpenStack-Ansible deployment
that are operating as any type of node, infrastructure or compute. By
default, the role is enabled. You can disable it by changing a variable
within ``user_variables.yml``:
.. code-block:: yaml
apply_security_hardening: false
When the variable is set to ``true``, the ``setup-hosts.yml`` playbook applies
the role during deployments.
You can apply security configurations to an existing environment or audit
an environment using a playbook supplied with OpenStack-Ansible:
.. code-block:: bash
# Perform a quick audit using Ansible's check mode
openstack-ansible --check security-hardening.yml
# Apply security hardening configurations
openstack-ansible security-hardening.yml
For more details on the security configurations that will be applied, refer to
the `openstack-ansible-security`_ documentation. Review the `Configuration`_
section of the openstack-ansible-security documentation to find out how to
fine-tune certain security configurations.
.. _openstack-ansible-security: http://docs.openstack.org/developer/openstack-ansible-security/
.. _Security Technical Implementation Guide (STIG): https://en.wikipedia.org/wiki/Security_Technical_Implementation_Guide
.. _Configuration: http://docs.openstack.org/developer/openstack-ansible-security/configuration.html
--------------
.. include:: navigation.txt

View File

@ -0,0 +1,220 @@
`Home <index.html>`_ OpenStack-Ansible Installation Guide
Configuring the Bare Metal (ironic) service (optional)
======================================================
.. note::
This feature is experimental at this time and has not yet been fully
tested in production. These implementation instructions assume that ironic is being deployed
as the sole hypervisor for the region.
Ironic is an OpenStack project which provisions bare metal (as opposed to virtual)
machines by leveraging common technologies such as PXE boot and IPMI to cover a wide
range of hardware, while supporting pluggable drivers to allow vendor-specific
functionality to be added.
OpenStack's ironic project makes physical servers as easy to provision as
virtual machines in a cloud.
OpenStack-Ansible deployment
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#. Modify the environment files and force ``nova-compute`` to run from
within a container:
.. code-block:: bash
sed -i '/is_metal.*/d' /etc/openstack_deploy/env.d/nova.yml
Set up a neutron network for use by ironic
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In the general case, neutron networking can be a simple flat network. In a
more complex case, however, it can be whatever your environment needs. Ensure
you adjust the deployment accordingly. The following is an example:
.. code-block:: bash
neutron net-create cleaning-net --shared \
--provider:network_type flat \
--provider:physical_network ironic-net
neutron subnet-create cleaning-net 172.19.0.0/22 --name ironic-subnet \
--ip-version=4 \
--allocation-pool start=172.19.1.100,end=172.19.1.200 \
--enable-dhcp \
--dns-nameservers list=true 8.8.4.4 8.8.8.8
Building ironic images
~~~~~~~~~~~~~~~~~~~~~~
Images using the ``diskimage-builder`` must be built outside of a container.
For this process, use one of the physical hosts within the environment.
#. Install the necessary packages:
.. code-block:: bash
apt-get install -y qemu uuid-runtime curl
#. Install the ``diskimage-builder`` package:
.. code-block:: bash
pip install diskimage-builder --isolated
.. important::
Only use the ``--isolated`` flag if you are building on a node
deployed by OpenStack-Ansible; otherwise, pip cannot
resolve the external package.
#. Optional: Force the ubuntu ``image-create`` process to use a modern kernel:
.. code-block:: bash
echo 'linux-image-generic-lts-xenial:' > \
/usr/local/share/diskimage-builder/elements/ubuntu/package-installs.yaml
#. Create Ubuntu ``initramfs``:
.. code-block:: bash
disk-image-create ironic-agent ubuntu -o ${IMAGE_NAME}
#. Upload the created deploy images into the Image (glance) Service:
.. code-block:: bash
# Upload the deploy image kernel
glance image-create --name ${IMAGE_NAME}.kernel --visibility public \
--disk-format aki --container-format aki < ${IMAGE_NAME}.kernel
# Upload the deploy image initramfs
glance image-create --name ${IMAGE_NAME}.initramfs --visibility public \
--disk-format ari --container-format ari < ${IMAGE_NAME}.initramfs
#. Create Ubuntu user image:
.. code-block:: bash
disk-image-create ubuntu baremetal localboot local-config dhcp-all-interfaces grub2 -o ${IMAGE_NAME}
#. Upload the created user images into the Image (glance) Service:
.. code-block:: bash
# Upload the user image vmlinuz and store uuid
VMLINUZ_UUID="$(glance image-create --name ${IMAGE_NAME}.vmlinuz --visibility public --disk-format aki --container-format aki < ${IMAGE_NAME}.vmlinuz | awk '/\| id/ {print $4}')"
# Upload the user image initrd and store uuid
INITRD_UUID="$(glance image-create --name ${IMAGE_NAME}.initrd --visibility public --disk-format ari --container-format ari < ${IMAGE_NAME}.initrd | awk '/\| id/ {print $4}')"
# Create image
glance image-create --name ${IMAGE_NAME} --visibility public --disk-format qcow2 --container-format bare --property kernel_id=${VMLINUZ_UUID} --property ramdisk_id=${INITRD_UUID} < ${IMAGE_NAME}.qcow2
Creating an ironic flavor
~~~~~~~~~~~~~~~~~~~~~~~~~
#. Create a new flavor called ``my-baremetal-flavor``.
.. note::
The following example sets the CPU architecture for the newly created
flavor to be ``x86_64``.
.. code-block:: bash
nova flavor-create ${FLAVOR_NAME} ${FLAVOR_ID} ${FLAVOR_RAM} ${FLAVOR_DISK} ${FLAVOR_CPU}
nova flavor-key ${FLAVOR_NAME} set cpu_arch=x86_64
nova flavor-key ${FLAVOR_NAME} set capabilities:boot_option="local"
.. note::
Ensure the flavor and nodes match when enrolling into ironic.
See the documentation on flavors for more information:
http://docs.openstack.org/openstack-ops/content/flavors.html
After the ironic node is successfully deployed, subsequent boots start the
instance from your local disk as first preference. This speeds up the deployed
node's boot time. Alternatively, if this is not set, the ironic node PXE boots
first, which allows for operator-initiated image updates and other operations.
.. note::
Building an environment to support this use case, and the operational
reasoning behind it, are not covered here.
Enroll ironic nodes
-------------------
#. From the utility container, enroll a new baremetal node by executing the following:
.. code-block:: bash
# Source credentials
. ~/openrc
# Create the node
NODE_HOSTNAME="myfirstnodename"
IPMI_ADDRESS="10.1.2.3"
IPMI_USER="my-ipmi-user"
IPMI_PASSWORD="my-ipmi-password"
KERNEL_IMAGE=$(glance image-list | awk "/${IMAGE_NAME}.kernel/ {print \$2}")
INITRAMFS_IMAGE=$(glance image-list | awk "/${IMAGE_NAME}.initramfs/ {print \$2}")
ironic node-create \
-d agent_ipmitool \
-i ipmi_address="${IPMI_ADDRESS}" \
-i ipmi_username="${IPMI_USER}" \
-i ipmi_password="${IPMI_PASSWORD}" \
-i deploy_ramdisk="${INITRAMFS_IMAGE}" \
-i deploy_kernel="${KERNEL_IMAGE}" \
-n ${NODE_HOSTNAME}
# Create a port for the node
NODE_MACADDRESS="aa:bb:cc:dd:ee:ff"
ironic port-create \
-n $(ironic node-list | awk "/${NODE_HOSTNAME}/ {print \$2}") \
-a ${NODE_MACADDRESS}
# Associate an image to the node
ROOT_DISK_SIZE_GB=40
ironic node-update $(ironic node-list | awk "/${NODE_HOSTNAME}/ {print \$2}") add \
driver_info/deploy_kernel=$KERNEL_IMAGE \
driver_info/deploy_ramdisk=$INITRAMFS_IMAGE \
instance_info/deploy_kernel=$KERNEL_IMAGE \
instance_info/deploy_ramdisk=$INITRAMFS_IMAGE \
instance_info/root_gb=${ROOT_DISK_SIZE_GB}
# Add node properties
# The property values used here should match the hardware used
ironic node-update $(ironic node-list | awk "/${NODE_HOSTNAME}/ {print \$2}") add \
properties/cpus=48 \
properties/memory_mb=254802 \
properties/local_gb=80 \
properties/size=3600 \
properties/cpu_arch=x86_64 \
properties/capabilities=memory_mb:254802,local_gb:80,cpu_arch:x86_64,cpus:48,boot_option:local
Deploy a baremetal node kicked with ironic
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. important::
You will not have access to your instance unless a key pair is set in
nova before your ironic deployment. If you do not have an SSH key readily
available, set one up with ``ssh-keygen``.
.. code-block:: bash
nova keypair-add --pub-key ~/.ssh/id_rsa.pub admin
Now boot a node:
.. code-block:: bash
nova boot --flavor ${FLAVOR_NAME} --image ${IMAGE_NAME} --key-name admin ${NODE_NAME}

View File

@ -0,0 +1,121 @@
`Home <index.html>`_ OpenStack-Ansible Installation Guide
Configuring the Identity service (keystone) (optional)
======================================================
Customize your keystone deployment in ``/etc/openstack_deploy/user_variables.yml``.
Securing keystone communication with SSL certificates
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The OpenStack-Ansible project provides the ability to secure keystone
communications with self-signed or user-provided SSL certificates. By default,
self-signed certificates are in use. However, you can
provide your own certificates by using the following Ansible variables in
``/etc/openstack_deploy/user_variables.yml``:
.. code-block:: yaml
keystone_user_ssl_cert: # Path to certificate
keystone_user_ssl_key: # Path to private key
keystone_user_ssl_ca_cert: # Path to CA certificate
.. note::
If you are providing certificates, keys, and a CA file for a
CA without a chain of trust (or an invalid or self-generated CA), the variables
``keystone_service_internaluri_insecure`` and
``keystone_service_adminuri_insecure`` should be set to ``True``.
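For example, a minimal sketch of these overrides in
``/etc/openstack_deploy/user_variables.yml`` when the CA is not trusted:

.. code-block:: yaml

   # Skip certificate verification for the keystone endpoints. Only needed
   # when the CA is self-signed or otherwise untrusted.
   keystone_service_internaluri_insecure: True
   keystone_service_adminuri_insecure: True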
Refer to `Securing services with SSL certificates`_ for more information on
these configuration options and how you can provide your own
certificates and keys to use with keystone.
.. _Securing services with SSL certificates: configure-sslcertificates.html
Implementing LDAP (or Active Directory) backends
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
If you already have LDAP or Active Directory (AD) infrastructure in your
deployment, you can use keystone's built-in support for those identity services.
Keystone uses the existing users, groups, and user-group relationships to
handle authentication and access control in an OpenStack deployment.
.. note::
We do not recommend configuring the default domain in keystone to use
LDAP or AD identity backends. Create additional domains
in keystone and configure either LDAP or Active Directory backends for
those domains.
This is critical in situations where the identity backend cannot
be reached due to network issues or other problems. In those situations,
the administrative users in the default domain would still be able to
authenticate to keystone using the default domain which is not backed by
LDAP or AD.
You can add domains with LDAP backends by adding variables in
``/etc/openstack_deploy/user_variables.yml``. For example, this dictionary
adds a new keystone domain called ``Users`` that is backed by an LDAP server:
.. code-block:: yaml
keystone_ldap:
Users:
url: "ldap://10.10.10.10"
user: "root"
password: "secrete"
Adding the YAML block above causes the keystone playbook to create a
``/etc/keystone/domains/keystone.Users.conf`` file within each keystone service
container that configures the LDAP-backed domain called ``Users``.
You can create more complex configurations that use LDAP filtering and
consume LDAP as a read-only resource. The following example shows how to apply
these configurations:
.. code-block:: yaml
keystone_ldap:
MyCorporation:
url: "ldaps://ldap.example.com"
user_tree_dn: "ou=Users,o=MyCorporation"
group_tree_dn: "cn=openstack-users,ou=Users,o=MyCorporation"
user_objectclass: "inetOrgPerson"
user_allow_create: "False"
user_allow_update: "False"
user_allow_delete: "False"
group_allow_create: "False"
group_allow_update: "False"
group_allow_delete: "False"
user_id_attribute: "cn"
user_name_attribute: "uid"
user_filter: "(groupMembership=cn=openstack-users,ou=Users,o=MyCorporation)"
In the ``MyCorporation`` example above, keystone uses the LDAP server as a
read-only resource. The configuration also ensures that keystone filters the
list of possible users to the ones that exist in the
``cn=openstack-users,ou=Users,o=MyCorporation`` group.
Horizon offers multi-domain support that can be enabled with an Ansible
variable during deployment:
.. code-block:: yaml
horizon_keystone_multidomain_support: True
Enabling multi-domain support in horizon adds the ``Domain`` input field on
the horizon login page and it adds other domain-specific features in the
keystone section.
More details regarding valid configuration for the LDAP Identity backend can
be found in the `Keystone Developer Documentation`_ and the
`OpenStack Administrator Guide`_.
.. _Keystone Developer Documentation: http://docs.openstack.org/developer/keystone/configuration.html#configuring-the-ldap-identity-provider
.. _OpenStack Administrator Guide: http://docs.openstack.org/admin-guide/keystone_integrate_identity_backend_ldap.html
--------------
.. include:: navigation.txt

View File

@ -0,0 +1,188 @@
`Home <index.html>`_ OpenStack-Ansible Installation Guide
Configuring the Networking service (neutron) (optional)
=======================================================
The OpenStack Networking service (neutron) includes the following services:
Firewall as a Service (FWaaS)
Provides a software-based firewall that filters traffic from the router.
Load Balancer as a Service (LBaaS)
Provides load balancers that direct traffic to OpenStack instances or other
servers outside the OpenStack deployment.
VPN as a Service (VPNaaS)
Provides a method for extending a private network across a public network.
Firewall service (optional)
~~~~~~~~~~~~~~~~~~~~~~~~~~~
The following procedure describes how to modify the
``/etc/openstack_deploy/user_variables.yml`` file to enable FWaaS.
#. Override the default list of neutron plugins to include
``firewall``:
.. code-block:: yaml
neutron_plugin_base:
- firewall
- ...
#. Verify that the complete ``neutron_plugin_base`` looks as follows:
.. code-block:: yaml
neutron_plugin_base:
- router
- firewall
- lbaas
- vpnaas
- metering
- qos
#. Execute the neutron install playbook in order to update the configuration:
.. code-block:: shell-session
# cd /opt/openstack-ansible/playbooks
# openstack-ansible os-neutron-install.yml
#. Execute the horizon install playbook to show the FWaaS panels:
.. code-block:: shell-session
# cd /opt/openstack-ansible/playbooks
# openstack-ansible os-horizon-install.yml
The FWaaS default configuration options may be changed through the
`conf override`_ mechanism using the ``neutron_neutron_conf_overrides``
dict.
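As an illustrative sketch only (the ``fwaas`` option shown is an assumption
about the FWaaS configuration and not an OpenStack-Ansible default; verify
the option names against the neutron FWaaS documentation), such an override
might look like:

.. code-block:: yaml

   # Illustrative example: enable the FWaaS extension through neutron.conf
   neutron_neutron_conf_overrides:
     fwaas:
       enabled: True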
Load balancing service (optional)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The `neutron-lbaas`_ plugin for neutron provides a software load balancer
service and can direct traffic to multiple servers. The service runs as an
agent and it manages `HAProxy`_ configuration files and daemons.
The Newton release contains only the LBaaS v2 API. For more details about
transitioning from LBaaS v1 to v2, review the :ref:`lbaas-special-notes`
section below.
Deployers can make changes to the LBaaS default configuration options via the
``neutron_lbaas_agent_ini_overrides`` dictionary. Review the documentation on
the `conf override`_ mechanism for more details.
.. _neutron-lbaas: https://wiki.openstack.org/wiki/Neutron/LBaaS
.. _HAProxy: http://www.haproxy.org/
Deploying LBaaS v2
------------------
#. Add the LBaaS v2 plugin to the ``neutron_plugin_base`` variable
in ``/etc/openstack_deploy/user_variables.yml``:
.. code-block:: yaml
neutron_plugin_base:
- router
- metering
- neutron_lbaas.services.loadbalancer.plugin.LoadBalancerPluginv2
Ensure that ``neutron_plugin_base`` includes all of the plugins that you
want to deploy with neutron in addition to the LBaaS plugin.
#. Run the neutron and horizon playbooks to deploy the LBaaS v2 agent and
enable the LBaaS v2 panels in horizon:
.. code-block:: console
# cd /opt/openstack-ansible/playbooks
# openstack-ansible os-neutron-install.yml
# openstack-ansible os-horizon-install.yml
.. _lbaas-special-notes:
Special notes about LBaaS
-------------------------
**LBaaS v1 was deprecated in the Mitaka release and is not available in the
Newton release.**
LBaaS v1 and v2 agents are unable to run at the same time. If you switch from
LBaaS v1 to v2, the v2 agent is the only agent running. The LBaaS v1 agent
stops along with any load balancers provisioned under the v1 agent.
Load balancers are not migrated between LBaaS v1 and v2 automatically. Each
implementation has different code paths and database tables. You need
to manually delete load balancers, pools, and members before switching LBaaS
versions. Recreate these objects afterwards.
Virtual private network service (optional)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The following procedure describes how to modify the
``/etc/openstack_deploy/user_variables.yml`` file to enable VPNaaS.
#. Override the default list of neutron plugins to include ``vpnaas``. For
example, if the current ``neutron_plugin_base`` is:
.. code-block:: yaml
neutron_plugin_base:
- router
- metering
#. Add ``vpnaas`` so that ``neutron_plugin_base`` looks as follows:
.. code-block:: yaml
neutron_plugin_base:
- router
- metering
- vpnaas
#. Override the default list of host-specific kernel modules
to include the modules necessary to run IPsec:
.. code-block:: yaml
openstack_host_specific_kernel_modules:
- { name: "ebtables", pattern: "CONFIG_BRIDGE_NF_EBTABLES=", group: "network_hosts" }
- { name: "af_key", pattern: "CONFIG_NET_KEY=", group: "network_hosts" }
- { name: "ah4", pattern: "CONFIG_INET_AH=", group: "network_hosts" }
- { name: "ipcomp", pattern: "CONFIG_INET_IPCOMP=", group: "network_hosts" }
#. Execute the OpenStack hosts setup playbook to load the kernel modules at
boot and at runtime on the network hosts:
.. code-block:: shell-session
# openstack-ansible openstack-hosts-setup.yml --limit network_hosts \
--tags "openstack_hosts-config"
#. Execute the neutron install playbook in order to update the configuration:
.. code-block:: shell-session
# cd /opt/openstack-ansible/playbooks
# openstack-ansible os-neutron-install.yml
#. Execute the horizon install playbook to show the VPNaaS panels:
.. code-block:: shell-session
# cd /opt/openstack-ansible/playbooks
# openstack-ansible os-horizon-install.yml
The VPNaaS default configuration options are changed through the
`conf override`_ mechanism using the ``neutron_neutron_conf_overrides``
dict.
.. _conf override: http://docs.openstack.org/developer/openstack-ansible/install-guide/configure-openstack.html
--------------
.. include:: navigation.txt

View File

@ -0,0 +1,283 @@
`Home <index.html>`_ OpenStack-Ansible Installation Guide
.. _network_configuration:
Configuring target host networking
==================================
Edit the ``/etc/openstack_deploy/openstack_user_config.yml`` file to
configure target host networking.
#. Configure the IP address ranges associated with each network in the
``cidr_networks`` section:
.. code-block:: yaml
cidr_networks:
# Management (same range as br-mgmt on the target hosts)
container: CONTAINER_MGMT_CIDR
# Tunnel endpoints for VXLAN tenant networks
# (same range as br-vxlan on the target hosts)
tunnel: TUNNEL_CIDR
#Storage (same range as br-storage on the target hosts)
storage: STORAGE_CIDR
Replace ``*_CIDR`` with the appropriate IP address range in CIDR
notation. For example, 203.0.113.0/24.
Use the same IP address ranges as the underlying physical network
interfaces or bridges in `the section called "Configuring
the network" <targethosts-network.html>`_. For example, if the
container network uses 203.0.113.0/24, the ``CONTAINER_MGMT_CIDR``
also uses 203.0.113.0/24.
The default configuration includes the optional storage and service
networks. To remove one or both of them, comment out the appropriate
network name.
#. Configure the existing IP addresses in the ``used_ips`` section:
.. code-block:: yaml
used_ips:
- EXISTING_IP_ADDRESSES
Replace ``EXISTING_IP_ADDRESSES`` with a list of existing IP
addresses in the ranges defined in the previous step. This list
should include all IP addresses manually configured on target hosts,
internal load balancers, service network bridge, deployment hosts and
any other devices to avoid conflicts during the automatic IP address
generation process.
Add individual IP addresses on separate lines. For example, to
prevent use of 203.0.113.101 and 203.0.113.201:
.. code-block:: yaml
used_ips:
- 203.0.113.101
- 203.0.113.201
Add a range of IP addresses using a comma. For example, to prevent
use of 203.0.113.101-201:
.. code-block:: yaml
used_ips:
- "203.0.113.101,203.0.113.201"
#. Configure load balancing in the ``global_overrides`` section:
.. code-block:: yaml
global_overrides:
# Internal load balancer VIP address
internal_lb_vip_address: INTERNAL_LB_VIP_ADDRESS
# External (DMZ) load balancer VIP address
external_lb_vip_address: EXTERNAL_LB_VIP_ADDRESS
# Container network bridge device
management_bridge: "MGMT_BRIDGE"
# Tunnel network bridge device
tunnel_bridge: "TUNNEL_BRIDGE"
Replace ``INTERNAL_LB_VIP_ADDRESS`` with the internal IP address of
the load balancer. Infrastructure and OpenStack services use this IP
address for internal communication.
Replace ``EXTERNAL_LB_VIP_ADDRESS`` with the external, public, or
DMZ IP address of the load balancer. Users primarily use this IP
address for external API and web interfaces access.
Replace ``MGMT_BRIDGE`` with the container bridge device name,
typically ``br-mgmt``.
Replace ``TUNNEL_BRIDGE`` with the tunnel/overlay bridge device
name, typically ``br-vxlan``.
#. Configure the management network in the ``provider_networks`` subsection:
.. code-block:: yaml
provider_networks:
- network:
group_binds:
- all_containers
- hosts
type: "raw"
container_bridge: "br-mgmt"
container_interface: "eth1"
container_type: "veth"
ip_from_q: "container"
is_container_address: true
is_ssh_address: true
#. Configure optional networks in the ``provider_networks`` subsection. For
example, a storage network:
.. code-block:: yaml
provider_networks:
- network:
group_binds:
- glance_api
- cinder_api
- cinder_volume
- nova_compute
type: "raw"
container_bridge: "br-storage"
container_type: "veth"
container_interface: "eth2"
ip_from_q: "storage"
The default configuration includes the optional storage and service
networks. To remove one or both of them, comment out the entire
associated stanza beginning with the ``- network:`` line.
#. Configure OpenStack Networking VXLAN tunnel/overlay networks in the
``provider_networks`` subsection:
.. code-block:: yaml
provider_networks:
- network:
group_binds:
- neutron_linuxbridge_agent
container_bridge: "br-vxlan"
container_type: "veth"
container_interface: "eth10"
ip_from_q: "tunnel"
type: "vxlan"
range: "TUNNEL_ID_RANGE"
net_name: "vxlan"
Replace ``TUNNEL_ID_RANGE`` with the tunnel ID range. For example,
1:1000.
#. Configure OpenStack Networking flat (untagged) and VLAN (tagged) networks
in the ``provider_networks`` subsection:
.. code-block:: yaml
provider_networks:
- network:
group_binds:
- neutron_linuxbridge_agent
container_bridge: "br-vlan"
container_type: "veth"
container_interface: "eth12"
host_bind_override: "PHYSICAL_NETWORK_INTERFACE"
type: "flat"
net_name: "flat"
- network:
group_binds:
- neutron_linuxbridge_agent
container_bridge: "br-vlan"
container_type: "veth"
container_interface: "eth11"
type: "vlan"
range: VLAN_ID_RANGE
net_name: "vlan"
Replace ``VLAN_ID_RANGE`` with the VLAN ID range for each VLAN network.
For example, 1:1000. More than one range of VLANs is supported on a
particular network, for example, 1:1000,2001:3000. Create a similar stanza
for each additional network.
Replace ``PHYSICAL_NETWORK_INTERFACE`` with the network interface used for
flat networking. Ensure this is a physical interface on the same L2 network
being used with the ``br-vlan`` devices. If no additional network interface is
available, a veth pair plugged into the ``br-vlan`` bridge can provide the
necessary interface.
The following is an example of creating a ``veth-pair`` within an existing bridge:
.. code-block:: text
# Create veth pair, do not abort if already exists
pre-up ip link add br-vlan-veth type veth peer name PHYSICAL_NETWORK_INTERFACE || true
# Set both ends UP
pre-up ip link set br-vlan-veth up
pre-up ip link set PHYSICAL_NETWORK_INTERFACE up
# Delete veth pair on DOWN
post-down ip link del br-vlan-veth || true
bridge_ports br-vlan-veth
Adding static routes to network interfaces
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Optionally, you can add one or more static routes to interfaces within
containers. Each route requires a destination network in CIDR notation
and a gateway. For example:
.. code-block:: yaml
provider_networks:
- network:
group_binds:
- glance_api
- cinder_api
- cinder_volume
- nova_compute
type: "raw"
container_bridge: "br-storage"
container_interface: "eth2"
container_type: "veth"
ip_from_q: "storage"
static_routes:
- cidr: 10.176.0.0/12
gateway: 172.29.248.1
This example adds the following content to the
``/etc/network/interfaces.d/eth2.cfg`` file in the appropriate
containers:
.. code-block:: shell-session
post-up ip route add 10.176.0.0/12 via 172.29.248.1 || true
The ``cidr`` and ``gateway`` values must *both* be specified, or the
inventory script will raise an error. Accuracy of the network information
is not checked within the inventory script, just that the keys and values
are present.
Setting an MTU on a network interface
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Larger MTUs can be useful on certain networks, especially storage networks.
Add a ``container_mtu`` attribute within the ``provider_networks`` dictionary
to set a custom MTU on the container network interfaces that attach to a
particular network:
.. code-block:: yaml
provider_networks:
- network:
group_binds:
- glance_api
- cinder_api
- cinder_volume
- nova_compute
type: "raw"
container_bridge: "br-storage"
container_interface: "eth2"
container_type: "veth"
container_mtu: "9000"
ip_from_q: "storage"
static_routes:
- cidr: 10.176.0.0/12
gateway: 172.29.248.1
The example above enables `jumbo frames`_ by setting the MTU on the storage
network to 9000.
.. note:: If you are editing the MTU for your networks, you may also want
to adapt your neutron MTU settings (depending on your needs). Please
refer to the neutron documentation (`Neutron MTU considerations`_).
.. _jumbo frames: https://en.wikipedia.org/wiki/Jumbo_frame
.. _neutron MTU considerations: http://docs.openstack.org/mitaka/networking-guide/adv-config-mtu.html
--------------
.. include:: navigation.txt

View File

@ -0,0 +1,171 @@
`Home <index.html>`_ OpenStack-Ansible Installation Guide
Configuring the Compute (nova) service (optional)
=================================================
The Compute service (nova) handles the creation of virtual machines within an
OpenStack environment. Many of the default options used by OpenStack-Ansible
are found within ``defaults/main.yml`` within the nova role.
Availability zones
~~~~~~~~~~~~~~~~~~
Deployers with multiple availability zones can set the
``nova_default_schedule_zone`` Ansible variable to specify an availability zone
for new requests. This is useful in environments with different types
of hypervisors, where builds are sent to certain hardware types based on
their resource requirements.
For example, if you have servers running on two racks that do not share a
PDU, these two racks can be grouped into two availability zones.
When one rack loses power, the other one still works. By spreading
your containers across the two racks (availability zones), you
improve your service availability.
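For example, a minimal sketch in ``/etc/openstack_deploy/user_variables.yml``,
assuming an availability zone named ``rack-a`` already exists (the zone name
is illustrative):

.. code-block:: yaml

   # Send new instance builds to the rack-a availability zone by default
   nova_default_schedule_zone: rack-a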
Block device tuning for Ceph (RBD)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Enabling Ceph and defining ``nova_libvirt_images_rbd_pool`` changes two
libvirt configurations by default:
* hw_disk_discard: ``unmap``
* disk_cachemodes: ``network=writeback``
Setting ``hw_disk_discard`` to ``unmap`` in libvirt enables
discard (sometimes called TRIM) support for the underlying block device. This
allows reclaiming of unused blocks on the underlying disks.
Setting ``disk_cachemodes`` to ``network=writeback`` allows data to be written
into a cache on each change, but those changes are flushed to disk at a regular
interval. This can increase write performance on Ceph block devices.
You have the option to customize these settings using two Ansible
variables (defaults shown here):
.. code-block:: yaml
nova_libvirt_hw_disk_discard: 'unmap'
nova_libvirt_disk_cachemodes: 'network=writeback'
You can disable discard by setting ``nova_libvirt_hw_disk_discard`` to
``ignore``. The ``nova_libvirt_disk_cachemodes`` can be set to an empty
string to disable ``network=writeback``.
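For example, a sketch that disables both behaviors using the values described
above:

.. code-block:: yaml

   # Disable discard/TRIM passthrough on Ceph-backed disks
   nova_libvirt_hw_disk_discard: 'ignore'
   # Disable the network=writeback cache mode
   nova_libvirt_disk_cachemodes: ''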
The following minimal example configuration sets nova to use the
``ephemeral-vms`` Ceph pool. It uses cephx authentication and
requires an existing ``cinder`` account for the ``ephemeral-vms`` pool:
.. code-block:: yaml
nova_libvirt_images_rbd_pool: ephemeral-vms
ceph_mons:
- 172.29.244.151
- 172.29.244.152
- 172.29.244.153
If you use a different Ceph client name for the pool, set it as follows:
.. code-block:: yaml
cinder_ceph_client: <ceph-username>
* The `Ceph documentation for OpenStack`_ has additional information about these settings.
* `OpenStack-Ansible and Ceph Working Example`_
.. _Ceph documentation for OpenStack: http://docs.ceph.com/docs/master/rbd/rbd-openstack/
.. _OpenStack-Ansible and Ceph Working Example: https://www.openstackfaq.com/openstack-ansible-ceph/
Config drive
~~~~~~~~~~~~
By default, OpenStack-Ansible does not configure nova to force config drives
to be provisioned with every instance that nova builds. The metadata service
provides configuration information that is used by ``cloud-init`` inside the
instance. Config drives are only necessary when an instance does not have
``cloud-init`` installed or does not have support for handling metadata.
A deployer can set an Ansible variable to force config drives to be deployed
with every virtual machine:
.. code-block:: yaml
nova_force_config_drive: True
Certain formats of config drives can prevent instances from migrating properly
between hypervisors. If you need forced config drives and the ability
to migrate instances, set the config drive format to ``vfat`` using
the ``nova_nova_conf_overrides`` variable:
.. code-block:: yaml
nova_nova_conf_overrides:
DEFAULT:
config_drive_format: vfat
force_config_drive: True
Libvirtd connectivity and authentication
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
By default, OpenStack-Ansible configures the libvirt daemon in the following
way:
* TLS connections are enabled
* TCP plaintext connections are disabled
* Authentication over TCP connections uses SASL
You can customize these settings using the following Ansible variables:
.. code-block:: yaml
# Enable libvirtd's TLS listener
nova_libvirtd_listen_tls: 1
# Disable libvirtd's plaintext TCP listener
nova_libvirtd_listen_tcp: 0
# Use SASL for authentication
nova_libvirtd_auth_tcp: sasl
Multipath
~~~~~~~~~
Nova supports multipath for iSCSI-based storage. Enable multipath support in
nova through a configuration override:
.. code-block:: yaml
nova_nova_conf_overrides:
libvirt:
iscsi_use_multipath: true
Shared storage and synchronized UID/GID
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Specify a custom UID for the nova user and GID for the nova group
to ensure they are identical on each host. This is helpful when using shared
storage on Compute nodes because it allows instances to migrate without
filesystem ownership failures.
By default, Ansible creates the nova user and group without specifying the
UID or GID. To specify custom values for the UID or GID, set the following
Ansible variables:
.. code-block:: yaml
nova_system_user_uid: <specify a UID>
nova_system_group_gid: <specify a GID>
.. warning::
Setting these values after deploying an environment with
OpenStack-Ansible can cause failures, errors, and general instability. Set
them only once, before deploying an OpenStack environment, and do not
change them afterwards.
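For example, a sketch with illustrative values:

.. code-block:: yaml

   # Illustrative UID/GID values; pick IDs that are unused on every host
   nova_system_user_uid: 2000
   nova_system_group_gid: 2000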
--------------
.. include:: navigation.txt

View File

@ -0,0 +1,207 @@
`Home <index.html>`_ OpenStack-Ansible Installation Guide
Overriding OpenStack configuration defaults
===========================================
OpenStack has many configuration options available in configuration files
which are in the form of ``.conf`` files (in a standard ``INI`` file format),
policy files (in a standard ``JSON`` format) and also ``YAML`` files.
.. note::
``YAML`` files are only in the ceilometer project at this time.
OpenStack-Ansible provides the facility to override any of the options listed
in the `OpenStack Configuration Reference`_ through a simple set of
configuration entries in ``/etc/openstack_deploy/user_variables.yml``.
This section provides guidance for how to make use of this facility. Further
guidance is available in the developer documentation in the section titled
`Setting overrides in configuration files`_.
.. _OpenStack Configuration Reference: http://docs.openstack.org/draft/config-reference/
.. _Setting overrides in configuration files: ../developer-docs/extending.html#setting-overrides-in-configuration-files
Overriding .conf files
~~~~~~~~~~~~~~~~~~~~~~
The most common use case for implementing overrides is the
``<service>.conf`` files (for example, ``nova.conf``). These files use a
standard ``INI`` file format.
For example, to add the following parameters to ``nova.conf``:
.. code-block:: ini
[DEFAULT]
remove_unused_original_minimum_age_seconds = 43200
[libvirt]
cpu_mode = host-model
disk_cachemodes = file=directsync,block=none
[database]
idle_timeout = 300
max_pool_size = 10
This is accomplished through the use of the following configuration
entry in ``/etc/openstack_deploy/user_variables.yml``:
.. code-block:: yaml
nova_nova_conf_overrides:
DEFAULT:
remove_unused_original_minimum_age_seconds: 43200
libvirt:
cpu_mode: host-model
disk_cachemodes: file=directsync,block=none
database:
idle_timeout: 300
max_pool_size: 10
Overrides may also be applied on a per host basis with the following
configuration in ``/etc/openstack_deploy/openstack_user_config.yml``:
.. code-block:: yaml
compute_hosts:
900089-compute001:
ip: 192.0.2.10
host_vars:
nova_nova_conf_overrides:
DEFAULT:
remove_unused_original_minimum_age_seconds: 43200
libvirt:
cpu_mode: host-model
disk_cachemodes: file=directsync,block=none
database:
idle_timeout: 300
max_pool_size: 10
Use this method for any ``INI`` file format for all OpenStack projects
deployed in OpenStack-Ansible.
To assist you in finding the appropriate variable name to use for
overrides, the general format for the variable name is:
``<service>_<filename>_<file extension>_overrides``.
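For example, to override options in neutron's ``ml2_conf.ini`` file, the
convention yields the ``neutron_ml2_conf_ini_overrides`` variable. The option
shown below is illustrative only:

.. code-block:: yaml

   # <service>_<filename>_<file extension>_overrides
   # neutron + ml2_conf + ini  ->  neutron_ml2_conf_ini_overrides
   neutron_ml2_conf_ini_overrides:
     ml2:
       path_mtu: 9000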
Overriding .json files
~~~~~~~~~~~~~~~~~~~~~~
You can adjust the default policies applied by services in order
to implement access controls which are different from a standard OpenStack
environment. Policy files are in a ``JSON`` format.
For example, you can add the following policy in keystone's ``policy.json``:
.. code-block:: json
{
"identity:foo": "rule:admin_required",
"identity:bar": "rule:admin_required"
}
Accomplish this through the use of the following configuration
entry in ``/etc/openstack_deploy/user_variables.yml``:
.. code-block:: yaml
keystone_policy_overrides:
identity:foo: "rule:admin_required"
identity:bar: "rule:admin_required"
Use this method for any ``JSON`` file format for all OpenStack projects
deployed in OpenStack-Ansible.
To assist you in finding the appropriate variable name to use for
overrides, the general format for the variable name is
``<service>_policy_overrides``.
Currently available overrides
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The following is a list of overrides available:
Galera:
* galera_client_my_cnf_overrides
* galera_my_cnf_overrides
* galera_cluster_cnf_overrides
* galera_debian_cnf_overrides
Ceilometer:
* ceilometer_policy_overrides
* ceilometer_ceilometer_conf_overrides
* ceilometer_api_paste_ini_overrides
* ceilometer_event_definitions_yaml_overrides
* ceilometer_event_pipeline_yaml_overrides
* ceilometer_pipeline_yaml_overrides
Cinder:
* cinder_policy_overrides
* cinder_rootwrap_conf_overrides
* cinder_api_paste_ini_overrides
* cinder_cinder_conf_overrides
Glance:
* glance_glance_api_paste_ini_overrides
* glance_glance_api_conf_overrides
* glance_glance_cache_conf_overrides
* glance_glance_manage_conf_overrides
* glance_glance_registry_paste_ini_overrides
* glance_glance_registry_conf_overrides
* glance_glance_scrubber_conf_overrides
* glance_glance_scheme_json_overrides
* glance_policy_overrides
Heat:
* heat_heat_conf_overrides
* heat_api_paste_ini_overrides
* heat_default_yaml_overrides
* heat_aws_cloudwatch_alarm_yaml_overrides
* heat_aws_rds_dbinstance_yaml_overrides
* heat_policy_overrides
Keystone:
* keystone_keystone_conf_overrides
* keystone_keystone_default_conf_overrides
* keystone_keystone_paste_ini_overrides
* keystone_policy_overrides
Neutron:
* neutron_neutron_conf_overrides
* neutron_ml2_conf_ini_overrides
* neutron_dhcp_agent_ini_overrides
* neutron_api_paste_ini_overrides
* neutron_rootwrap_conf_overrides
* neutron_policy_overrides
* neutron_dnsmasq_neutron_conf_overrides
* neutron_l3_agent_ini_overrides
* neutron_metadata_agent_ini_overrides
* neutron_metering_agent_ini_overrides
Nova:
* nova_nova_conf_overrides
* nova_rootwrap_conf_overrides
* nova_api_paste_ini_overrides
* nova_policy_overrides
Swift:
* swift_swift_conf_overrides
* swift_swift_dispersion_conf_overrides
* swift_proxy_server_conf_overrides
* swift_account_server_conf_overrides
* swift_account_server_replicator_conf_overrides
* swift_container_server_conf_overrides
* swift_container_server_replicator_conf_overrides
* swift_object_server_conf_overrides
* swift_object_server_replicator_conf_overrides
Tempest:
* tempest_tempest_conf_overrides
pip:
* pip_global_conf_overrides
--------------
.. include:: navigation.txt

View File

@ -0,0 +1,42 @@
`Home <index.html>`_ OpenStack-Ansible Installation Guide
Configuring RabbitMQ (optional)
===============================
RabbitMQ provides the messaging broker for various OpenStack services. The
OpenStack-Ansible project configures a plaintext listener on port 5672 and
an SSL/TLS encrypted listener on port 5671.
Customize your RabbitMQ deployment in ``/etc/openstack_deploy/user_variables.yml``.
Add a TLS encrypted listener to RabbitMQ
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The OpenStack-Ansible project provides the ability to secure RabbitMQ
communications with self-signed or user-provided SSL certificates. Refer to
`Securing services with SSL certificates`_ for available configuration
options.
.. _Securing services with SSL certificates: configure-sslcertificates.html
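For example, a minimal sketch of the user-provided certificate variables in
``/etc/openstack_deploy/user_variables.yml`` (the certificate paths are
illustrative):

.. code-block:: yaml

   rabbitmq_user_ssl_cert: /etc/openstack_deploy/ssl/example.com.crt
   rabbitmq_user_ssl_key: /etc/openstack_deploy/ssl/example.com.key
   rabbitmq_user_ssl_ca_cert: /etc/openstack_deploy/ssl/ExampleCA.crt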
Enable encrypted connections to RabbitMQ
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
SSL communication between the various OpenStack services and RabbitMQ is
controlled by the Ansible variable ``rabbitmq_use_ssl``:
.. code-block:: yaml
rabbitmq_use_ssl: true
Setting this variable to ``true`` adjusts the RabbitMQ port to 5671 (the
default SSL/TLS listener port) and enables SSL connectivity between each
OpenStack service and RabbitMQ.
Setting this variable to ``false`` disables SSL encryption between the
OpenStack services and RabbitMQ, and all services use the plaintext
RabbitMQ port, 5672.
--------------
.. include:: navigation.txt

View File

@ -0,0 +1,144 @@
`Home <index.html>`_ OpenStack-Ansible Installation Guide
Securing services with SSL certificates
=======================================
The `OpenStack Security Guide`_ recommends providing secure communication
between various services in an OpenStack deployment.
.. _OpenStack Security Guide: http://docs.openstack.org/security-guide/secure-communication.html
The OpenStack-Ansible project currently offers the ability to configure SSL
certificates for secure communication with the following services:
* HAProxy
* Horizon
* Keystone
* RabbitMQ
For each service, you have the option to use self-signed certificates
generated during the deployment process or provide SSL certificates,
keys, and CA certificates from your own trusted certificate authority. Highly
secured environments use trusted, user-provided certificates for as
many services as possible.
.. note::
Conduct all SSL certificate configuration in
``/etc/openstack_deploy/user_variables.yml`` and not in the playbook
roles themselves.
Self-signed certificates
~~~~~~~~~~~~~~~~~~~~~~~~
Self-signed certificates enable you to start quickly and to
encrypt data in transit. However, they do not provide a high level of trust
for highly secure environments. The use of self-signed certificates is
currently the default in OpenStack-Ansible. When self-signed certificates are
being used, certificate verification must be disabled using the following
user variables depending on your configuration. Add these variables
in ``/etc/openstack_deploy/user_variables.yml``.
.. code-block:: yaml
keystone_service_adminuri_insecure: true
keystone_service_internaluri_insecure: true
Setting self-signed certificate subject data
--------------------------------------------
Change the subject data of any self-signed certificate using
configuration variables. The configuration variable for each service is
``<servicename>_ssl_self_signed_subject``. To change the SSL certificate
subject data for HAProxy, adjust ``/etc/openstack_deploy/user_variables.yml``:
.. code-block:: yaml
haproxy_ssl_self_signed_subject: "/C=US/ST=Texas/L=San Antonio/O=IT/CN=haproxy.example.com"
For more information about the available fields in the certificate subject,
refer to OpenSSL's documentation on the `req subcommand`_.
.. _req subcommand: https://www.openssl.org/docs/manmaster/apps/req.html
Generating and regenerating self-signed certificates
----------------------------------------------------
Self-signed certificates are generated for each service during the first run
of the playbook.
.. note::
Subsequent runs of the playbook do not generate new SSL
certificates unless you set ``<servicename>_ssl_self_signed_regen`` to
``true``.
To force a self-signed certificate to regenerate, you can pass the variable to
``openstack-ansible`` on the command line:
.. code-block:: shell-session
# openstack-ansible -e "horizon_ssl_self_signed_regen=true" os-horizon-install.yml
To force a self-signed certificate to regenerate with every playbook run,
set the appropriate regeneration option to ``true``. For example, if
you have already run the ``os-horizon`` playbook, but you want to regenerate the
self-signed certificate, set the ``horizon_ssl_self_signed_regen`` variable to
``true`` in ``/etc/openstack_deploy/user_variables.yml``:
.. code-block:: yaml
horizon_ssl_self_signed_regen: true
.. note::
Regenerating self-signed certificates replaces the existing
certificates whether they are self-signed or user-provided.
User-provided certificates
~~~~~~~~~~~~~~~~~~~~~~~~~~
You can provide your own SSL certificates, keys, and CA certificates
for added trust in highly secure environments. Acquiring certificates from a
trusted certificate authority is outside the scope of this document, but the
`Certificate Management`_ section of the Linux Documentation Project explains
how to create your own certificate authority and sign certificates.
.. _Certificate Management: http://www.tldp.org/HOWTO/SSL-Certificates-HOWTO/c118.html
Deploying user-provided SSL certificates is a three step process:
#. Copy your SSL certificate, key, and CA certificate to the deployment host.
#. Specify the path to your SSL certificate, key, and CA certificate in
``/etc/openstack_deploy/user_variables.yml``.
#. Run the playbook for that service.
For example, to deploy user-provided certificates for RabbitMQ,
copy the certificates to the deployment host, edit
``/etc/openstack_deploy/user_variables.yml`` and set the following three
variables:
.. code-block:: yaml
rabbitmq_user_ssl_cert: /tmp/example.com.crt
rabbitmq_user_ssl_key: /tmp/example.com.key
rabbitmq_user_ssl_ca_cert: /tmp/ExampleCA.crt
Run the playbook to apply the certificates:
.. code-block:: shell-session
# openstack-ansible rabbitmq-install.yml
The playbook deploys your user-provided SSL certificate, key, and CA
certificate to each RabbitMQ container.
The process is identical for the other services. Replace
``rabbitmq`` in the configuration variables shown above with ``horizon``,
``haproxy``, or ``keystone`` to deploy user-provided certificates to those
services.
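For example, a sketch of the equivalent horizon variables (the paths are
illustrative):

.. code-block:: yaml

   horizon_user_ssl_cert: /tmp/example.com.crt
   horizon_user_ssl_key: /tmp/example.com.key
   horizon_user_ssl_ca_cert: /tmp/ExampleCA.crt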
--------------
.. include:: navigation.txt

View File

@ -0,0 +1,38 @@
`Home <index.html>`_ OpenStack-Ansible Installation Guide
Add to existing deployment
==========================
Complete the following procedure to deploy swift on an
existing deployment.
#. `The section called "Configure and mount storage
devices" <configure-swift-devices.html>`_
#. `The section called "Configure an Object Storage
deployment" <configure-swift-config.html>`_
#. Optionally, allow all keystone users to use swift by setting
``swift_allow_all_users`` in the ``user_variables.yml`` file to
``True`` (see the sketch after this procedure). Any users with the
``_member_`` role (all authorized keystone users) can create containers
and upload objects to swift.
If this value is ``False``, by default only users with the
``admin`` or ``swiftoperator`` role can create containers or
manage tenants.
When the backend type for glance is set to
``swift``, glance can access the swift cluster
regardless of whether this value is ``True`` or ``False``.
#. Run the swift play:
.. code-block:: shell-session
# cd /opt/openstack-ansible/playbooks
# openstack-ansible os-swift-install.yml
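As a minimal sketch, the optional setting from the third step would be added
to ``/etc/openstack_deploy/user_variables.yml`` as follows:

.. code-block:: yaml

   # Allow all authorized keystone users to create containers and upload objects
   swift_allow_all_users: True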
--------------
.. include:: navigation.txt

View File

@ -0,0 +1,325 @@
`Home <index.html>`_ OpenStack-Ansible Installation Guide
Configuring the service
=======================
**Procedure 5.2. Updating the Object Storage configuration swift.yml file**
#. Copy the ``/etc/openstack_deploy/conf.d/swift.yml.example`` file to
``/etc/openstack_deploy/conf.d/swift.yml``:
.. code-block:: shell-session
# cp /etc/openstack_deploy/conf.d/swift.yml.example \
/etc/openstack_deploy/conf.d/swift.yml
#. Update the global override values:
.. code-block:: yaml
# global_overrides:
# swift:
# part_power: 8
# weight: 100
# min_part_hours: 1
# repl_number: 3
# storage_network: 'br-storage'
# replication_network: 'br-repl'
# drives:
# - name: sdc
# - name: sdd
# - name: sde
# - name: sdf
# mount_point: /srv/node
# account:
# container:
# storage_policies:
# - policy:
# name: gold
# index: 0
# default: True
# - policy:
# name: silver
# index: 1
# repl_number: 3
# deprecated: True
# statsd_host: statsd.example.com
# statsd_port: 8125
# statsd_metric_prefix:
# statsd_default_sample_rate: 1.0
# statsd_sample_rate_factor: 1.0
``part_power``
Set the partition power value based on the total amount of
storage the entire ring uses.
Multiply the maximum number of drives ever used with the swift
installation by 100 and round that value up to the
nearest power of two. For example, a maximum of six drives,
times 100, equals 600. The nearest power of two above 600 is 1024,
which is two to the power of ten, so the partition power is ten. The
partition power cannot be changed after the swift rings
are built.
``weight``
The default weight is 100. If the drives are different sizes, set
the weight value to avoid uneven distribution of data. For
example, a 1 TB disk would have a weight of 100, while a 2 TB
drive would have a weight of 200.
``min_part_hours``
The default value is 1. Set the minimum partition hours to the
amount of time to lock a partition's replicas after moving a partition.
Moving multiple replicas at the same time
makes data inaccessible. This value can be set separately in the
swift, container, account, and policy sections with the value in
lower sections superseding the value in the swift section.
``repl_number``
The default value is 3. Set the replication number to the number
of replicas of each object. This value can be set separately in
the swift, container, account, and policy sections with the value
in the more granular sections superseding the value in the swift
section.
``storage_network``
By default, the swift services listen on the default
management IP. Optionally, specify the interface of the storage
network.
If the ``storage_network`` is not set, but the ``storage_ips``
per host are set (or the ``storage_ip`` is not on the
``storage_network`` interface) the proxy server is unable
to connect to the storage services.
``replication_network``
Optionally, specify a dedicated replication network interface, so
dedicated replication can be set up. If this value is not
specified, no dedicated ``replication_network`` is set.
Replication does not work properly if the ``repl_ip`` is not set on
the ``replication_network`` interface.
``drives``
Set the default drives per host. This is useful when all hosts
have the same drives. These can be overridden on a per host
basis.
``mount_point``
Set the ``mount_point`` value to the location where the swift
drives are mounted. For example, with a mount point of ``/srv/node``
and a drive of ``sdc``, a drive is mounted at ``/srv/node/sdc`` on the
``swift_host``. This can be overridden on a per-host basis.
``storage_policies``
Storage policies determine on which hardware data is stored, how
the data is stored across that hardware, and in which region the
data resides. Each storage policy must have an unique ``name``
and a unique ``index``. There must be a storage policy with an
index of 0 in the ``swift.yml`` file to use any legacy containers
created before storage policies were instituted.
``default``
Set the default value to ``yes`` for at least one policy. This is
the default storage policy for any non-legacy containers that are
created.
``deprecated``
Set the deprecated value to ``yes`` to turn off storage policies.
For account and container rings, ``min_part_hours`` and
``repl_number`` are the only values that can be set. Setting them
in this section overrides the defaults for the specific ring.
``statsd_host``
Swift supports sending extra metrics to a ``statsd`` host. This option
sets the ``statsd`` host to receive ``statsd`` metrics. Specifying
this here applies to all hosts in the cluster.
If ``statsd_host`` is left blank or omitted, then ``statsd`` metrics are
disabled.
All ``statsd`` settings can be overridden, or specified deeper in the
structure, if you want to only send ``statsd`` metrics from certain hosts.
``statsd_port``
Optionally, use this to specify the port of the ``statsd`` server you are
sending metrics to. Defaults to 8125 if omitted.
``statsd_default_sample_rate`` and ``statsd_sample_rate_factor``
These ``statsd``-related options are more complex and are
used to tune how many samples are sent to ``statsd``. Omit them unless
you need to tweak these settings; if so, first read:
http://docs.openstack.org/developer/swift/admin_guide.html
#. Update the swift proxy hosts values:
.. code-block:: yaml
# swift-proxy_hosts:
# infra-node1:
# ip: 192.0.2.1
# statsd_metric_prefix: proxy01
# infra-node2:
# ip: 192.0.2.2
# statsd_metric_prefix: proxy02
# infra-node3:
# ip: 192.0.2.3
# statsd_metric_prefix: proxy03
``swift-proxy_hosts``
Set the IP address of the hosts to which Ansible connects
in order to deploy the ``swift-proxy`` containers. The ``swift-proxy_hosts``
value matches the infra nodes.
``statsd_metric_prefix``
This metric is optional, and is only evaluated if you have defined
``statsd_host`` somewhere. It allows you to define a prefix to add to
all ``statsd`` metrics sent from this host. If omitted, the node name is used.
#. Update the swift hosts values:
.. code-block:: yaml
# swift_hosts:
# swift-node1:
# ip: 192.0.2.4
# container_vars:
# swift_vars:
# zone: 0
# statsd_metric_prefix: node1
# swift-node2:
# ip: 192.0.2.5
# container_vars:
# swift_vars:
# zone: 1
# statsd_metric_prefix: node2
# swift-node3:
# ip: 192.0.2.6
# container_vars:
# swift_vars:
# zone: 2
# statsd_metric_prefix: node3
# swift-node4:
# ip: 192.0.2.7
# container_vars:
# swift_vars:
# zone: 3
# swift-node5:
# ip: 192.0.2.8
# container_vars:
# swift_vars:
# storage_ip: 198.51.100.8
# repl_ip: 203.0.113.8
# zone: 4
# region: 3
# weight: 200
# groups:
# - account
# - container
# - silver
# drives:
# - name: sdb
# storage_ip: 198.51.100.9
# repl_ip: 203.0.113.9
# weight: 75
# groups:
# - gold
# - name: sdc
# - name: sdd
# - name: sde
# - name: sdf
``swift_hosts``
Specify the hosts to be used as the storage nodes. The ``ip`` is
the address of the host to which Ansible connects. Set the name
and IP address of each swift host. The ``swift_hosts``
section is not required.
``swift_vars``
Contains the swift host specific values.
``storage_ip`` and ``repl_ip``
Base these values on the IP addresses of the host's
``storage_network`` or ``replication_network``. For example, if
the ``storage_network`` is ``br-storage`` and host1 has an IP
address of 1.1.1.1 on ``br-storage``, then this is the IP address
in use for ``storage_ip``. If only the ``storage_ip``
is specified, then the ``repl_ip`` defaults to the ``storage_ip``.
If neither are specified, both default to the host IP
address.
Overriding these values on a host or drive basis can cause
problems if the IP address that the service listens on is based
on a specified ``storage_network`` or ``replication_network`` and
the ring is set to a different IP address.
``zone``
The default is 0. Optionally, set the swift zone for the
ring.
``region``
Optionally, set the swift region for the ring.
``weight``
The default weight is 100. If the drives are different sizes, set
the weight value to avoid uneven distribution of data. This value
can be specified on a host or drive basis (if specified at both,
the drive setting takes precedence).
``groups``
Set the groups to list the rings to which a host's drive belongs.
This can be set on a per drive basis which overrides the host
setting.
``drives``
Set the names of the drives on the swift host. Specify at least
one name.
``statsd_metric_prefix``
This metric is optional, and only evaluates if ``statsd_host`` is defined
somewhere. This allows you to define a prefix to add to
all ``statsd`` metrics sent from the hose. If omitted, use the node name.
In the following example, ``swift-node5`` shows values in the
``swift_hosts`` section that override the global values. Groups
are set, which overrides the global settings for drive ``sdb``. The
weight is overridden for the host and specifically adjusted on drive
``sdb``. Also, the ``storage_ip`` and ``repl_ip`` are set differently
for ``sdb``.
.. code-block:: yaml
# swift-node5:
# ip: 192.0.2.8
# container_vars:
# swift_vars:
# storage_ip: 198.51.100.8
# repl_ip: 203.0.113.8
# zone: 4
# region: 3
# weight: 200
# groups:
# - account
# - container
# - silver
# drives:
# - name: sdb
# storage_ip: 198.51.100.9
# repl_ip: 203.0.113.9
# weight: 75
# groups:
# - gold
# - name: sdc
# - name: sdd
# - name: sde
# - name: sdf
#. Ensure the ``swift.yml`` is in the ``/etc/openstack_deploy/conf.d/``
folder.
--------------
.. include:: navigation.txt

View File

@ -0,0 +1,104 @@
`Home <index.html>`_ OpenStack-Ansible Installation Guide
Storage devices
===============
This section offers a set of prerequisite instructions for setting up
Object Storage (swift) storage devices. The storage devices must be set up
before installing swift.
**Procedure 5.1. Configuring and mounting storage devices**
We recommend a minimum of three swift hosts,
each with five storage disks, for Object Storage. The example commands in this
procedure use the storage devices ``sdc`` through ``sdg``.
#. Determine the storage devices on the node to be used for swift.
#. Format each device on the node used for storage with XFS. While
formatting the devices, add a unique label for each device.
Without labels, a failed drive causes mount points to shift and
data to become inaccessible.
For example, create the file systems on the devices using the
``mkfs`` command:
.. code-block:: shell-session
# apt-get install xfsprogs
# mkfs.xfs -f -i size=1024 -L sdc /dev/sdc
# mkfs.xfs -f -i size=1024 -L sdd /dev/sdd
# mkfs.xfs -f -i size=1024 -L sde /dev/sde
# mkfs.xfs -f -i size=1024 -L sdf /dev/sdf
# mkfs.xfs -f -i size=1024 -L sdg /dev/sdg
#. Add the mount locations to the ``fstab`` file so that the storage
devices are remounted on boot. The following example mount options
are recommended when using XFS:
.. code-block:: shell-session
LABEL=sdc /srv/node/sdc xfs noatime,nodiratime,nobarrier,logbufs=8,noauto 0 0
LABEL=sdd /srv/node/sdd xfs noatime,nodiratime,nobarrier,logbufs=8,noauto 0 0
LABEL=sde /srv/node/sde xfs noatime,nodiratime,nobarrier,logbufs=8,noauto 0 0
LABEL=sdf /srv/node/sdf xfs noatime,nodiratime,nobarrier,logbufs=8,noauto 0 0
LABEL=sdg /srv/node/sdg xfs noatime,nodiratime,nobarrier,logbufs=8,noauto 0 0
#. Create the mount points for the devices using the ``mkdir`` command:
.. code-block:: shell-session
# mkdir -p /srv/node/sdc
# mkdir -p /srv/node/sdd
# mkdir -p /srv/node/sde
# mkdir -p /srv/node/sdf
# mkdir -p /srv/node/sdg
The mount point is referenced as the ``mount_point`` parameter in
the ``swift.yml`` file (``/etc/openstack_deploy/conf.d/swift.yml``). Mount the
storage devices:
.. code-block:: shell-session
# mount /srv/node/sdc
# mount /srv/node/sdd
# mount /srv/node/sde
# mount /srv/node/sdf
# mount /srv/node/sdg
To view an annotated example of the ``swift.yml`` file, see `Appendix A,
*OSA configuration files* <app-configfiles.html>`_.
The following table shows the mounted devices:
+--------------------------------------+--------------------------------------+
| Device | Mount location |
+======================================+======================================+
| /dev/sdc | /srv/node/sdc |
+--------------------------------------+--------------------------------------+
| /dev/sdd | /srv/node/sdd |
+--------------------------------------+--------------------------------------+
| /dev/sde | /srv/node/sde |
+--------------------------------------+--------------------------------------+
| /dev/sdf | /srv/node/sdf |
+--------------------------------------+--------------------------------------+
| /dev/sdg | /srv/node/sdg |
+--------------------------------------+--------------------------------------+
Table: Table 5.1. Mounted devices
The corresponding entry in ``swift.yml``:
.. code-block:: yaml
# drives:
# - name: sdc
# - name: sdd
# - name: sde
# - name: sdf
# - name: sdg
# mount_point: /srv/node
--------------
.. include:: navigation.txt

View File

@ -0,0 +1,69 @@
`Home <index.html>`_ OpenStack-Ansible Installation Guide
Integrate with the Image Service (glance)
=========================================
As an option, you can create images in Image Service (glance) and
store them using Object Storage (swift).
If there is an existing glance backend (for example,
cloud files) but you want to use swift as the glance backend instead,
you can re-add any images from glance after moving
to swift. Images are no longer available if the glance variables are
changed after you begin using swift.
**Procedure 5.3. Integrating Object Storage with Image Service**
This procedure requires the following:
- OSA Kilo (v11)
- Object Storage v2.2.0
#. Update the glance options in the
``/etc/openstack_deploy/user_variables.yml`` file:
.. code-block:: yaml
# Glance Options
glance_default_store: swift
glance_swift_store_auth_address: '{{ auth_identity_uri }}'
glance_swift_store_container: glance_images
glance_swift_store_endpoint_type: internalURL
glance_swift_store_key: '{{ glance_service_password }}'
glance_swift_store_region: RegionOne
glance_swift_store_user: 'service:glance'
- ``glance_default_store``: Set the default store to ``swift``.
- ``glance_swift_store_auth_address``: Set to the local
authentication address using the
``'{{ auth_identity_uri }}'`` variable.
- ``glance_swift_store_container``: Set the container name.
- ``glance_swift_store_endpoint_type``: Set the endpoint type to
``internalURL``.
- ``glance_swift_store_key``: Set the glance password using
the ``{{ glance_service_password }}`` variable.
- ``glance_swift_store_region``: Set the region. The default value
is ``RegionOne``.
- ``glance_swift_store_user``: Set the tenant and user name to
``'service:glance'``.
#. Rerun the glance configuration plays by running the glance playbook with
the ``glance-config`` tag:
.. code-block:: shell-session
# cd /opt/openstack-ansible/playbooks
# openstack-ansible os-glance-install.yml --tags "glance-config"
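As an optional check after the playbook completes, you can upload a test
image and confirm that objects appear in the configured container. The
image file name below is only a placeholder, and the container name
assumes the ``glance_images`` value set above; run the commands from a
host with the OpenStack and swift clients and ``openrc`` credentials
available (for example, the utility container):
.. code-block:: shell-session
# source /root/openrc
# openstack image create --disk-format qcow2 --container-format bare \
  --file ./test-image.qcow2 test-image
# swift list glance_images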
--------------
.. include:: navigation.txt

View File

@ -0,0 +1,51 @@
`Home <index.html>`_ OpenStack-Ansible Installation Guide
Storage policies
================
Storage policies allow segmenting the cluster for various purposes
through the creation of multiple object rings. Using policies, different
devices can belong to different rings with varying levels of
replication. By supporting multiple object rings, swift can
segregate the objects within a single cluster.
Use storage policies for the following situations:
- Differing levels of replication: A provider may want to offer 2x
replication and 3x replication, but does not want to maintain two
separate clusters. They can set up a 2x policy and a 3x policy and
assign the nodes to their respective rings.
- Improving performance: Just as solid state drives (SSD) can be used
as the exclusive members of an account or database ring, an SSD-only
object ring can be created to implement a low-latency or high
performance policy.
- Collecting nodes into groups: Different object rings can have
different physical servers so that objects in specific storage
policies are always placed in a specific data center or geography.
- Differing storage implementations: A policy can be used to direct
traffic to collected nodes that use a different disk file (for
example: Kinetic, GlusterFS).
Most storage clusters do not require more than one storage policy. The
following problems can occur if using multiple storage policies per
cluster:
- Creating a second storage policy without any specified drives (all
drives are part of only the account, container, and default storage
policy groups) creates an empty ring for that storage policy.
- A non-default storage policy is used only if it is specified when
creating a container, by passing the ``X-Storage-Policy: <policy-name>``
header. After creation, the container keeps the policy it was created
with, while other containers continue using the default or another
specified storage policy (see the example after this list).
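For example, with the ``swift`` command-line client, a container can be
created against a hypothetical policy named ``gold`` and the assignment
verified afterwards (the policy and container names are placeholders):
.. code-block:: shell-session
# swift post -H "X-Storage-Policy: gold" mycontainer
# swift stat mycontainer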
For more information about storage policies, see: `Storage
Policies <http://docs.openstack.org/developer/swift/overview_policies.html>`_
--------------
.. include:: navigation.txt

View File

@ -0,0 +1,63 @@
`Home <index.html>`_ OpenStack-Ansible Installation Guide
Configuring the Object Storage (swift) service (optional)
=========================================================
.. toctree::
configure-swift-devices.rst
configure-swift-config.rst
configure-swift-glance.rst
configure-swift-add.rst
configure-swift-policies.rst
Object Storage (swift) is a multi-tenant object storage system. It is
highly scalable, can manage large amounts of unstructured data, and
provides a RESTful HTTP API.
The following procedure describes how to set up storage devices and
modify the Object Storage configuration files to enable swift
usage.
#. `The section called "Configure and mount storage
devices" <configure-swift-devices.html>`_
#. `The section called "Configure an Object Storage
deployment" <configure-swift-config.html>`_
#. Optionally, allow all Identity (keystone) users to use swift by setting
``swift_allow_all_users`` in the ``user_variables.yml`` file to
``True``. Any users with the ``_member_`` role (all authorized
keystone users) can create containers and upload objects
to Object Storage.
If this value is ``False``, then by default, only users with the
admin or ``swiftoperator`` role are allowed to create containers or
manage tenants.
When the backend type for the Image Service (glance) is set to
``swift``, glance can access the swift cluster
regardless of whether this value is ``True`` or ``False``.
Overview
~~~~~~~~
Object Storage (swift) is configured using the
``/etc/openstack_deploy/conf.d/swift.yml`` file and the
``/etc/openstack_deploy/user_variables.yml`` file.
When installing swift, use the group variables in the
``/etc/openstack_deploy/conf.d/swift.yml`` file for the Ansible
playbooks. Some variables cannot
be changed after they are set, while some changes require re-running the
playbooks. The values in the ``swift_hosts`` section supersede values in
the ``swift`` section.
To view the configuration files, including information about which
variables are required and which are optional, see `Appendix A, *OSA
configuration files* <app-configfiles.html>`_.
--------------
.. include:: navigation.txt

View File

@ -0,0 +1,62 @@
`Home <index.html>`_ OpenStack-Ansible Installation Guide
Chapter 4. Deployment configuration
-----------------------------------
.. toctree::
configure-initial.rst
configure-networking.rst
configure-hostlist.rst
configure-creds.rst
configure-hypervisor.rst
configure-nova.rst
configure-ironic.rst
configure-glance.rst
configure-cinder.rst
configure-swift.rst
configure-haproxy.rst
configure-horizon.rst
configure-rabbitmq.rst
configure-ceilometer.rst
configure-aodh.rst
configure-keystone.rst
configure-network-services.rst
configure-openstack.rst
configure-sslcertificates.rst
configure-configurationintegrity.rst
configure-federation.rst
configure-ceph.rst
**Figure 4.1. Installation work flow**
.. image:: figures/workflow-configdeployment.png
Ansible references a handful of files containing mandatory and optional
configuration directives. These files must be modified to define the
target environment before running the Ansible playbooks. Perform the
following tasks:
- Configure target host networking to define bridge interfaces and
networks
- Configure a list of target hosts on which to install the software
- Configure virtual and physical network relationships for OpenStack
Networking (neutron)
- (Optional) Configure the hypervisor
- (Optional) Configure Block Storage (cinder) to use the NetApp back
end
- (Optional) Configure Block Storage (cinder) backups.
- (Optional) Configure Block Storage availability zones
- Configure passwords for all services
--------------
.. include:: navigation.txt

View File

@ -0,0 +1,87 @@
`Home <index.html>`_ OpenStack-Ansible Installation Guide
Chapter 2. Deployment host
==========================
**Figure 2.1. Installation work flow**
.. image:: figures/workflow-deploymenthost.png
The OSA installation process recommends one deployment host. The
deployment host contains Ansible and orchestrates the OpenStack-Ansible
installation on the target hosts. We recommend using separate deployment and
target hosts. You could alternatively use one of the target hosts, preferably
one of the infrastructure variants, as the deployment host. To use a
deployment host as a target host, follow the steps in `Chapter 3, Target
hosts <targethosts.html>`_ on the deployment host.
Installing the operating system
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Install the `Ubuntu Server 14.04 (Trusty Tahr) LTS 64-bit
<http://releases.ubuntu.com/14.04/>`_ operating system on the
deployment host. Configure at least one network interface to
access the Internet or suitable local repositories.
Configuring the operating system
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Install additional software packages and configure NTP.
#. Install additional software packages if not already installed
during operating system installation:
.. code-block:: shell-session
# apt-get install aptitude build-essential git ntp ntpdate \
openssh-server python-dev sudo
#. Configure NTP to synchronize with a suitable time source.
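One way to confirm that time synchronization is working, assuming the
``ntp`` package installed above, is to query the daemon for its peers:
.. code-block:: shell-session
# ntpq -p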
Configuring the network
~~~~~~~~~~~~~~~~~~~~~~~
Ansible deployments fail if the deployment server is unable to SSH to the containers.
Configure the deployment host to be on the same network designated for container management.
This configuration reduces the rate of failure due to connectivity issues.
The following network information is used as an example:
Container management: 172.29.236.0/22 (VLAN 10)
Select an IP from this range to assign to the deployment host.
Installing source and dependencies
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Install the source and dependencies for the deployment host.
#. Clone the OSA repository into the ``/opt/openstack-ansible``
directory:
.. code-block:: shell-session
# git clone -b TAG https://github.com/openstack/openstack-ansible.git /opt/openstack-ansible
Replace ``TAG`` with the current stable release tag.
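If you are unsure which release tags are available, you can list them
directly from the repository before cloning:
.. code-block:: shell-session
# git ls-remote --tags https://github.com/openstack/openstack-ansible.git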
#. Change to the ``/opt/openstack-ansible`` directory, and run the
Ansible bootstrap script:
.. code-block:: shell-session
# scripts/bootstrap-ansible.sh
Configuring Secure Shell (SSH) keys
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Ansible uses Secure Shell (SSH) with public key authentication for
connectivity between the deployment and target hosts. To reduce user
interaction during Ansible operations, do not include pass phrases with
key pairs. However, if a pass phrase is required, consider using the
``ssh-agent`` and ``ssh-add`` commands to temporarily store the
pass phrase before performing Ansible operations.
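For example, a key pair without a pass phrase can be created on the
deployment host and copied to a target host as follows; the target
address is only an example taken from the container management range
above:
.. code-block:: shell-session
# ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
# ssh-copy-id root@172.29.236.101
If a pass phrase is required after all, running ``eval $(ssh-agent)``
followed by ``ssh-add`` caches it for the remainder of the shell session.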
--------------
.. include:: navigation.txt

View File

@ -0,0 +1,72 @@
============================================
OpenStack-Ansible Installation Guide - DRAFT
============================================
This is a draft revision of the install guide for Newton
and is currently under development.
`Home <index.html>`_ OpenStack-Ansible Installation Guide
Overview
^^^^^^^^
.. toctree::
overview.rst
Deployment host
^^^^^^^^^^^^^^^
.. toctree::
deploymenthost.rst
Target hosts
^^^^^^^^^^^^
.. toctree::
targethosts.rst
Configuration
^^^^^^^^^^^^^
.. toctree::
configure.rst
Installation
^^^^^^^^^^^^
.. toctree::
install-foundation.rst
install-infrastructure.rst
install-openstack.rst
Operations
^^^^^^^^^^
.. toctree::
ops.rst
Appendices
^^^^^^^^^^
.. toctree::
app-configfiles.rst
app-resources.rst
app-minorupgrade.rst
app-tips.rst
app-plumgrid.rst
app-nuage.rst
app-no-internet-connectivity.rst
app-custom-layouts.rst

View File

@ -0,0 +1,77 @@
`Home <index.html>`_ OpenStack-Ansible Installation Guide
===============================
Chapter 5. Foundation playbooks
===============================
**Figure 5.1. Installation work flow**
.. image:: figures/workflow-foundationplaybooks.png
The main Ansible foundation playbook prepares the target hosts for
infrastructure and OpenStack services and performs the following
operations:
- Performs deployment host initial setup
- Builds containers on target hosts
- Restarts containers on target hosts
- Installs common components into containers on target hosts
Running the foundation playbook
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. note::
Before continuing, validate the configuration files using the
guidance in `Checking the integrity of your configuration files`_.
.. _Checking the integrity of your configuration files: ../install-guide/configure-configurationintegrity.html
#. Change to the ``/opt/openstack-ansible/playbooks`` directory.
#. Run the host setup playbook:
.. code-block:: shell-session
# openstack-ansible setup-hosts.yml
Confirm satisfactory completion with zero items unreachable or
failed:
.. code-block:: shell-session
PLAY RECAP ********************************************************************
...
deployment_host : ok=18 changed=11 unreachable=0 failed=0
#. If using HAProxy:
.. note::
To run HAProxy on multiple hosts, use ``keepalived`` to make HAProxy highly
available. The ``keepalived`` role is downloaded during the ``bootstrap-ansible``
stage. If it is not present, re-run one of the following commands before
running the HAProxy playbook:
.. code-block:: shell-session
# pushd /opt/openstack-ansible; scripts/bootstrap-ansible.sh; popd
or
.. code-block:: shell-session
# ansible-galaxy install -r ../ansible-role-requirements.yml
Run the playbook to deploy HAProxy:
.. code-block:: shell-session
# openstack-ansible haproxy-install.yml
--------------
.. include:: navigation.txt

View File

@ -0,0 +1,96 @@
`Home <index.html>`_ OpenStack-Ansible Installation Guide
===================================
Chapter 6. Infrastructure playbooks
===================================
**Figure 6.1. Installation workflow**
.. image:: figures/workflow-infraplaybooks.png
The main Ansible infrastructure playbook installs infrastructure
services and performs the following operations:
- Installs Memcached
- Installs the repository server
- Installs Galera
- Installs RabbitMQ
- Installs Rsyslog
- Configures Rsyslog
Running the infrastructure playbook
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. note::
Before continuing, validate the configuration files using the
guidance in `Checking the integrity of your configuration files`_.
.. _Checking the integrity of your configuration files: ../install-guide/configure-configurationintegrity.html
#. Change to the ``/opt/openstack-ansible/playbooks`` directory.
#. Run the infrastructure setup playbook:
.. code-block:: shell-session
# openstack-ansible setup-infrastructure.yml
Confirm satisfactory completion with zero items unreachable or
failed:
.. code-block:: shell-session
PLAY RECAP ********************************************************************
...
deployment_host : ok=27 changed=0 unreachable=0 failed=0
Verify the database cluster
~~~~~~~~~~~~~~~~~~~~~~~~~~~
#. Change to the ``/opt/openstack-ansible/playbooks`` directory.
#. Execute the following command to show the current cluster state:
.. code-block:: shell-session
# ansible galera_container -m shell -a "mysql \
-h localhost -e 'show status like \"%wsrep_cluster_%\";'"
Example output:
.. code-block:: shell-session
node3_galera_container-3ea2cbd3 | success | rc=0 >>
Variable_name Value
wsrep_cluster_conf_id 17
wsrep_cluster_size 3
wsrep_cluster_state_uuid 338b06b0-2948-11e4-9d06-bef42f6c52f1
wsrep_cluster_status Primary
node2_galera_container-49a47d25 | success | rc=0 >>
Variable_name Value
wsrep_cluster_conf_id 17
wsrep_cluster_size 3
wsrep_cluster_state_uuid 338b06b0-2948-11e4-9d06-bef42f6c52f1
wsrep_cluster_status Primary
node4_galera_container-76275635 | success | rc=0 >>
Variable_name Value
wsrep_cluster_conf_id 17
wsrep_cluster_size 3
wsrep_cluster_state_uuid 338b06b0-2948-11e4-9d06-bef42f6c52f1
wsrep_cluster_status Primary
The ``wsrep_cluster_size`` field indicates the number of nodes
in the cluster, and the ``wsrep_cluster_status`` field indicates
``Primary``.
--------------
.. include:: navigation.txt

View File

@ -0,0 +1,132 @@
`Home <index.html>`_ OpenStack-Ansible Installation Guide
==============================
Chapter 7. OpenStack playbooks
==============================
**Figure 7.1. Installation work flow**
.. image:: figures/workflow-openstackplaybooks.png
The ``setup-openstack.yml`` playbook installs OpenStack services and
performs the following operations:
- Installs Identity (keystone)
- Installs the Image service (glance)
- Installs Block Storage (cinder)
- Installs Compute (nova)
- Installs Networking (neutron)
- Installs Orchestration (heat)
- Installs Dashboard (horizon)
- Installs Telemetry (ceilometer and aodh)
- Installs Object Storage (swift)
- Installs Bare Metal (ironic)
Running the OpenStack playbook
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#. Change to the ``/opt/openstack-ansible/playbooks`` directory.
#. Run the OpenStack setup playbook:
.. code-block:: shell-session
# openstack-ansible setup-openstack.yml
Confirm satisfactory completion with zero items unreachable or
failed.
Utility container
~~~~~~~~~~~~~~~~~
The utility container provides a space where miscellaneous tools and
software are installed. Tools and objects are placed in a
utility container if they do not require a dedicated container or if it
is impractical to create a new container for a single tool or object.
Utility containers are also used when tools cannot be installed
directly onto a host.
For example, the tempest playbooks are installed on the utility
container since tempest testing does not need a container of its own.
Verifying OpenStack operation
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Verify basic operation of the OpenStack API and dashboard.
**Procedure 7.1. Verifying the API**
The utility container provides a CLI environment for additional
configuration and testing.
#. Determine the utility container name:
.. code-block:: shell-session
# lxc-ls | grep utility
infra1_utility_container-161a4084
#. Access the utility container:
.. code-block:: shell-session
# lxc-attach -n infra1_utility_container-161a4084
#. Source the ``admin`` tenant credentials:
.. code-block:: shell-session
# source /root/openrc
#. Run an OpenStack command that uses one or more APIs. For example:
.. code-block:: shell-session
# openstack user list
+----------------------------------+--------------------+
| ID | Name |
+----------------------------------+--------------------+
| 08fe5eeeae314d578bba0e47e7884f3a | alt_demo |
| 0aa10040555e47c09a30d2240e474467 | dispersion |
| 10d028f9e47b4d1c868410c977abc3df | glance |
| 249f9ad93c024f739a17ca30a96ff8ee | demo |
| 39c07b47ee8a47bc9f9214dca4435461 | swift |
| 3e88edbf46534173bc4fd8895fa4c364 | cinder |
| 41bef7daf95a4e72af0986ec0583c5f4 | neutron |
| 4f89276ee4304a3d825d07b5de0f4306 | admin |
| 943a97a249894e72887aae9976ca8a5e | nova |
| ab4f0be01dd04170965677e53833e3c3 | stack_domain_admin |
| ac74be67a0564722b847f54357c10b29 | heat |
| b6b1d5e76bc543cda645fa8e778dff01 | ceilometer |
| dc001a09283a404191ff48eb41f0ffc4 | aodh |
| e59e4379730b41209f036bbeac51b181 | keystone |
+----------------------------------+--------------------+
**Procedure 7.2. Verifying the dashboard**
#. With a web browser, access the dashboard using the external load
balancer IP address defined by the ``external_lb_vip_address`` option
in the ``/etc/openstack_deploy/openstack_user_config.yml`` file. The
dashboard uses HTTPS on port 443.
#. Authenticate using the username ``admin`` and password defined by the
``keystone_auth_admin_password`` option in the
``/etc/openstack_deploy/user_variables.yml`` file.
.. note::
Only users with administrator privileges can upload public images
using the dashboard or CLI.
--------------
.. include:: navigation.txt

View File

@ -0,0 +1,4 @@
* `Documentation Home <../index.html>`_
* `Installation Guide <index.html>`_
* `Upgrade Guide <../upgrade-guide/index.html>`_
* `Developer Documentation <../developer-docs/index.html>`_

View File

@ -0,0 +1,34 @@
`Home <index.html>`_ OpenStack-Ansible Installation Guide
=====================
Adding a compute host
=====================
Use the following procedure to add a compute host to an operational
cluster.
#. Configure the host as a target host. See `Chapter 3, *Target
hosts* <targethosts.html>`_ for more information.
#. Edit the ``/etc/openstack_deploy/openstack_user_config.yml`` file and
add the host to the ``compute_hosts`` stanza.
If necessary, also modify the ``used_ips`` stanza.
#. If the cluster is utilizing Telemetry/Metering (Ceilometer),
edit the ``/etc/openstack_deploy/conf.d/ceilometer.yml`` file and add the host to
the ``metering-compute_hosts`` stanza.
#. Run the following commands to add the host. Replace
``NEW_HOST_NAME`` with the name of the new host.
.. code-block:: shell-session
# cd /opt/openstack-ansible/playbooks
# openstack-ansible setup-hosts.yml --limit NEW_HOST_NAME
# openstack-ansible setup-openstack.yml --skip-tags nova-key-distribute --limit NEW_HOST_NAME
# openstack-ansible setup-openstack.yml --tags nova-key --limit compute_hosts
--------------
.. include:: navigation.txt

View File

@ -0,0 +1,308 @@
`Home <index.html>`_ OpenStack-Ansible Installation Guide
=======================
Galera cluster recovery
=======================
Run the ``galera-install`` playbook with the ``galera-bootstrap`` tag to
automatically recover a node or an entire environment.
#. Run the following Ansible command to bootstrap the cluster:
.. code-block:: shell-session
# openstack-ansible galera-install.yml --tags galera-bootstrap
The cluster comes back online after completion of this command.
Single-node failure
~~~~~~~~~~~~~~~~~~~
If a single node fails, the other nodes maintain quorum and
continue to process SQL requests.
#. Run the following Ansible command to determine the failed node:
.. code-block:: shell-session
# ansible galera_container -m shell -a "mysql -h localhost \
-e 'show status like \"%wsrep_cluster_%\";'"
node3_galera_container-3ea2cbd3 | FAILED | rc=1 >>
ERROR 2002 (HY000): Can't connect to local MySQL server through
socket '/var/run/mysqld/mysqld.sock' (111)
node2_galera_container-49a47d25 | success | rc=0 >>
Variable_name Value
wsrep_cluster_conf_id 17
wsrep_cluster_size 3
wsrep_cluster_state_uuid 338b06b0-2948-11e4-9d06-bef42f6c52f1
wsrep_cluster_status Primary
node4_galera_container-76275635 | success | rc=0 >>
Variable_name Value
wsrep_cluster_conf_id 17
wsrep_cluster_size 3
wsrep_cluster_state_uuid 338b06b0-2948-11e4-9d06-bef42f6c52f1
wsrep_cluster_status Primary
In this example, node 3 has failed.
#. Restart MariaDB on the failed node and verify that it rejoins the
cluster (see the example after this list).
#. If MariaDB fails to start, run the ``mysqld`` command and perform
further analysis on the output. As a last resort, rebuild the container
for the node.
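For example, using the container name from the output above, restarting
MariaDB inside the failed container might look like the following; the
``mysql`` service name is an assumption based on the MariaDB packaging
used by OpenStack-Ansible:
.. code-block:: shell-session
# lxc-attach -n node3_galera_container-3ea2cbd3
# service mysql restart
# mysql -h localhost -e 'show status like "wsrep_cluster_size";'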
Multi-node failure
~~~~~~~~~~~~~~~~~~
When all but one node fails, the remaining node cannot achieve quorum and
stops processing SQL requests. In this situation, failed nodes that
recover cannot join the cluster because it no longer exists.
#. Run the following Ansible command to show the failed nodes:
.. code-block:: shell-session
# ansible galera_container -m shell -a "mysql \
-h localhost -e 'show status like \"%wsrep_cluster_%\";'"
node2_galera_container-49a47d25 | FAILED | rc=1 >>
ERROR 2002 (HY000): Can't connect to local MySQL server
through socket '/var/run/mysqld/mysqld.sock' (111)
node3_galera_container-3ea2cbd3 | FAILED | rc=1 >>
ERROR 2002 (HY000): Can't connect to local MySQL server
through socket '/var/run/mysqld/mysqld.sock' (111)
node4_galera_container-76275635 | success | rc=0 >>
Variable_name Value
wsrep_cluster_conf_id 18446744073709551615
wsrep_cluster_size 1
wsrep_cluster_state_uuid 338b06b0-2948-11e4-9d06-bef42f6c52f1
wsrep_cluster_status non-Primary
In this example, nodes 2 and 3 have failed. The remaining operational
server indicates ``non-Primary`` because it cannot achieve quorum.
#. Run the following command to
`rebootstrap <http://galeracluster.com/documentation-webpages/quorumreset.html#id1>`_
the operational node into the cluster:
.. code-block:: shell-session
# mysql -e "SET GLOBAL wsrep_provider_options='pc.bootstrap=yes';"
node4_galera_container-76275635 | success | rc=0 >>
Variable_name Value
wsrep_cluster_conf_id 15
wsrep_cluster_size 1
wsrep_cluster_state_uuid 338b06b0-2948-11e4-9d06-bef42f6c52f1
wsrep_cluster_status Primary
node3_galera_container-3ea2cbd3 | FAILED | rc=1 >>
ERROR 2002 (HY000): Can't connect to local MySQL server
through socket '/var/run/mysqld/mysqld.sock' (111)
node2_galera_container-49a47d25 | FAILED | rc=1 >>
ERROR 2002 (HY000): Can't connect to local MySQL server
through socket '/var/run/mysqld/mysqld.sock' (111)
The remaining operational node becomes the primary node and begins
processing SQL requests.
#. Restart MariaDB on the failed nodes and verify that they rejoin the
cluster:
.. code-block:: shell-session
# ansible galera_container -m shell -a "mysql \
-h localhost -e 'show status like \"%wsrep_cluster_%\";'"
node3_galera_container-3ea2cbd3 | success | rc=0 >>
Variable_name Value
wsrep_cluster_conf_id 17
wsrep_cluster_size 3
wsrep_cluster_state_uuid 338b06b0-2948-11e4-9d06-bef42f6c52f1
wsrep_cluster_status Primary
node2_galera_container-49a47d25 | success | rc=0 >>
Variable_name Value
wsrep_cluster_conf_id 17
wsrep_cluster_size 3
wsrep_cluster_state_uuid 338b06b0-2948-11e4-9d06-bef42f6c52f1
wsrep_cluster_status Primary
node4_galera_container-76275635 | success | rc=0 >>
Variable_name Value
wsrep_cluster_conf_id 17
wsrep_cluster_size 3
wsrep_cluster_state_uuid 338b06b0-2948-11e4-9d06-bef42f6c52f1
wsrep_cluster_status Primary
#. If MariaDB fails to start on any of the failed nodes, run the
``mysqld`` command and perform further analysis on the output. As a
last resort, rebuild the container for the node.
Complete failure
~~~~~~~~~~~~~~~~
If all of the nodes in a Galera cluster fail (do not shut down gracefully),
restore from backup. Run the following command to determine whether all
nodes in the cluster have failed:
.. code-block:: shell-session
# ansible galera_container -m shell -a "cat /var/lib/mysql/grastate.dat"
node3_galera_container-3ea2cbd3 | success | rc=0 >>
# GALERA saved state
version: 2.1
uuid: 338b06b0-2948-11e4-9d06-bef42f6c52f1
seqno: -1
cert_index:
node2_galera_container-49a47d25 | success | rc=0 >>
# GALERA saved state
version: 2.1
uuid: 338b06b0-2948-11e4-9d06-bef42f6c52f1
seqno: -1
cert_index:
node4_galera_container-76275635 | success | rc=0 >>
# GALERA saved state
version: 2.1
uuid: 338b06b0-2948-11e4-9d06-bef42f6c52f1
seqno: -1
cert_index:
All the nodes have failed if ``mysqld`` is not running on any of the
nodes and all of the nodes contain a ``seqno`` value of -1.
If any single node has a positive ``seqno`` value, that node can be
used to restart the cluster. However, because there is no guarantee that
each node has an identical copy of the data, we do not recommend
restarting the cluster with the ``--wsrep-new-cluster`` option on only
one node.
Rebuilding a container
~~~~~~~~~~~~~~~~~~~~~~
Recovering from certain failures requires rebuilding one or more containers.
#. Disable the failed node on the load balancer.
.. note::
Do not rely on the load balancer health checks to disable the node.
If the node is not disabled, the load balancer sends SQL requests
to it before it rejoins the cluster, which can cause data inconsistencies.
#. Destroy the container and remove MariaDB data stored outside
of the container:
.. code-block:: shell-session
# lxc-stop -n node3_galera_container-3ea2cbd3
# lxc-destroy -n node3_galera_container-3ea2cbd3
# rm -rf /openstack/node3_galera_container-3ea2cbd3/*
In this example, node 3 failed.
#. Run the host setup playbook to rebuild the container on node 3:
.. code-block:: shell-session
# openstack-ansible setup-hosts.yml \
-l node3,node3_galera_container-3ea2cbd3
The playbook restarts all other containers on the node.
#. Run the infrastructure playbook to configure the container
specifically on node 3:
.. code-block:: shell-session
# openstack-ansible setup-infrastructure.yml \
-l node3_galera_container-3ea2cbd3
.. warning::
The new container runs a single-node Galera cluster, which is a dangerous
state because the environment contains more than one active database
with potentially different data.
.. code-block:: shell-session
# ansible galera_container -m shell -a "mysql \
-h localhost -e 'show status like \"%wsrep_cluster_%\";'"
node3_galera_container-3ea2cbd3 | success | rc=0 >>
Variable_name Value
wsrep_cluster_conf_id 1
wsrep_cluster_size 1
wsrep_cluster_state_uuid da078d01-29e5-11e4-a051-03d896dbdb2d
wsrep_cluster_status Primary
node2_galera_container-49a47d25 | success | rc=0 >>
Variable_name Value
wsrep_cluster_conf_id 4
wsrep_cluster_size 2
wsrep_cluster_state_uuid 338b06b0-2948-11e4-9d06-bef42f6c52f1
wsrep_cluster_status Primary
node4_galera_container-76275635 | success | rc=0 >>
Variable_name Value
wsrep_cluster_conf_id 4
wsrep_cluster_size 2
wsrep_cluster_state_uuid 338b06b0-2948-11e4-9d06-bef42f6c52f1
wsrep_cluster_status Primary
#. Restart MariaDB in the new container and verify that it rejoins the
cluster.
.. note::
In larger deployments, it may take some time for the MariaDB daemon to
start in the new container. It will be synchronizing data from the other
MariaDB servers during this time. You can monitor the status during this
process by tailing the ``/var/log/mysql_logs/galera_server_error.log``
log file.
Lines starting with ``WSREP_SST`` will appear during the sync process
and you should see a line with ``WSREP: SST complete, seqno: <NUMBER>``
if the sync was successful.
.. code-block:: shell-session
# ansible galera_container -m shell -a "mysql \
-h localhost -e 'show status like \"%wsrep_cluster_%\";'"
node2_galera_container-49a47d25 | success | rc=0 >>
Variable_name Value
wsrep_cluster_conf_id 5
wsrep_cluster_size 3
wsrep_cluster_state_uuid 338b06b0-2948-11e4-9d06-bef42f6c52f1
wsrep_cluster_status Primary
node3_galera_container-3ea2cbd3 | success | rc=0 >>
Variable_name Value
wsrep_cluster_conf_id 5
wsrep_cluster_size 3
wsrep_cluster_state_uuid 338b06b0-2948-11e4-9d06-bef42f6c52f1
wsrep_cluster_status Primary
node4_galera_container-76275635 | success | rc=0 >>
Variable_name Value
wsrep_cluster_conf_id 5
wsrep_cluster_size 3
wsrep_cluster_state_uuid 338b06b0-2948-11e4-9d06-bef42f6c52f1
wsrep_cluster_status Primary
#. Enable the failed node on the load balancer.
--------------
.. include:: navigation.txt

View File

@ -0,0 +1,38 @@
`Home <index.html>`_ OpenStack-Ansible Installation Guide
==============
Removing nodes
==============
In the following example, all but one node was shut down gracefully:
.. code-block:: shell-session
# ansible galera_container -m shell -a "mysql -h localhost \
-e 'show status like \"%wsrep_cluster_%\";'"
node3_galera_container-3ea2cbd3 | FAILED | rc=1 >>
ERROR 2002 (HY000): Can't connect to local MySQL server
through socket '/var/run/mysqld/mysqld.sock' (2)
node2_galera_container-49a47d25 | FAILED | rc=1 >>
ERROR 2002 (HY000): Can't connect to local MySQL server
through socket '/var/run/mysqld/mysqld.sock' (2)
node4_galera_container-76275635 | success | rc=0 >>
Variable_name Value
wsrep_cluster_conf_id 7
wsrep_cluster_size 1
wsrep_cluster_state_uuid 338b06b0-2948-11e4-9d06-bef42f6c52f1
wsrep_cluster_status Primary
Compare this example output with the output from the multi-node failure
scenario where the remaining operational node is non-primary and stops
processing SQL requests. Gracefully shutting down the MariaDB service on
all but one node allows the remaining operational node to continue
processing SQL requests. When gracefully shutting down multiple nodes,
perform the actions sequentially to retain operation.
--------------
.. include:: navigation.txt

View File

@ -0,0 +1,93 @@
`Home <index.html>`_ OpenStack-Ansible Installation Guide
==================
Starting a cluster
==================
Gracefully shutting down all nodes destroys the cluster. Starting or
restarting a cluster from zero nodes requires creating a new cluster on
one of the nodes.
#. Start a new cluster on the most advanced node.
Check the ``seqno`` value in the ``grastate.dat`` file on all of the nodes:
.. code-block:: shell-session
# ansible galera_container -m shell -a "cat /var/lib/mysql/grastate.dat"
node2_galera_container-49a47d25 | success | rc=0 >>
# GALERA saved state
version: 2.1
uuid: 338b06b0-2948-11e4-9d06-bef42f6c52f1
seqno: 31
cert_index:
node3_galera_container-3ea2cbd3 | success | rc=0 >>
# GALERA saved state
version: 2.1
uuid: 338b06b0-2948-11e4-9d06-bef42f6c52f1
seqno: 31
cert_index:
node4_galera_container-76275635 | success | rc=0 >>
# GALERA saved state
version: 2.1
uuid: 338b06b0-2948-11e4-9d06-bef42f6c52f1
seqno: 31
cert_index:
In this example, all nodes in the cluster contain the same positive
``seqno`` values as they were synchronized just prior to
graceful shutdown. If all ``seqno`` values are equal, any node can
start the new cluster.
.. code-block:: shell-session
# /etc/init.d/mysql start --wsrep-new-cluster
This command results in a cluster containing a single node. The
``wsrep_cluster_size`` value shows the number of nodes in the
cluster, as the following status check output illustrates:
.. code-block:: shell-session
node2_galera_container-49a47d25 | FAILED | rc=1 >>
ERROR 2002 (HY000): Can't connect to local MySQL server
through socket '/var/run/mysqld/mysqld.sock' (111)
node3_galera_container-3ea2cbd3 | FAILED | rc=1 >>
ERROR 2002 (HY000): Can't connect to local MySQL server
through socket '/var/run/mysqld/mysqld.sock' (2)
node4_galera_container-76275635 | success | rc=0 >>
Variable_name Value
wsrep_cluster_conf_id 1
wsrep_cluster_size 1
wsrep_cluster_state_uuid 338b06b0-2948-11e4-9d06-bef42f6c52f1
wsrep_cluster_status Primary
#. Restart MariaDB on the other nodes and verify that they rejoin the
cluster.
.. code-block:: shell-session
node2_galera_container-49a47d25 | success | rc=0 >>
Variable_name Value
wsrep_cluster_conf_id 3
wsrep_cluster_size 3
wsrep_cluster_state_uuid 338b06b0-2948-11e4-9d06-bef42f6c52f1
wsrep_cluster_status Primary
node3_galera_container-3ea2cbd3 | success | rc=0 >>
Variable_name Value
wsrep_cluster_conf_id 3
wsrep_cluster_size 3
wsrep_cluster_state_uuid 338b06b0-2948-11e4-9d06-bef42f6c52f1
wsrep_cluster_status Primary
node4_galera_container-76275635 | success | rc=0 >>
Variable_name Value
wsrep_cluster_conf_id 3
wsrep_cluster_size 3
wsrep_cluster_state_uuid 338b06b0-2948-11e4-9d06-bef42f6c52f1
wsrep_cluster_status Primary
--------------
.. include:: navigation.txt

View File

@ -0,0 +1,23 @@
`Home <index.html>`_ OpenStack-Ansible Installation Guide
==========================
Galera cluster maintenance
==========================
.. toctree::
ops-galera-remove.rst
ops-galera-start.rst
ops-galera-recovery.rst
Routine maintenance includes gracefully adding or removing nodes from
the cluster without impacting operation and also starting a cluster
after gracefully shutting down all nodes.
MySQL instances are restarted when creating a cluster, when adding a
node, when the service is not running, or when changes are made to the
``/etc/mysql/my.cnf`` configuration file.
--------------
.. include:: navigation.txt

View File

@ -0,0 +1,29 @@
`Home <index.html>`_ OpenStack-Ansible Installation Guide
===================
Centralized logging
===================
OpenStack-Ansible configures all instances to send syslog data to a
container (or group of containers) running rsyslog. The rsyslog server
containers are specified in the ``log_hosts`` section of the
``openstack_user_config.yml`` file.
The rsyslog server container(s) have logrotate installed and configured with
a 14-day retention. All rotated logs are compressed by default.
Finding logs
~~~~~~~~~~~~
Logs are accessible in multiple locations within an OpenStack-Ansible
deployment:
* The rsyslog server container collects logs in ``/var/log/log-storage`` within
directories named after the container or physical host.
* Each physical host has the logs from its service containers mounted at
``/openstack/log/``.
* Each service container has its own logs stored at ``/var/log/<service_name>``.
--------------
.. include:: navigation.txt

View File

@ -0,0 +1,130 @@
`Home <index.html>`_ OpenStack-Ansible Installation Guide
===============
Troubleshooting
===============
Host kernel upgrade from version 3.13
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Ubuntu kernel packages newer than version 3.13 contain a change in
module naming from ``nf_conntrack`` to ``br_netfilter``. After
upgrading the kernel, re-run the ``openstack-hosts-setup.yml``
playbook against those hosts. See `OSA bug 1579963`_ for more
information.
.. _OSA bug 1579963: https://bugs.launchpad.net/openstack-ansible/+bug/1579963
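To check which kernel a host is currently running, and whether the newer
module is loaded after the upgrade, you can use:
.. code-block:: shell-session
# uname -r
# lsmod | grep br_netfilter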
Container networking issues
~~~~~~~~~~~~~~~~~~~~~~~~~~~
All LXC containers on the host have two virtual Ethernet interfaces:
* `eth0` in the container connects to `lxcbr0` on the host
* `eth1` in the container connects to `br-mgmt` on the host
.. note::
Some containers, such as ``cinder``, ``glance``, ``neutron_agents``, and
``swift_proxy``, have more than two interfaces to support their
functions.
Predictable interface naming
----------------------------
On the host, all virtual Ethernet devices are named based on their
container as well as the name of the interface inside the container:
.. code-block:: shell-session
${CONTAINER_UNIQUE_ID}_${NETWORK_DEVICE_NAME}
As an example, an all-in-one (AIO) build might provide a utility
container called `aio1_utility_container-d13b7132`. That container
will have two network interfaces: `d13b7132_eth0` and `d13b7132_eth1`.
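For example, the host-side halves of those interfaces can be listed by
filtering on the container's unique ID:
.. code-block:: shell-session
# ip -o link show | grep d13b7132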
Another option would be to use the LXC tools to retrieve information
about the utility container:
.. code-block:: shell-session
# lxc-info -n aio1_utility_container-d13b7132
Name: aio1_utility_container-d13b7132
State: RUNNING
PID: 8245
IP: 10.0.3.201
IP: 172.29.237.204
CPU use: 79.18 seconds
BlkIO use: 678.26 MiB
Memory use: 613.33 MiB
KMem use: 0 bytes
Link: d13b7132_eth0
TX bytes: 743.48 KiB
RX bytes: 88.78 MiB
Total bytes: 89.51 MiB
Link: d13b7132_eth1
TX bytes: 412.42 KiB
RX bytes: 17.32 MiB
Total bytes: 17.73 MiB
The ``Link:`` lines will show the network interfaces that are attached
to the utility container.
Reviewing container networking traffic
--------------------------------------
To dump traffic on the ``br-mgmt`` bridge, use ``tcpdump`` to see all
communications between the various containers. To narrow the focus,
run ``tcpdump`` only on the desired network interface of the
containers.
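For example, a capture on the bridge that ignores SSH traffic might look
like this:
.. code-block:: shell-session
# tcpdump -i br-mgmt -nn not port 22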
Cached Ansible facts issues
~~~~~~~~~~~~~~~~~~~~~~~~~~~
At the beginning of a playbook run, information about each host is gathered.
Examples of the information gathered are:
* Linux distribution
* Kernel version
* Network interfaces
To improve performance, particularly in large deployments, you can
cache host facts and information.
OpenStack-Ansible enables fact caching by default. The facts are
cached in JSON files within ``/etc/openstack_deploy/ansible_facts``.
Fact caching can be disabled by commenting out the ``fact_caching``
parameter in ``playbooks/ansible.cfg``. Refer to the Ansible
documentation on `fact caching`_ for more details.
.. _fact caching: http://docs.ansible.com/ansible/playbooks_variables.html#fact-caching
Forcing regeneration of cached facts
------------------------------------
Cached facts may be incorrect if the host receives a kernel upgrade or new network
interfaces. Newly created bridges also disrupt cached facts.
This can lead to unexpected errors while running playbooks and
requires that the cached facts be regenerated.
Run the following command to remove all currently cached facts for all hosts:
.. code-block:: shell-session
# rm /etc/openstack_deploy/ansible_facts/*
New facts will be gathered and cached during the next playbook run.
To clear facts for a single host, find its file within
``/etc/openstack_deploy/ansible_facts/`` and remove it. Each host has
a JSON file that is named after its hostname. The facts for that host
will be regenerated on the next playbook run.
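For example, for a hypothetical host named ``infra1`` (the exact file
name follows the inventory hostname):
.. code-block:: shell-session
# rm /etc/openstack_deploy/ansible_facts/infra1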
--------------
.. include:: navigation.txt

View File

@ -0,0 +1,20 @@
`Home <index.html>`_ OpenStack-Ansible Installation Guide
=====================
Chapter 8. Operations
=====================
The following operations and troubleshooting procedures apply to
installed environments.
.. toctree::
ops-addcomputehost.rst
ops-galera.rst
ops-logging.rst
ops-troubleshooting.rst
--------------
.. include:: navigation.txt

View File

@ -0,0 +1,83 @@
`Home <index.html>`_ OpenStack-Ansible Installation Guide
===========
Host layout
===========
We recommend a layout that contains a minimum of five hosts (or servers):
- Three control plane infrastructure hosts
- One logging infrastructure host
- One compute host
If using the optional Block Storage (cinder) service, we recommend
the use of a sixth host. Block Storage hosts require an LVM volume group named
``cinder-volumes``. See `the section called "Installation
requirements" <overview-requirements.html>`_ and `the section
called "Configuring LVM" <targethosts-configlvm.html>`_ for more information.
The hosts are called target hosts because Ansible deploys the OSA
environment within these hosts. We recommend a
deployment host from which Ansible orchestrates the deployment
process. One of the target hosts can function as the deployment host.
Use at least one load balancer to manage the traffic among
the target hosts. You can use any type of load balancer such as a hardware
appliance or HAProxy. We recommend using physical load balancers for
production environments.
Infrastructure Control Plane target hosts contain the following
services:
- Infrastructure:
- Galera
- RabbitMQ
- Memcached
- Logging
- Repository
- OpenStack:
- Identity (keystone)
- Image service (glance)
- Compute management (nova)
- Networking (neutron)
- Orchestration (heat)
- Dashboard (horizon)
Infrastructure Logging target hosts contain the following services:
- Rsyslog
Compute target hosts contain the following services:
- Compute virtualization
- Logging
(Optional) Storage target hosts contain the following services:
- Block Storage scheduler
- Block Storage volumes
**Figure 1.1. Host Layout Overview**
.. image:: figures/environment-overview.png
--------------
.. include:: navigation.txt

View File

@ -0,0 +1,114 @@
`Home <index.html>`_ OpenStack-Ansible Installation Guide
=======================
About OpenStack-Ansible
=======================
OpenStack-Ansible (OSA) uses the Ansible IT automation framework to
deploy an OpenStack environment on Ubuntu Linux. OpenStack components are
installed into Linux Containers (LXC) for isolation and ease of
maintenance.
This documentation is intended for deployers of the OpenStack-Ansible
deployment system who are interested in installing an OpenStack environment.
Third-party trademarks and tradenames appearing in this document are the
property of their respective owners. Such third-party trademarks have
been printed in caps or initial caps and are used for referential
purposes only. We do not intend our use or display of other companies'
tradenames, trademarks, or service marks to imply a relationship with,
or endorsement or sponsorship of us by, these other companies.
Ansible
~~~~~~~
OpenStack-Ansible Deployment uses a combination of Ansible and
Linux Containers (LXC) to install and manage OpenStack. Ansible
provides an automation platform to simplify system and application
deployment. Ansible manages systems using Secure Shell (SSH)
instead of unique protocols that require remote daemons or agents.
Ansible uses playbooks written in the YAML language for orchestration.
For more information, see `Ansible - Intro to
Playbooks <http://docs.ansible.com/playbooks_intro.html>`_.
In this guide, we refer to the host running Ansible playbooks as
the deployment host and the hosts on which Ansible installs OSA as the
target hosts.
A recommended minimal layout for deployments involves five target
hosts in total: three infrastructure hosts, one compute host, and one
logging host. All hosts will need at least one networking interface, but
we recommend multiple bonded interfaces. More information on setting up
target hosts can be found in `the section called "Host layout"`_.
For more information on physical, logical, and virtual network
interfaces within hosts see `the section called "Host
networking"`_.
.. _the section called "Host layout": overview-hostlayout.html
.. _the section called "Host networking": overview-hostnetworking.html
Linux Containers (LXC)
~~~~~~~~~~~~~~~~~~~~~~
Containers provide operating-system level virtualization by enhancing
the concept of ``chroot`` environments, which isolate resources and file
systems for a particular group of processes without the overhead and
complexity of virtual machines. They access the same kernel, devices,
and file systems on the underlying host and provide a thin operational
layer built around a set of rules.
The Linux Containers (LXC) project implements operating system level
virtualization on Linux using kernel namespaces and includes the
following features:
- Resource isolation including CPU, memory, block I/O, and network
using ``cgroups``.
- Selective connectivity to physical and virtual network devices on the
underlying physical host.
- Support for a variety of backing stores including LVM.
- Built on a foundation of stable Linux technologies with an active
development and support community.
Useful commands:
- List containers and summary information such as operational state and
network configuration:
.. code-block:: shell-session
# lxc-ls --fancy
- Show container details including operational state, resource
utilization, and ``veth`` pairs:
.. code-block:: shell-session
# lxc-info --name container_name
- Start a container:
.. code-block:: shell-session
# lxc-start --name container_name
- Attach to a container:
.. code-block:: shell-session
# lxc-attach --name container_name
- Stop a container:
.. code-block:: shell-session
# lxc-stop --name container_name
--------------
.. include:: navigation.txt

View File

@ -0,0 +1,130 @@
`Home <index.html>`_ OpenStack-Ansible Installation Guide
=========================
Installation requirements
=========================
.. note::
These are the minimum requirements for OpenStack-Ansible. Larger
deployments require additional resources.
CPU requirements
~~~~~~~~~~~~~~~~
Compute hosts have multi-core processors that have `hardware-assisted
virtualization extensions`_ available. These extensions provide a significant
performance boost and improve security in virtualized environments.
Infrastructure hosts have multi-core processors for best
performance. Some services, such as MySQL, greatly benefit from additional CPU
cores and other technologies, such as `Hyper-threading`_.
.. _hardware-assisted virtualization extensions: https://en.wikipedia.org/wiki/Hardware-assisted_virtualization
.. _Hyper-threading: https://en.wikipedia.org/wiki/Hyper-threading
Disk requirements
~~~~~~~~~~~~~~~~~
Different hosts have different disk space requirements based on the
services running on each host:
Deployment hosts
10GB of disk space is sufficient for holding the OpenStack-Ansible
repository content and additional required software.
Compute hosts
Disk space requirements vary depending on the total number of instances
running on each host and the amount of disk space allocated to each instance.
Compute hosts have at least 100GB of disk space available at an
absolute minimum. Consider disks that provide higher
throughput with lower latency, such as SSD drives in a RAID array.
Storage hosts
Hosts running the Block Storage (cinder) service often consume the most disk
space in OpenStack environments. As with compute hosts,
choose disks that provide the highest I/O throughput with the lowest latency
for storage hosts. Storage hosts contain 1TB of disk space at a
minimum.
Infrastructure hosts
The OpenStack control plane contains storage-intensive services, such as
the Image (glance) service as well as MariaDB. These control plane hosts
have 100GB of disk space available at a minimum.
Logging hosts
An OpenStack-Ansible deployment generates a significant amount of logging.
Logs come from a variety of sources, including services running in
containers, the containers themselves, and the physical hosts. Logging hosts
need additional disk space to hold live and rotated (historical) log files.
In addition, the storage performance must be enough to keep pace with the
log traffic coming from various hosts and containers within the OpenStack
environment. Reserve a minimum of 50GB of disk space for storing
logs on the logging hosts.
Hosts that provide Block Storage (cinder) volumes must have logical volume
manager (LVM) support. Ensure those hosts have a ``cinder-volumes`` volume group
that OpenStack-Ansible can configure for use with cinder.
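As an illustration only, a ``cinder-volumes`` volume group could be created
from a spare disk such as ``/dev/sdb``; the device name is an assumption for
this example and depends on your hardware:
.. code-block:: shell-session
# pvcreate /dev/sdb
# vgcreate cinder-volumes /dev/sdb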
Each control plane host runs services inside LXC containers. The container
filesystems are deployed by default onto the root filesystem of each control
plane hosts. You have the option to deploy those container filesystems
into logical volumes by creating a volume group called ``lxc``. OpenStack-Ansible
creates a 5GB logical volume for the filesystem of each container running
on the host.
Network requirements
~~~~~~~~~~~~~~~~~~~~
.. note::
You can deploy an OpenStack environment with only one physical
network interface. This works for small environments, but it can cause
problems when your environment grows.
For the best performance, reliability and scalability, deployers should
consider a network configuration that contains the following features:
* Bonded network interfaces: Increases performance and/or reliability
(dependent on bonding architecture).
* VLAN offloading: Increases performance by adding and removing VLAN tags in
hardware, rather than in the server's main CPU.
* Gigabit or 10 Gigabit Ethernet: Supports higher network speeds, which can
also improve storage performance when using the Block Storage (cinder)
service.
* Jumbo frames: Increases network performance by allowing more data to be sent
in each packet.
Software requirements
~~~~~~~~~~~~~~~~~~~~~
Ensure all hosts within an OpenStack-Ansible environment meet the following
minimum requirements:
* Ubuntu 14.04 LTS (Trusty Tahr)
* OSA is tested regularly against the latest Ubuntu 14.04 LTS point
releases
* Linux kernel version ``3.13.0-34-generic`` or later
* For swift storage hosts, you must enable the ``trusty-backports``
repositories in ``/etc/apt/sources.list`` or ``/etc/apt/sources.list.d/``
(a minimal example follows this list). See the `Ubuntu documentation
<https://help.ubuntu.com/community/UbuntuBackports#Enabling_Backports_Manually>`_ for more detailed instructions.
* Secure Shell (SSH) client and server that supports public key
authentication
* Network Time Protocol (NTP) client for time synchronization (such as
``ntpd`` or ``chronyd``)
* Python 2.7 or later
* en_US.UTF-8 as locale
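A minimal way to enable the backports pocket mentioned above, assuming the
default Ubuntu archive mirror, is:
.. code-block:: shell-session
# echo "deb http://archive.ubuntu.com/ubuntu trusty-backports main universe" \
  > /etc/apt/sources.list.d/trusty-backports.list
# apt-get update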
--------------
.. include:: navigation.txt

View File

@ -0,0 +1,126 @@
`Home <index.html>`__ OpenStack-Ansible Installation Guide
========
Security
========
The OpenStack-Ansible project provides several security features for
OpenStack deployments. This section of documentation covers those
features and how they can benefit deployers of various sizes.
Security requirements always differ between deployers. If you require
additional security measures, refer to the official
`OpenStack Security Guide`_ for additional resources.
AppArmor
~~~~~~~~
The Linux kernel offers multiple `security modules`_ (LSMs) that set
`mandatory access controls`_ (MAC) on Linux systems. The OpenStack-Ansible
project configures `AppArmor`_. AppArmor is a Linux security module that
provides additional security on LXC container hosts. AppArmor allows
administrators to set specific limits and policies around what resources a
particular application can access. Any activity outside the allowed policies
is denied at the kernel level.
AppArmor profiles that are applied in OpenStack-Ansible limit the actions
that each LXC container may take on a system. This is done within the
`lxc_hosts role`_.
.. _security modules: https://en.wikipedia.org/wiki/Linux_Security_Modules
.. _mandatory access controls: https://en.wikipedia.org/wiki/Mandatory_access_control
.. _AppArmor: https://en.wikipedia.org/wiki/AppArmor
.. _lxc_hosts role: https://github.com/openstack/openstack-ansible/blob/master/playbooks/roles/lxc_hosts/templates/lxc-openstack.apparmor.j2
Encrypted communication
~~~~~~~~~~~~~~~~~~~~~~~
While in transit, data is encrypted between some OpenStack services in
OpenStack-Ansible deployments. Not all communication between all services is
encrypted. For more details on what traffic is encrypted, and how
to configure SSL certificates, refer to the documentation section titled
`Securing services with SSL certificates`_.
.. _Securing services with SSL certificates: configure-sslcertificates.html
Host security hardening
~~~~~~~~~~~~~~~~~~~~~~~
Deployers can apply security hardening to OpenStack infrastructure and compute
hosts using the ``openstack-ansible-security`` role. The purpose of the role is to
apply as many security configurations as possible without disrupting the
operation of an OpenStack deployment.
Refer to the documentation on :ref:`security_hardening` for more information
on the role and how to enable it in OpenStack-Ansible.
Least privilege
~~~~~~~~~~~~~~~
The `principle of least privilege`_ is used throughout OpenStack-Ansible to
limit the damage that could be caused if an attacker gained access to a set of
credentials.
OpenStack-Ansible configures unique username and password combinations for
each service that talks to RabbitMQ and Galera/MariaDB. Each service that
connects to RabbitMQ uses a separate virtual host for publishing and consuming
messages. The MariaDB users for each service are only granted access to the
database(s) that they need to query.
.. _principle of least privilege: https://en.wikipedia.org/wiki/Principle_of_least_privilege
.. _least-access-openstack-services:
Securing network access to OpenStack services
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
OpenStack environments expose many service ports and API endpoints to the
network.
.. note::
Deployers must limit access to these resources and expose them only
to trusted users and networks.
The resources within an OpenStack environment can be divided into two groups:
1. Services that users must access directly to consume the OpenStack cloud.
* Aodh
* Cinder
* Ceilometer
* Glance
* Heat
* Horizon
* Keystone *(excluding the admin API endpoint)*
* Neutron
* Nova
* Swift
2. Services that are only utilized internally by the OpenStack cloud.
* Keystone (admin API endpoint)
* MariaDB
* RabbitMQ
To manage instances, you are able to access certain public API endpoints, such as
the Nova or Neutron API. Configure firewalls to limit network access to
these services.
Other services, such as MariaDB and RabbitMQ, must be segmented away from
direct user access. You must configure a firewall to only allow
connectivity to these services within the OpenStack environment itself. This
reduces an attacker's ability to query or manipulate data in OpenStack's
critical database and queuing services, especially if one of these services has
a known vulnerability.
For more details on recommended network policies for OpenStack clouds, refer to
the `API endpoint process isolation and policy`_ section from the `OpenStack
Security Guide`_.
.. _API endpoint process isolation and policy: http://docs.openstack.org/security-guide/api-endpoints/api-endpoint-configuration-recommendations.html#network-policy
.. _OpenStack Security Guide: http://docs.openstack.org/security-guide
--------------
.. include:: navigation.txt

View File

@ -0,0 +1,100 @@
`Home <index.html>`_ OpenStack-Ansible Installation Guide
=====================
Installation workflow
=====================
This diagram shows the general workflow associated with an
OpenStack-Ansible (OSA) installation.
**Figure 1.7. Installation workflow**
.. image:: figures/workflow-overview.png
#. :doc:`Prepare deployment hosts <deploymenthost>`
#. :doc:`Prepare target hosts <targethosts>`
#. :doc:`Configure deployment <configure>`
#. :doc:`Run foundation playbooks <install-foundation>`
#. :doc:`Run infrastructure playbooks <install-infrastructure>`
#. :doc:`Run OpenStack playbooks <install-openstack>`
Network ranges
~~~~~~~~~~~~~~
For consistency, the following IP addresses and hostnames are
referred to in this installation workflow.
+-----------------------+-----------------+
| Network | IP Range |
+=======================+=================+
| Management Network | 172.29.236.0/22 |
+-----------------------+-----------------+
| Tunnel (VXLAN) Network| 172.29.240.0/22 |
+-----------------------+-----------------+
| Storage Network | 172.29.244.0/22 |
+-----------------------+-----------------+
IP assignments
~~~~~~~~~~~~~~
+------------------+----------------+-------------------+----------------+
| Host name | Management IP | Tunnel (VxLAN) IP | Storage IP |
+==================+================+===================+================+
| infra1 | 172.29.236.101 | 172.29.240.101 | 172.29.244.101 |
+------------------+----------------+-------------------+----------------+
| infra2 | 172.29.236.102 | 172.29.240.102 | 172.29.244.102 |
+------------------+----------------+-------------------+----------------+
| infra3 | 172.29.236.103 | 172.29.240.103 | 172.29.244.103 |
+------------------+----------------+-------------------+----------------+
| | | | |
+------------------+----------------+-------------------+----------------+
| net1 | 172.29.236.111 | 172.29.240.111 | |
+------------------+----------------+-------------------+----------------+
| net2 | 172.29.236.112 | 172.29.240.112 | |
+------------------+----------------+-------------------+----------------+
| net3 | 172.29.236.113 | 172.29.240.113 | |
+------------------+----------------+-------------------+----------------+
| | | | |
+------------------+----------------+-------------------+----------------+
| compute1 | 172.29.236.121 | 172.29.240.121 | 172.29.244.121 |
+------------------+----------------+-------------------+----------------+
| compute2 | 172.29.236.122 | 172.29.240.122 | 172.29.244.122 |
+------------------+----------------+-------------------+----------------+
| compute3 | 172.29.236.123 | 172.29.240.123 | 172.29.244.123 |
+------------------+----------------+-------------------+----------------+
| | | | |
+------------------+----------------+-------------------+----------------+
| lvm-storage1 | 172.29.236.131 | | 172.29.244.131 |
+------------------+----------------+-------------------+----------------+
| | | | |
+------------------+----------------+-------------------+----------------+
| nfs-storage1 | 172.29.236.141 | | 172.29.244.141 |
+------------------+----------------+-------------------+----------------+
| | | | |
+------------------+----------------+-------------------+----------------+
| ceph-mon1 | 172.29.236.151 | | 172.29.244.151 |
+------------------+----------------+-------------------+----------------+
| ceph-mon2 | 172.29.236.152 | | 172.29.244.152 |
+------------------+----------------+-------------------+----------------+
| ceph-mon3 | 172.29.236.153 | | 172.29.244.153 |
+------------------+----------------+-------------------+----------------+
| | | | |
+------------------+----------------+-------------------+----------------+
| swift1 | 172.29.236.161 | | 172.29.244.161 |
+------------------+----------------+-------------------+----------------+
| swift2 | 172.29.236.162 | | 172.29.244.162 |
+------------------+----------------+-------------------+----------------+
| swift3 | 172.29.236.163 | | 172.29.244.163 |
+------------------+----------------+-------------------+----------------+
| | | | |
+------------------+----------------+-------------------+----------------+
| log1 | 172.29.236.171 | | |
+------------------+----------------+-------------------+----------------+
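These ranges also typically appear in the ``cidr_networks`` section of
``/etc/openstack_deploy/openstack_user_config.yml``. The following fragment is a
minimal sketch modeled on the example file; the key names may differ in your
deployment:

.. code-block:: yaml

    cidr_networks:
      container: 172.29.236.0/22
      tunnel: 172.29.240.0/22
      storage: 172.29.244.0/22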
--------------
.. include:: navigation.txt

View File

@ -0,0 +1,17 @@
`Home <index.html>`_ OpenStack-Ansible Installation Guide
===================
Chapter 1. Overview
===================
.. toctree::
overview-osa.rst
overview-hostlayout.rst
overview-requirements.rst
overview-workflow.rst
overview-security.rst
--------------
.. include:: navigation.txt

View File

@ -0,0 +1,173 @@
`Home <index.html>`_ OpenStack-Ansible Installation Guide
=====================
Designing the network
=====================
This section describes the recommended network architecture.
Some components, such as the bridges described below, are mandatory. Others,
such as bonded network interfaces, are recommended but not required.
.. important::
Follow the reference design as closely as possible for production deployments.
Although Ansible automates most deployment operations, networking on
target hosts requires manual configuration as it varies
dramatically per environment. For demonstration purposes, these
instructions use a reference architecture with example network interface
names, networks, and IP addresses. Modify these values as needed for your
particular environment.
Bonded network interfaces
~~~~~~~~~~~~~~~~~~~~~~~~~
The reference architecture includes bonded network interfaces, which
use multiple physical network interfaces for better redundancy and throughput.
Avoid using two ports on the same multi-port network card for the same bonded
interface since a network card failure affects both physical network
interfaces used by the bond.
The ``bond0`` interface carries traffic from the containers
running your OpenStack infrastructure. Configure a static IP address on the
``bond0`` interface from your management network.
The ``bond1`` interface carries traffic from your virtual machines.
Do not configure a static IP on this interface, since neutron uses this
bond to handle VLAN and VXLAN networks for virtual machines.
Additional bridge networks are required for OpenStack-Ansible. These bridges
connect the two bonded network interfaces.
Adding bridges
~~~~~~~~~~~~~~
The combination of containers and flexible deployment options requires the
implementation of advanced Linux networking features, such as bridges and
namespaces.
Bridges provide layer 2 connectivity (similar to switches) among
physical, logical, and virtual network interfaces within a host. After
creating a bridge, the network interfaces are virtually plugged in to
it.
OpenStack-Ansible uses bridges to connect physical and logical network
interfaces on the host to virtual network interfaces within containers.
Namespaces provide logically separate layer 3 environments (similar to
routers) within a host. Namespaces use virtual interfaces to connect
with other namespaces, including the host namespace. These interfaces,
often called ``veth`` pairs, are virtually plugged in between
namespaces similar to patch cables connecting physical devices such as
switches and routers.
Each container has a namespace that connects to the host namespace with
one or more ``veth`` pairs. Unless specified, the system generates
random names for ``veth`` pairs.
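To inspect this wiring on a target host, you can list the ``veth`` interfaces
and the bridge they are plugged into. This is a quick check only; the bridge
name below is one used in this guide, and the output varies per environment:

.. code-block:: shell-session

    # brctl show br-mgmt
    # ip -d link show type veth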
The following image demonstrates how the container network interfaces are
connected to the host's bridges and to the host's physical network interfaces:
.. image:: figures/networkcomponents.png
Target hosts can contain the following network bridges:
- LXC internal ``lxcbr0``:
- This bridge is **required**, but LXC configures it automatically.
- Provides external (typically internet) connectivity to containers.
- This bridge does not directly attach to any physical or logical
interfaces on the host because iptables handles connectivity. It
attaches to ``eth0`` in each container, but the container network
interface is configurable in ``openstack_user_config.yml`` in the
``provider_networks`` dictionary.
- Container management ``br-mgmt``:
- This bridge is **required**.
- Provides management of and communication among infrastructure and
OpenStack services.
- Is created manually and attached to a physical or logical interface,
typically a ``bond0`` VLAN subinterface. It also attaches to ``eth1``
in each container. The container network interface
is configurable in ``openstack_user_config.yml`` (see the example after this list).
- Storage ``br-storage``:
- This bridge is *optional*, but recommended.
- Provides segregated access to Block Storage devices between
Compute and Block Storage hosts.
- Is created manually and attached to a physical or logical interface,
typically a ``bond0`` VLAN subinterface. It also attaches to ``eth2``
in each associated container. The container network
interface is configurable in ``openstack_user_config.yml``.
- OpenStack Networking tunnel ``br-vxlan``:
- This bridge is **required**.
- Provides infrastructure for VXLAN tunnel networks.
- Is created manually and attached to a physical or logical interface,
typically a ``bond1`` VLAN subinterface. It also attaches to
``eth10`` in each associated container. The
container network interface is configurable in
``openstack_user_config.yml``.
- OpenStack Networking provider ``br-vlan``:
- This bridge is **required**.
- Provides infrastructure for VLAN networks.
- Is created manually and attached to a physical or logical interface,
typically ``bond1``. It attaches to ``eth11`` for VLAN-type networks
in each associated container. It does not have an IP address because
it only handles layer 2 connectivity. The
container network interface is configurable in
``openstack_user_config.yml``.
- This interface supports flat networks with additional
bridge configuration. More details are available here:
:ref:`network_configuration`.
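The bridge-to-container-interface mappings listed above are defined in the
``provider_networks`` dictionary of ``openstack_user_config.yml``. The following
fragment is a minimal sketch for the ``br-mgmt`` case only, modeled on the
example file shipped with OpenStack-Ansible; treat the exact keys and values as
a starting point to adapt for your environment:

.. code-block:: yaml

    global_overrides:
      provider_networks:
        - network:
            group_binds:
              - all_containers
              - hosts
            type: "raw"
            container_bridge: "br-mgmt"
            container_interface: "eth1"
            container_type: "veth"
            ip_from_q: "container"
            is_container_address: true
            is_ssh_address: true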
Network diagrams
~~~~~~~~~~~~~~~~
The following image shows how all of the interfaces and bridges interconnect
to provide network connectivity to the OpenStack deployment:
.. image:: figures/networkarch-container-external.png
OpenStack-Ansible deploys the compute service on the physical host rather than
in a container. The following image shows how to use bridges for
network connectivity:
.. image:: figures/networkarch-bare-external.png
The following image shows how the neutron agents work with the bridges
``br-vlan`` and ``br-vxlan``. OpenStack Networking (neutron) is
configured to use a DHCP agent, L3 agent, and Linux Bridge agent within a
``networking-agents`` container. The image shows how DHCP agents provide
information (IP addresses and DNS servers) to the instances, and how
routing works in this configuration:
.. image:: figures/networking-neutronagents.png
The following image shows how virtual machines connect to the ``br-vlan`` and
``br-vxlan`` bridges and send traffic to the network outside the host:
.. image:: figures/networking-compute.png
--------------
.. include:: navigation.txt

View File

@ -0,0 +1,21 @@
`Home <index.html>`_ OpenStack-Ansible Installation Guide
=================
Configuring hosts
=================
With the information available on the design guide page, you can now
make the decisions needed to build your own OpenStack deployment. Two
examples are given here: a reference architecture (recommended) and a
single-host architecture (simple).
.. toctree::
targethosts-networkrefarch.rst
targethosts-networkexample.rst
--------------
.. include:: navigation.txt

View File

@ -0,0 +1,184 @@
`Home <index.html>`_ OpenStack-Ansible Installation Guide
=========================================
Simple architecture: A single target host
=========================================
Overview
~~~~~~~~
This example uses the following parameters to configure networking on a
single target host. See `Figure 3.2, "Target host for infrastructure,
networking, compute, and storage
services" <targethosts-networkexample.html#fig_hosts-target-network-containerexample>`_
for a visual representation of these parameters in the architecture.
- VLANs:
- Host management: Untagged/Native
- Container management: 10
- Tunnels: 30
- Storage: 20
- Networks:
- Host management: 10.240.0.0/22
- Container management: 172.29.236.0/22
- Tunnel: 172.29.240.0/22
- Storage: 172.29.244.0/22
- Addresses:
- Host management: 10.240.0.11
- Host management gateway: 10.240.0.1
- DNS servers: 69.20.0.164 69.20.0.196
- Container management: 172.29.236.11
- Tunnel: 172.29.240.11
- Storage: 172.29.244.11
**Figure 3.2. Target host for infrastructure, networking, compute, and
storage services**
.. image:: figures/networkarch-container-external-example.png
Modifying the network interfaces file
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
After establishing the initial host management network connectivity using
the ``bond0`` interface, modify the ``/etc/network/interfaces`` file as
described in this procedure.
Contents of the ``/etc/network/interfaces`` file:
#. Physical interfaces:
.. code-block:: yaml
# Physical interface 1
auto eth0
iface eth0 inet manual
bond-master bond0
bond-primary eth0
# Physical interface 2
auto eth1
iface eth1 inet manual
bond-master bond1
bond-primary eth1
# Physical interface 3
auto eth2
iface eth2 inet manual
bond-master bond0
# Physical interface 4
auto eth3
iface eth3 inet manual
bond-master bond1
#. Bonding interfaces:
.. code-block:: yaml
# Bond interface 0 (physical interfaces 1 and 3)
auto bond0
iface bond0 inet static
bond-slaves eth0 eth2
bond-mode active-backup
bond-miimon 100
bond-downdelay 200
bond-updelay 200
address 10.240.0.11
netmask 255.255.252.0
gateway 10.240.0.1
dns-nameservers 69.20.0.164 69.20.0.196
# Bond interface 1 (physical interfaces 2 and 4)
auto bond1
iface bond1 inet manual
bond-slaves eth1 eth3
bond-mode active-backup
bond-miimon 100
bond-downdelay 250
bond-updelay 250
#. Logical (VLAN) interfaces:
.. code-block:: yaml
# Container management VLAN interface
iface bond0.10 inet manual
vlan-raw-device bond0
# OpenStack Networking VXLAN (tunnel/overlay) VLAN interface
iface bond1.30 inet manual
vlan-raw-device bond1
# Storage network VLAN interface (optional)
iface bond0.20 inet manual
vlan-raw-device bond0
#. Bridge devices:
.. code-block:: yaml
# Container management bridge
auto br-mgmt
iface br-mgmt inet static
bridge_stp off
bridge_waitport 0
bridge_fd 0
# Bridge port references tagged interface
bridge_ports bond0.10
address 172.29.236.11
netmask 255.255.252.0
dns-nameservers 69.20.0.164 69.20.0.196
# OpenStack Networking VXLAN (tunnel/overlay) bridge
auto br-vxlan
iface br-vxlan inet static
bridge_stp off
bridge_waitport 0
bridge_fd 0
# Bridge port references tagged interface
bridge_ports bond1.30
address 172.29.240.11
netmask 255.255.252.0
# OpenStack Networking VLAN bridge
auto br-vlan
iface br-vlan inet manual
bridge_stp off
bridge_waitport 0
bridge_fd 0
# Bridge port references untagged interface
bridge_ports bond1
# Storage bridge
auto br-storage
iface br-storage inet static
bridge_stp off
bridge_waitport 0
bridge_fd 0
# Bridge port reference tagged interface
bridge_ports bond0.20
address 172.29.244.11
netmask 255.255.252.0
--------------
.. include:: navigation.txt

View File

@ -0,0 +1,205 @@
`Home <index.html>`_ OpenStack-Ansible Installation Guide
======================
Reference architecture
======================
Overview
~~~~~~~~
This example allows you to use your own parameters for the deployment.
The following table lists the bridges that are configured on hosts, assuming you
followed the previously proposed design.
+-------------+-----------------------+-------------------------------------+
| Bridge name | Best configured on | With a static IP |
+=============+=======================+=====================================+
| br-mgmt | On every node | Always |
+-------------+-----------------------+-------------------------------------+
| | On every storage node | When component is deployed on metal |
+ br-storage +-----------------------+-------------------------------------+
| | On every compute node | Always |
+-------------+-----------------------+-------------------------------------+
| | On every network node | When component is deployed on metal |
+ br-vxlan +-----------------------+-------------------------------------+
| | On every compute node | Always |
+-------------+-----------------------+-------------------------------------+
| | On every network node | Never |
+ br-vlan +-----------------------+-------------------------------------+
| | On every compute node | Never |
+-------------+-----------------------+-------------------------------------+
Modifying the network interfaces file
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
After establishing initial host management network connectivity using
the ``bond0`` interface, modify the ``/etc/network/interfaces`` file as
described in the following procedure.
**Procedure 4.1. Modifying the network interfaces file**
#. Physical interfaces:
.. code-block:: yaml
# Physical interface 1
auto eth0
iface eth0 inet manual
bond-master bond0
bond-primary eth0
# Physical interface 2
auto eth1
iface eth1 inet manual
bond-master bond1
bond-primary eth1
# Physical interface 3
auto eth2
iface eth2 inet manual
bond-master bond0
# Physical interface 4
auto eth3
iface eth3 inet manual
bond-master bond1
#. Bonding interfaces:
.. code-block:: yaml
# Bond interface 0 (physical interfaces 1 and 3)
auto bond0
iface bond0 inet static
bond-slaves eth0 eth2
bond-mode active-backup
bond-miimon 100
bond-downdelay 200
bond-updelay 200
address HOST_IP_ADDRESS
netmask HOST_NETMASK
gateway HOST_GATEWAY
dns-nameservers HOST_DNS_SERVERS
# Bond interface 1 (physical interfaces 2 and 4)
auto bond1
iface bond1 inet manual
bond-slaves eth1 eth3
bond-mode active-backup
bond-miimon 100
bond-downdelay 250
bond-updelay 250
If not already complete, replace ``HOST_IP_ADDRESS``,
``HOST_NETMASK``, ``HOST_GATEWAY``, and ``HOST_DNS_SERVERS``
with the appropriate configuration for the host management network.
#. Logical (VLAN) interfaces:
.. code-block:: yaml
# Container management VLAN interface
iface bond0.CONTAINER_MGMT_VLAN_ID inet manual
vlan-raw-device bond0
# OpenStack Networking VXLAN (tunnel/overlay) VLAN interface
iface bond1.TUNNEL_VLAN_ID inet manual
vlan-raw-device bond1
# Storage network VLAN interface (optional)
iface bond0.STORAGE_VLAN_ID inet manual
vlan-raw-device bond0
Replace ``*_VLAN_ID`` with the appropriate configuration for the
environment.
#. Bridge devices:
.. code-block:: yaml
# Container management bridge
auto br-mgmt
iface br-mgmt inet static
bridge_stp off
bridge_waitport 0
bridge_fd 0
# Bridge port references tagged interface
bridge_ports bond0.CONTAINER_MGMT_VLAN_ID
address CONTAINER_MGMT_BRIDGE_IP_ADDRESS
netmask CONTAINER_MGMT_BRIDGE_NETMASK
dns-nameservers CONTAINER_MGMT_BRIDGE_DNS_SERVERS
# OpenStack Networking VXLAN (tunnel/overlay) bridge
auto br-vxlan
iface br-vxlan inet static
bridge_stp off
bridge_waitport 0
bridge_fd 0
# Bridge port references tagged interface
bridge_ports bond1.TUNNEL_VLAN_ID
address TUNNEL_BRIDGE_IP_ADDRESS
netmask TUNNEL_BRIDGE_NETMASK
# OpenStack Networking VLAN bridge
auto br-vlan
iface br-vlan inet manual
bridge_stp off
bridge_waitport 0
bridge_fd 0
# Bridge port references untagged interface
bridge_ports bond1
# Storage bridge (optional)
auto br-storage
iface br-storage inet static
bridge_stp off
bridge_waitport 0
bridge_fd 0
# Bridge port reference tagged interface
bridge_ports bond0.STORAGE_VLAN_ID
address STORAGE_BRIDGE_IP_ADDRESS
netmask STORAGE_BRIDGE_NETMASK
Replace ``*_VLAN_ID``, ``*_BRIDGE_IP_ADDRESS``, ``*_BRIDGE_NETMASK``, and
``*_BRIDGE_DNS_SERVERS`` with the appropriate configuration for the
environment.
Example for 3 controller nodes and 2 compute nodes
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
- VLANs:
- Host management: Untagged/Native
- Container management: 10
- Tunnels: 30
- Storage: 20
- Networks:
- Host management: 10.240.0.0/22
- Container management: 172.29.236.0/22
- Tunnel: 172.29.240.0/22
- Storage: 172.29.244.0/22
- Addresses for the controller nodes:
- Host management: 10.240.0.11 - 10.240.0.13
- Host management gateway: 10.240.0.1
- DNS servers: 69.20.0.164 69.20.0.196
- Container management: 172.29.236.11 - 172.29.236.13
- Tunnel: no IP address (the IP addresses exist within the containers when the components are not deployed directly on metal)
- Storage: no IP address (the IP addresses exist within the containers when the components are not deployed directly on metal)
- Addresses for the compute nodes:
- Host management: 10.240.0.21 - 10.240.0.22
- Host management gateway: 10.240.0.1
- DNS servers: 69.20.0.164 69.20.0.196
- Container management: 172.29.236.21 - 172.29.236.22
- Tunnel: 172.29.240.21 - 172.29.240.22
- Storage: 172.29.244.21 - 172.29.244.22
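Translating the controller node addresses above into inventory entries, a
fragment of ``openstack_user_config.yml`` might look like the following sketch
(only one host group is shown; the group name follows the example file):

.. code-block:: yaml

    shared-infra_hosts:
      infra1:
        ip: 172.29.236.11
      infra2:
        ip: 172.29.236.12
      infra3:
        ip: 172.29.236.13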
--------------
.. include:: navigation.txt

View File

@ -0,0 +1,112 @@
`Home <index.html>`_ OpenStack-Ansible Installation Guide
==========================
Preparing the target hosts
==========================
The following section describes the installation and configuration of
operating systems for the target hosts.
Installing the operating system
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Install the Ubuntu Server 14.04 (Trusty Tahr) LTS 64-bit operating
system on the target host. Configure at least one network interface
to access the internet or suitable local repositories.
We recommend adding the Secure Shell (SSH) server packages to the
installation on target hosts without local (console) access.
We also recommend setting your locale to en_US.UTF-8. Other locales may
work, but they are not tested or supported.
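For example, the following commands set the locale on Ubuntu (a sketch only;
adjust it if your image manages locales differently):

.. code-block:: shell-session

    # locale-gen en_US.UTF-8
    # update-locale LANG=en_US.UTF-8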
Configuring the operating system
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#. Upgrade system packages and kernel:
.. code-block:: shell-session
# apt-get dist-upgrade
#. Ensure the kernel version is ``3.13.0-34-generic`` or later (check with ``uname -r``).
#. Install additional software packages:
.. code-block:: shell-session
# apt-get install bridge-utils debootstrap ifenslave ifenslave-2.6 \
lsof lvm2 ntp ntpdate openssh-server sudo tcpdump vlan
#. Add the appropriate kernel modules to the ``/etc/modules`` file to
enable VLAN and bond interfaces:
.. code-block:: shell-session
# echo 'bonding' >> /etc/modules
# echo '8021q' >> /etc/modules
#. Configure NTP to synchronize with a suitable time source (see the example after this list).
#. Reboot the host to activate the changes and use the new kernel.
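NTP configuration is site specific. As one possible sketch for step 5, point
``/etc/ntp.conf`` at a suitable time source and restart the service; replace
``ntp.example.com`` with your own time source:

.. code-block:: shell-session

    # echo 'server ntp.example.com iburst' >> /etc/ntp.conf
    # service ntp restart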
Deploying SSH keys
~~~~~~~~~~~~~~~~~~
Ansible uses SSH for connectivity between the deployment and target hosts.
#. Copy the contents of the public key file on the deployment host to
the ``/root/.ssh/authorized_keys`` file on each target host.
#. Test public key authentication from the deployment host to each
target host. SSH should provide a shell without prompting for a
password.
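For example, ``ssh-copy-id`` performs step 1 in a single command; the address
below is the infra1 management IP used elsewhere in this guide and is only an
illustration:

.. code-block:: shell-session

    # ssh-copy-id -i /root/.ssh/id_rsa.pub root@172.29.236.101
    # ssh root@172.29.236.101 hostname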
For more information on how to generate an SSH keypair as well as best
practices, refer to `GitHub's documentation on generating SSH keys`_.
.. _GitHub's documentation on generating SSH keys: https://help.github.com/articles/generating-ssh-keys/
.. warning:: OpenStack-Ansible deployments expect the presence of a
``/root/.ssh/id_rsa.pub`` file on the deployment host.
The contents of this file are inserted into an
``authorized_keys`` file for the containers, which is a
necessary step for the Ansible playbooks. You can
override this behavior by setting the
``lxc_container_ssh_key`` variable to the public key for
the container.
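If you choose to override it, a minimal sketch in
``/etc/openstack_deploy/user_variables.yml`` might look like the following; the
key material shown is a placeholder:

.. code-block:: yaml

    lxc_container_ssh_key: "ssh-rsa AAAAB3NzaC1yc2E... root@deploy01"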
Configuring LVM
~~~~~~~~~~~~~~~
`Logical Volume Manager (LVM)`_ allows a single device to be split into multiple
logical volumes which appear as a physical storage device to the operating
system. The Block Storage (cinder) service, as well as the LXC containers that
run the OpenStack infrastructure, can optionally use LVM for their data storage.
.. note::
OpenStack-Ansible automatically configures LVM on the nodes, and
overrides any existing LVM configuration. If you had a customized LVM
configuration, edit the generated configuration file as needed.
#. To use the optional Block Storage (cinder) service, create an LVM
volume group named ``cinder-volumes`` on the Block Storage host. A
metadata size of 2048 must be specified during physical volume
creation. For example:
.. code-block:: shell-session
# pvcreate --metadatasize 2048 physical_volume_device_path
# vgcreate cinder-volumes physical_volume_device_path
#. Optionally, create an LVM volume group named ``lxc`` for container file
systems. If the ``lxc`` volume group does not exist, containers are
automatically installed into the file system under ``/var/lib/lxc`` by
default.
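For example, assuming a spare device at ``/dev/sdb`` (a placeholder path), the
``lxc`` volume group can be created the same way:

.. code-block:: shell-session

    # pvcreate --metadatasize 2048 /dev/sdb
    # vgcreate lxc /dev/sdb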
.. _Logical Volume Manager (LVM): https://en.wikipedia.org/wiki/Logical_Volume_Manager_(Linux)
--------------
.. include:: navigation.txt

View File

@ -0,0 +1,35 @@
`Home <index.html>`_ OpenStack-Ansible Installation Guide
=======================
Chapter 3. Target hosts
=======================
.. toctree::
targethosts-prepare.rst
targethosts-network.rst
targethosts-networkconfig.rst
**Figure 3.1. Installation workflow**
.. image:: figures/workflow-targethosts.png
We recommend at least five target hosts to contain the
OpenStack environment and supporting infrastructure for the OSA
installation process. On each target host, perform the following tasks:
- Name the target hosts
- Install the operating system
- Generate and set up security measures
- Update the operating system and install additional software packages
- Create LVM volume groups
- Configure networking devices
--------------
.. include:: navigation.txt