Merge "[DOCS] Clean up of draft install guide"

This commit is contained in:
Jenkins 2016-08-22 17:57:46 +00:00 committed by Gerrit Code Review
commit 5dcfd6ac50
18 changed files with 89 additions and 82 deletions

View File

@ -1,5 +1,6 @@
`Home <index.html>`_ OpenStack-Ansible Installation Guide
===========================================
Overriding OpenStack configuration defaults
===========================================
@ -22,8 +23,8 @@ guidance is available in the developer documentation in the section titled
.. _OpenStack Configuration Reference: http://docs.openstack.org/draft/config-reference/
.. _Setting overrides in configuration files: ../developer-docs/extending.html#setting-overrides-in-configuration-files
Overriding .conf files
~~~~~~~~~~~~~~~~~~~~~~
Overriding ``.conf`` files
~~~~~~~~~~~~~~~~~~~~~~~~~~
The most common use case for implementing overrides is for the
``<service>.conf`` files (for example, ``nova.conf``). These files use a
@ -85,8 +86,8 @@ To assist you in finding the appropriate variable name to use for
overrides, the general format for the variable name is:
``<service>_<filename>_<file extension>_overrides``.
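As an illustrative sketch (the variable name follows the pattern above, and
the section names and option values are hypothetical, not recommendations),
an override for ``nova.conf`` placed in
``/etc/openstack_deploy/user_variables.yml`` might look like:

.. code-block:: yaml

   # Follows the <service>_<filename>_<file extension>_overrides pattern.
   # Sections and values below are examples only.
   nova_nova_conf_overrides:
     DEFAULT:
       remove_unused_original_minimum_age_seconds: 43200
     libvirt:
       cpu_mode: host-model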
Overriding .json files
~~~~~~~~~~~~~~~~~~~~~~
Overriding ``.json`` files
~~~~~~~~~~~~~~~~~~~~~~~~~~
You can adjust the default policies applied by services in order
to implement access controls which differ from a standard OpenStack
@ -110,8 +111,8 @@ entry in ``/etc/openstack_deploy/user_variables.yml``:
identity:foo: "rule:admin_required"
identity:bar: "rule:admin_required"
Use this method for any ``JSON`` file format for all OpenStack projects
deployed in OpenStack-Ansible.
Use this method for all OpenStack projects
deployed in OpenStack-Ansible with ``JSON`` file formats.
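As a sketch only (assuming the variable follows a
``<service>_policy_overrides`` naming style; confirm the exact name for your
release), the full entry in ``/etc/openstack_deploy/user_variables.yml``
might look like:

.. code-block:: yaml

   # Hypothetical policy override; the rules mirror the snippet above.
   keystone_policy_overrides:
     identity:foo: "rule:admin_required"
     identity:bar: "rule:admin_required"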
To assist you in finding the appropriate variable name to use for
overrides, the general format for the variable name is

View File

@ -6,6 +6,7 @@ Appendix C: Customizing host and service layouts
Understanding the default layout
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The default layout of containers and services in OpenStack-Ansible is driven
by the ``/etc/openstack_deploy/openstack_user_config.yml`` file and the
contents of both the ``/etc/openstack_deploy/conf.d/`` and
@ -27,12 +28,13 @@ desire before running the installation playbooks.
Understanding host groups
-------------------------
As part of initial configuration, each target host appears in either the
``/etc/openstack_deploy/openstack_user_config.yml`` file or in files within
the ``/etc/openstack_deploy/conf.d/`` directory. We use a format for files in
``conf.d/`` which is identical to the syntax used in the
``openstack_user_config.yml`` file. These hosts are listed under one or more
headings such as ``shared-infra_hosts`` or ``storage_hosts`` which serve as
headings, such as ``shared-infra_hosts`` or ``storage_hosts``, which serve as
Ansible group mappings. We treat these groupings as mappings to the physical
hosts.
@ -40,7 +42,7 @@ The example file ``haproxy.yml.example`` in the ``conf.d/`` directory provides
a simple example of defining a host group (``haproxy_hosts``) with two hosts
(``infra1`` and ``infra2``).
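A minimal sketch of that layout (the IP addresses are placeholders) looks
similar to the following:

.. code-block:: yaml

   # conf.d/haproxy.yml -- a host group with two member hosts
   haproxy_hosts:
     infra1:
       ip: 172.29.236.101
     infra2:
       ip: 172.29.236.102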
A more complex example file is ``swift.yml.example``. Here, in addition, we
A more complex example file is ``swift.yml.example``. Here, we
specify host variables for a target host using the ``container_vars`` key.
OpenStack-Ansible applies all entries under this key as host-specific
variables to any component containers on the specific host.
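A trimmed-down sketch loosely based on that layout (the host name, address,
and ``swift_vars`` keys shown here are assumptions, not a copy of the example
file):

.. code-block:: yaml

   # conf.d/swift.yml -- host-specific variables through container_vars
   swift_hosts:
     swift-node1:
       ip: 172.29.236.151
       container_vars:
         swift_vars:
           zone: 0
           drives:
             - name: sdc
             - name: sdd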
@ -53,6 +55,7 @@ variables to any component containers on the specific host.
Understanding container groups
------------------------------
Additional group mappings can be found within files in the
``/etc/openstack_deploy/env.d/`` directory. These groupings are treated as
virtual mappings from the host groups (described above) onto the container
@ -88,7 +91,7 @@ The default layout does not rely exclusively on groups being subsets of other
groups. The ``memcache`` component group is part of the ``memcache_container``
group, as well as the ``memcache_all`` group and also contains a ``memcached``
component group. If you review the ``playbooks/memcached-install.yml``
playbook you see that the playbook applies to hosts in the ``memcached``
playbook, you see that the playbook applies to hosts in the ``memcached``
group. Other services may have more complex deployment needs. They define and
consume inventory container groups differently. Mapping components to several
groups in this way allows flexible targeting of roles and tasks.
@ -117,20 +120,19 @@ is the same for a service deployed directly onto the host.
Omit a service or component from the deployment
-----------------------------------------------
To omit a component from a deployment, several options exist.
To omit a component from a deployment, several options exist:
- You could remove the ``physical_skel`` link between the container group and
the host group. The simplest way to do this is to simply delete the related
- Remove the ``physical_skel`` link between the container group and
the host group. The simplest way to do this is to delete the related
file located in the ``env.d/`` directory.
- You could choose to not run the playbook which installs the component.
Unless you specify the component to run directly on a host using is_metal, a
container creates for this component.
- You could adjust the ``affinity`` to 0 for the host group. Unless you
specify the component to run directly on a host using is_metal, a container
creates for this component. `Affinity`_ is discussed in the initial
environment configuration section of the install guide.
- Do not run the playbook which installs the component.
Unless you specify the component to run directly on a host using
``is_metal``, a container is created for this component.
- Adjust the `affinity`_ to 0 for the host group. Unless you
specify the component to run directly on a host using ``is_metal``,
a container is created for this component. A sketch of the affinity
syntax appears after this list.
.. _Affinity: configure-initial.html#affinity
.. _affinity: app-advanced-config-affinity.rst
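A minimal sketch of the affinity syntax in
``/etc/openstack_deploy/openstack_user_config.yml`` (the host name, address,
and container group are placeholders):

.. code-block:: yaml

   # Setting an affinity of 0 prevents this container from being created
   # on the host.
   shared-infra_hosts:
     infra1:
       affinity:
         memcache_container: 0
       ip: 172.29.236.101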
Deploying existing components on dedicated hosts
------------------------------------------------
@ -160,10 +162,10 @@ segment of the ``env.d/galera.yml`` file might look like:
``is_metal: true`` property. We include it here as a recipe for the more
commonly requested layout.
Since we define the new container group (``db_containers`` above) we must
Since we define the new container group (``db_containers`` above), we must
assign that container group to a host group. To assign the new container
group to a new host group, provide a ``physical_skel`` for the new host group
(in a new or existing file, such as ``env.d/galera.yml``) like the following:
(in a new or existing file, such as ``env.d/galera.yml``). For example:
.. code-block:: yaml
@ -176,7 +178,7 @@ group to a new host group, provide a ``physical_skel`` for the new host group
- hosts
Lastly, define the host group (db_hosts above) in a ``conf.d/`` file (such as
``galera.yml``).
``galera.yml``):
.. code-block:: yaml

View File

@ -1,8 +1,8 @@
`Home <index.html>`__ OpenStack-Ansible Installation Guide
========
Security
========
=====================
Appendix D: Security
=====================
The OpenStack-Ansible project provides several security features for
OpenStack deployments. This section of documentation covers those

View File

@ -10,4 +10,5 @@ Appendices
app-configfiles.rst
app-resources.rst
app-custom-layouts.rst
app-security.rst
app-advanced-config-options.rst

View File

@ -1,5 +1,6 @@
`Home <index.html>`_ OpenStack-Ansible Installation Guide
===============================
Configuring service credentials
===============================
@ -11,8 +12,9 @@ security by encrypting any files containing credentials.
Adjust permissions on these files to restrict access by non-privileged
users.
Note that the following options configure passwords for the web
interfaces:
.. note::
The following options configure passwords for the web interfaces.
- ``keystone_auth_admin_password`` configures the ``admin`` tenant
password for both the OpenStack API and dashboard access.

View File

@ -30,8 +30,8 @@ to the deployment of your OpenStack environment.
There are various types of physical hardware that can host the containers
deployed by OpenStack-Ansible. For example, hosts listed in the
`shared-infra_hosts` run containers for many of the shared services that
your OpenStack environments requires. Some of these services include databases,
``shared-infra_hosts`` run containers for many of the shared services that
your OpenStack environment requires. Some of these services include databases,
memcached, and RabbitMQ. There are several other host types that contain
other types of containers and all of these are listed in
``openstack_user_config.yml``.

View File

@ -1,13 +1,12 @@
`Home <index.html>`_ OpenStack-Ansible Installation Guide
==================================
openstack_user_config.yml examples
==================================
======================================
``openstack_user_config.yml`` examples
======================================
The ``/etc/openstack_deploy/openstack_user_config.yml`` configuration file
contains parameters to configure target hosts and target host networking.
Examples are provided below for a test environment and production environment.
(WIP)
Test environment
~~~~~~~~~~~~~~~~
@ -22,9 +21,13 @@ Production environment
Setting an MTU on a default lxc bridge (lxcbr0)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
To modify a container MTU it is also required to set ``lxc_net_mtu`` to
a value other than 1500 in ``user_variables.yml``. It will also be necessary
to modify the ``provider_networks`` subsection to reflect the change.
To modify a container MTU, set ``lxc_net_mtu`` to
a value other than 1500 in ``user_variables.yml``.
.. note::
It is necessary to modify the ``provider_networks`` subsection to
reflect the change.
This defines the MTU on the ``lxcbr0`` interface. If the interface is
already up, an ifup/ifdown is required for the changes to take effect.
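A minimal sketch of the related settings (the 9000 byte value assumes a
jumbo-frame capable network, and the ``container_mtu`` key is shown as an
assumed per-network override; verify both against your release):

.. code-block:: yaml

   # /etc/openstack_deploy/user_variables.yml
   lxc_net_mtu: 9000

.. code-block:: yaml

   # One provider_networks entry in openstack_user_config.yml
   - network:
       container_bridge: "br-mgmt"
       container_type: "veth"
       container_interface: "eth1"
       container_mtu: "9000"
       type: "raw"
       group_binds:
         - all_containers
         - hosts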

View File

@ -21,15 +21,15 @@ configuration directives. These files must be modified to define the
target environment before running the Ansible playbooks. Configuration
tasks include:
- target host networking to define bridge interfaces and
networks
- Target host networking to define bridge interfaces and
networks.
- a list of target hosts on which to install the software
- A list of target hosts on which to install the software.
- virtual and physical network relationships for OpenStack
Networking (neutron)
- Virtual and physical network relationships for OpenStack
Networking (neutron). A sketch of such a definition appears after this
list.
- passwords for all services
- Passwords for all services.
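A minimal sketch of a provider network definition in
``openstack_user_config.yml`` (the bridge, interface, and VLAN range values
are placeholders, and the key layout is assumed from typical examples):

.. code-block:: yaml

   global_overrides:
     provider_networks:
       - network:
           container_bridge: "br-vlan"
           container_type: "veth"
           container_interface: "eth11"
           type: "vlan"
           range: "101:200"
           net_name: "vlan"
           group_binds:
             - neutron_linuxbridge_agent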
--------------

View File

@ -7,8 +7,8 @@ and is currently under development.
`Home <index.html>`_ OpenStack-Ansible Installation Guide
Table of Contents
^^^^^^^^^^^^^^^^^
Table of contents
~~~~~~~~~~~~~~~~~
.. toctree::
:maxdepth: 2

View File

@ -31,9 +31,11 @@ Before running any playbook, check the integrity of your configuration files:
YAML compliant. Guidelines can be found here:
`<http://docs.ansible.com/ansible/YAMLSyntax.html>`_
#. Check the integrity of your YAML files:
#. Check the integrity of your YAML files.
.. note:: Here is an online linter: `<http://www.yamllint.com/>`_
.. note::
To check your YAML syntax online, we recommend: `<http://www.yamllint.com/>`_.
#. Run your command with ``syntax-check``:
@ -41,11 +43,9 @@ Before running any playbook, check the integrity of your configuration files:
# openstack-ansible setup-infrastructure.yml --syntax-check
#. Recheck that all indentation is correct.
.. note::
The syntax of the configuration files can be correct
while not being meaningful for OpenStack-Ansible.
#. Recheck that all indentation is correct. This is important as the syntax
of the configuration files can be correct while not being meaningful for
OpenStack-Ansible.
Run playbooks
~~~~~~~~~~~~~
@ -204,9 +204,9 @@ configuration and testing.
| e59e4379730b41209f036bbeac51b181 | keystone |
+----------------------------------+--------------------+
**Verifying the dashboard**
**Verifying the Dashboard**
#. With a web browser, access the dashboard using the external load
#. With a web browser, access the Dashboard using the external load
balancer IP address defined by the ``external_lb_vip_address`` option
in the ``/etc/openstack_deploy/openstack_user_config.yml`` file. The
dashboard uses HTTPS on port 443.
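A minimal sketch of that option (the address is a documentation placeholder,
and its placement under ``global_overrides`` is assumed from typical example
configurations):

.. code-block:: yaml

   # /etc/openstack_deploy/openstack_user_config.yml
   global_overrides:
     external_lb_vip_address: 203.0.113.10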

View File

@ -6,8 +6,8 @@
Host layout
===========
The hosts are called target hosts because Ansible deploys the OSA
environment within these hosts. We recommend a
The hosts are called target hosts because Ansible deploys the
OpenStack-Ansible environment within these hosts. We recommend a
deployment host from which Ansible orchestrates the deployment
process. One of the target hosts can function as the deployment host.

View File

@ -63,7 +63,7 @@ Logging hosts
environment. Reserve a minimum of 50GB of disk space for storing
logs on the logging hosts.
Hosts that provide Block Storage (cinder) volumes must have logical volume
Hosts that provide Block Storage volumes must have logical volume
manager (LVM) support. Ensure those hosts have a ``cinder-volumes`` volume
group that OpenStack-Ansible can configure for use with cinder.
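A minimal sketch of a matching ``conf.d/`` entry (the host name, address,
and backend name are placeholders; the ``cinder_backends`` layout is assumed
from typical examples):

.. code-block:: yaml

   # conf.d/cinder.yml -- LVM backend using the cinder-volumes volume group
   storage_hosts:
     storage1:
       ip: 172.29.236.121
       container_vars:
         cinder_backends:
           limit_container_types: cinder_volume
           lvm:
             volume_backend_name: LVM_iSCSI
             volume_driver: cinder.volume.drivers.lvm.LVMVolumeDriver
             volume_group: cinder-volumes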
@ -83,7 +83,7 @@ Network requirements
network interface. This works for small environments, but it can cause
problems when your environment grows.
For the best performance, reliability and scalability in a production
For the best performance, reliability, and scalability in a production
environment, deployers should consider a network configuration that contains
the following features:
@ -94,8 +94,7 @@ the following features:
hardware, rather than in the server's main CPU.
* Gigabit or 10 Gigabit Ethernet: Supports higher network speeds, which can
also improve storage performance when using the Block Storage (cinder)
service.
also improve storage performance when using the Block Storage service.
* Jumbo frames: Increases network performance by allowing more data to be sent
in each packet.

View File

@ -7,7 +7,6 @@ Storage architecture
OpenStack-Ansible supports Block Storage (cinder), Ephemeral storage
(nova), Image service (glance), and Object Storage (swift).
Block Storage (cinder)
~~~~~~~~~~~~~~~~~~~~~~
@ -33,8 +32,8 @@ NFS) set up a container inside one of the infra hosts.
For more information: `<https://specs.openstack.org/openstack/cinder-specs/specs/mitaka/cinder-volume-active-active-support.html>`_.
Configuring the Block Storage service (cinder)
----------------------------------------------
Configuring the Block Storage service
-------------------------------------
Configure ``cinder-api`` infra hosts with ``br-storage`` and ``br-mgmt``.
Configure ``cinder-volumes`` hosts with ``br-storage`` and ``br-mgmt``.
@ -59,8 +58,8 @@ actual swift objects are stored on separate physical hosts.
The swift proxy service is responsible for storage, retrieval, encoding and
decoding of objects from an object server.
Configuring the Object Storage (swift)
--------------------------------------
Configuring the Object Storage
------------------------------
Ensure the swift proxy hosts are configured with ``br-mgmt`` and
``br-storage``. Ensure storage hosts are on ``br-storage``. When using
@ -97,8 +96,8 @@ creation or deletion). These messages are then sent to
``nova-conductor`` which in turn pushes messages to ``nova-compute``
on the compute host.
Configuring the ephemeral storage (nova)
----------------------------------------
Configuring the ephemeral storage
---------------------------------
All nova containers on the infra hosts communicate using the AMQP service over
the management network ``br-mgmt``.
@ -124,8 +123,8 @@ Image service (glance)
The glance API and volume service runs in the glance container on
infra hosts.
Configuring the Image service (glance)
--------------------------------------
Configuring the Image service
-----------------------------
Configure the glance-volume container to use the ``br-storage`` and
``br-mgmt`` interfaces.

View File

@ -5,7 +5,7 @@ Installation workflow
=====================
This diagram shows the general workflow associated with an
OpenStack-Ansible (OSA) installation.
OpenStack-Ansible installation.
.. figure:: figures/installation-workflow-overview.png

View File

@ -11,7 +11,6 @@ Overview
overview-network-arch.rst
overview-storage-arch.rst
overview-requirements.rst
overview-security.rst
overview-workflow.rst
--------------

View File

@ -10,8 +10,7 @@ Production environment
This example allows you to use your own parameters for the deployment.
If you followed the previously proposed design, the following table shows
bridges that are to be configured on hosts.
bridges that are to be configured on hosts:
+-------------+-----------------------+-------------------------------------+
| Bridge name | Best configured on | With a static IP |
+=============+=======================+=====================================+

View File

@ -67,14 +67,16 @@ practices, refer to `GitHub's documentation on generating SSH keys`_.
.. _GitHub's documentation on generating SSH keys: https://help.github.com/articles/generating-ssh-keys/
.. warning:: OpenStack-Ansible deployments expect the presence of a
``/root/.ssh/id_rsa.pub`` file on the deployment host.
The contents of this file is inserted into an
``authorized_keys`` file for the containers, which is a
necessary step for the Ansible playbooks. You can
override this behavior by setting the
``lxc_container_ssh_key`` variable to the public key for
the container.
.. warning::
OpenStack-Ansible deployments expect the presence of a
``/root/.ssh/id_rsa.pub`` file on the deployment host.
The contents of this file are inserted into an
``authorized_keys`` file for the containers, which is a
necessary step for the Ansible playbooks. You can
override this behavior by setting the
``lxc_container_ssh_key`` variable to the public key for
the container.
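A minimal sketch of that override (the key material is a truncated
placeholder; setting it in ``user_variables.yml`` is an assumption about
where such overrides normally live):

.. code-block:: yaml

   # Public key inserted into the containers' authorized_keys file.
   lxc_container_ssh_key: "ssh-rsa AAAAB3Nza...EXAMPLE deployer@deployment-host"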
Configuring LVM
~~~~~~~~~~~~~~~

View File

@ -17,7 +17,7 @@ Target hosts
On each target host, perform the following tasks:
- Naming target hosts
- Name the target hosts
- Install the operating system