[docs] Revise deployment configuration chapter

Reorganised content based on feedback and IA proposal in
https://etherpad.openstack.org/p/osa-install-guide-IA:

1. Move affinity content to the appendix
2. Move security hardening configuration to the appendix
3. Create an advanced configuration section in the appendix
4. Delete configuring hosts and configuring target host networking information,
and create a configuration file examples section
5. Move glance configuration information to the developer docs
6. Move overriding configuration defaults to the appendix
7. Move checking configuration file content to the installation chapter

Change-Id: I71efaf2472b1233f1b1a1367fcb00ca598d27ea9
Implements: blueprint osa-install-guide-overhaul
daz 2016-08-02 14:56:07 +10:00 committed by Alexandra Settle
parent 19b68d8758
commit 5cc9d0b004
17 changed files with 245 additions and 768 deletions

View File

@ -0,0 +1,97 @@
========
Affinity
========
OpenStack-Ansible's dynamic inventory generation has a concept called
`affinity`. This determines how many containers of a similar type are deployed
onto a single physical host.
Using `shared-infra_hosts` as an example, consider this
``openstack_user_config.yml``:
.. code-block:: yaml
shared-infra_hosts:
infra1:
ip: 172.29.236.101
infra2:
ip: 172.29.236.102
infra3:
ip: 172.29.236.103
Three hosts are assigned to the `shared-infra_hosts` group, so
OpenStack-Ansible ensures that each host runs a single database container,
a single memcached container, and a single RabbitMQ container. Each host has
an affinity of 1 by default, which means each host runs one of each
container type.
You can skip the deployment of RabbitMQ altogether. This is
helpful when deploying a standalone swift environment. If you need
this configuration, your ``openstack_user_config.yml`` would look like this:
.. code-block:: yaml
shared-infra_hosts:
infra1:
affinity:
rabbit_mq_container: 0
ip: 172.29.236.101
infra2:
affinity:
rabbit_mq_container: 0
ip: 172.29.236.102
infra3:
affinity:
rabbit_mq_container: 0
ip: 172.29.236.103
The configuration above deploys a memcached container and a database
container on each host, without the RabbitMQ containers.
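You can also raise the affinity above the default of 1 to place more than
one container of a given type on a host. The following sketch is illustrative
only: it assumes that affinity values greater than 1 are accepted by the
dynamic inventory and that ``galera_container`` is the container type name
used for the database containers in your environment:
.. code-block:: yaml
   shared-infra_hosts:
     infra1:
       affinity:
         # Illustrative only: run two database containers on infra1.
         # Assumes galera_container is the correct container type name.
         galera_container: 2
       ip: 172.29.236.101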
.. _security_hardening:
Security hardening
~~~~~~~~~~~~~~~~~~
OpenStack-Ansible automatically applies host security hardening configurations
using the `openstack-ansible-security`_ role. The role uses a version of the
`Security Technical Implementation Guide (STIG)`_ that has been adapted for
Ubuntu 14.04 and OpenStack.
The role applies to physical hosts within an OpenStack-Ansible deployment
that operate as any type of node, whether infrastructure or compute. The
role is enabled by default. You can disable it by changing a variable
within ``user_variables.yml``:
.. code-block:: yaml
apply_security_hardening: false
When the variable is set to ``true``, the ``setup-hosts.yml`` playbook applies
the role during deployments.
You can apply security configurations to an existing environment or audit
an environment using a playbook supplied with OpenStack-Ansible:
.. code-block:: bash
# Perform a quick audit using Ansible's check mode
openstack-ansible --check security-hardening.yml
# Apply security hardening configurations
openstack-ansible security-hardening.yml
For more details on the security configurations that will be applied, refer to
the `openstack-ansible-security`_ documentation. Review the `Configuration`_
section of the openstack-ansible-security documentation to find out how to
fine-tune certain security configurations.
.. _openstack-ansible-security: http://docs.openstack.org/developer/openstack-ansible-security/
.. _Security Technical Implementation Guide (STIG): https://en.wikipedia.org/wiki/Security_Technical_Implementation_Guide
.. _Configuration: http://docs.openstack.org/developer/openstack-ansible-security/configuration.html
--------------
.. include:: navigation.txt

View File

@ -10,6 +10,8 @@ Contents:
.. toctree::
configure-affinity.rst
configure-glance.rst
configure-hypervisor.rst
configure-nova.rst
configure-ironic.rst

View File

@ -0,0 +1,16 @@
`Home <index.html>`_ OpenStack-Ansible Installation Guide
======================
Advanced configuration
======================
.. toctree::
:maxdepth: 2
app-advanced-config-override
app-advanced-config-security
app-advanced-config-sslcertificates
--------------
.. include:: navigation.txt

View File

@ -0,0 +1,49 @@
`Home <index.html>`_ OpenStack-Ansible Installation Guide
.. _security_hardening:
==================
Security hardening
==================
OpenStack-Ansible automatically applies host security hardening configurations
using the `openstack-ansible-security`_ role. The role uses a version of the
`Security Technical Implementation Guide (STIG)`_ that has been adapted for
Ubuntu 14.04 and OpenStack.
The role applies to physical hosts within an OpenStack-Ansible deployment
that operate as any type of node, whether infrastructure or compute. The
role is enabled by default. You can disable it by changing a variable
within ``user_variables.yml``:
.. code-block:: yaml
apply_security_hardening: false
When the variable is set to ``true``, the ``setup-hosts.yml`` playbook applies
the role during deployments.
You can apply security configurations to an existing environment or audit
an environment using a playbook supplied with OpenStack-Ansible:
.. code-block:: bash
# Perform a quick audit using Ansible's check mode
openstack-ansible --check security-hardening.yml
# Apply security hardening configurations
openstack-ansible security-hardening.yml
For more details on the security configurations that will be applied, refer to
the `openstack-ansible-security`_ documentation. Review the `Configuration`_
section of the openstack-ansible-security documentation to find out how to
fine-tune certain security configurations.
.. _openstack-ansible-security: http://docs.openstack.org/developer/openstack-ansible-security/
.. _Security Technical Implementation Guide (STIG): https://en.wikipedia.org/wiki/Security_Technical_Implementation_Guide
.. _Configuration: http://docs.openstack.org/developer/openstack-ansible-security/configuration.html
--------------
.. include:: navigation.txt

View File

@ -1,5 +1,6 @@
`Home <index.html>`_ OpenStack-Ansible Installation Guide
=======================================
Securing services with SSL certificates
=======================================

View File

@ -8,6 +8,7 @@ Appendices
:maxdepth: 2
app-configfiles.rst
app-advanced-config-options.rst
app-resources.rst
app-plumgrid.rst
app-nuage.rst

View File

@ -1,30 +0,0 @@
`Home <index.html>`_ OpenStack-Ansible Installation Guide
Checking the integrity of your configuration files
==================================================
Execute the following steps before running any playbook:
#. Ensure all files edited in ``/etc/`` are Ansible
YAML compliant. Guidelines can be found here:
`<http://docs.ansible.com/ansible/YAMLSyntax.html>`_
#. Check the integrity of your YAML files:
.. note:: Here is an online linter: `<http://www.yamllint.com/>`_
#. Run your command with ``syntax-check``:
.. code-block:: shell-session
# openstack-ansible setup-infrastructure.yml --syntax-check
#. Recheck that all indentation is correct.
.. note::
The syntax of the configuration files can be correct
while not being meaningful for OpenStack-Ansible.
--------------
.. include:: navigation.txt

View File

@ -1,132 +0,0 @@
`Home <index.html>`_ OpenStack-Ansible Installation Guide
Configuring target hosts
========================
Modify the ``/etc/openstack_deploy/openstack_user_config.yml`` file to
configure the target hosts.
Do not assign the same IP address to different target hostnames.
Unexpected results may occur. Each IP address and hostname must be a
matching pair. To use the same host in multiple roles, for example
infrastructure and networking, specify the same hostname and IP in each
section.
Use short hostnames rather than fully-qualified domain names (FQDN) to
prevent length limitation issues with LXC and SSH. For example, a
suitable short hostname for a compute host might be:
``123456-Compute001``.
Unless otherwise stated, replace ``*_IP_ADDRESS`` with the IP address of
the ``br-mgmt`` container management bridge on each target host.
#. Configure a list containing at least three infrastructure target
hosts in the ``shared-infra_hosts`` section:
.. code-block:: yaml
shared-infra_hosts:
infra01:
ip: INFRA01_IP_ADDRESS
infra02:
ip: INFRA02_IP_ADDRESS
infra03:
ip: INFRA03_IP_ADDRESS
infra04: ...
#. Configure a list containing at least two infrastructure target
hosts in the ``os-infra_hosts`` section (you can reuse
previous hosts as long as their name and ip is consistent):
.. code-block:: yaml
os-infra_hosts:
infra01:
ip: INFRA01_IP_ADDRESS
infra02:
ip: INFRA02_IP_ADDRESS
infra03:
ip: INFRA03_IP_ADDRESS
infra04: ...
#. Configure a list of at least one keystone target host in the
``identity_hosts`` section:
.. code-block:: yaml
identity_hosts:
infra1:
ip: IDENTITY01_IP_ADDRESS
infra2: ...
#. Configure a list containing at least one network target host in the
``network_hosts`` section:
.. code-block:: yaml
network_hosts:
network01:
ip: NETWORK01_IP_ADDRESS
network02: ...
Providing more than one network host in the ``network_hosts`` block will
enable `L3HA support using VRRP`_ in the ``neutron-agent`` containers.
.. _L3HA support using VRRP: http://docs.openstack.org/liberty/networking-guide/scenario_l3ha_lb.html
#. Configure a list containing at least one compute target host in the
``compute_hosts`` section:
.. code-block:: yaml
compute_hosts:
compute001:
ip: COMPUTE001_IP_ADDRESS
compute002: ...
#. Configure a list containing at least one logging target host in the
``log_hosts`` section:
.. code-block:: yaml
log_hosts:
logging01:
ip: LOGGER1_IP_ADDRESS
logging02: ...
#. Configure a list containing at least one repository target host in the
``repo-infra_hosts`` section:
.. code-block:: yaml
repo-infra_hosts:
repo01:
ip: REPO01_IP_ADDRESS
repo02:
ip: REPO02_IP_ADDRESS
repo03:
ip: REPO03_IP_ADDRESS
repo04: ...
The repository typically resides on one or more infrastructure hosts.
#. Configure a list containing at least one optional storage host in the
``storage_hosts`` section:
.. code-block:: yaml
storage_hosts:
storage01:
ip: STORAGE01_IP_ADDRESS
storage02: ...
Each storage host requires additional configuration to define the back end
driver.
The default configuration includes an optional storage host. To
install without storage hosts, comment out the stanza beginning with
the *storage_hosts:* line.
--------------
.. include:: navigation.txt

View File

@ -1,5 +1,6 @@
`Home <index.html>`_ OpenStack-Ansible Installation Guide
=================================
Initial environment configuration
=================================
@ -13,8 +14,7 @@ for Ansible. Start by getting those files into the correct places:
.. note::
As of Newton, the ``env.d`` directory has been moved from this source
directory to ``playbooks/inventory/``. See `Appendix H`_ for more
details on this change.
directory to ``playbooks/inventory/``.
#. Change to the ``/etc/openstack_deploy`` directory.
@ -39,99 +39,6 @@ other types of containers and all of these are listed in
For details about how the inventory is generated from the environment
configuration, see :ref:`developer-inventory`.
Affinity
~~~~~~~~
OpenStack-Ansible's dynamic inventory generation has a concept called
`affinity`. This determines how many containers of a similar type are deployed
onto a single physical host.
Using `shared-infra_hosts` as an example, consider this
``openstack_user_config.yml``:
.. code-block:: yaml
shared-infra_hosts:
infra1:
ip: 172.29.236.101
infra2:
ip: 172.29.236.102
infra3:
ip: 172.29.236.103
Three hosts are assigned to the `shared-infra_hosts` group,
OpenStack-Ansible ensures that each host runs a single database container,
a single memcached container, and a single RabbitMQ container. Each host has
an affinity of 1 by default, and that means each host will run one of each
container type.
You can skip the deployment of RabbitMQ altogether. This is
helpful when deploying a standalone swift environment. If you need
this configuration, your ``openstack_user_config.yml`` would look like this:
.. code-block:: yaml
shared-infra_hosts:
infra1:
affinity:
rabbit_mq_container: 0
ip: 172.29.236.101
infra2:
affinity:
rabbit_mq_container: 0
ip: 172.29.236.102
infra3:
affinity:
rabbit_mq_container: 0
ip: 172.29.236.103
The configuration above deploys a memcached container and a database
container on each host, without the RabbitMQ containers.
.. _security_hardening:
Security hardening
~~~~~~~~~~~~~~~~~~
OpenStack-Ansible automatically applies host security hardening configurations
using the `openstack-ansible-security`_ role. The role uses a version of the
`Security Technical Implementation Guide (STIG)`_ that has been adapted for
Ubuntu 14.04 and OpenStack.
The role is applicable to physical hosts within an OpenStack-Ansible deployment
that are operating as any type of node, infrastructure or compute. By
default, the role is enabled. You can disable it by changing a variable
within ``user_variables.yml``:
.. code-block:: yaml
apply_security_hardening: false
When the variable is set to ``true``, the ``setup-hosts.yml`` playbook applies
the role during deployments.
You can apply security configurations to an existing environment or audit
an environment using a playbook supplied with OpenStack-Ansible:
.. code-block:: bash
# Perform a quick audit using Ansible's check mode
openstack-ansible --check security-hardening.yml
# Apply security hardening configurations
openstack-ansible security-hardening.yml
For more details on the security configurations that will be applied, refer to
the `openstack-ansible-security`_ documentation. Review the `Configuration`_
section of the openstack-ansible-security documentation to find out how to
fine-tune certain security configurations.
.. _openstack-ansible-security: http://docs.openstack.org/developer/openstack-ansible-security/
.. _Security Technical Implementation Guide (STIG): https://en.wikipedia.org/wiki/Security_Technical_Implementation_Guide
.. _Configuration: http://docs.openstack.org/developer/openstack-ansible-security/configuration.html
.. _Appendix H: ../install-guide/app-custom-layouts.html
--------------
.. include:: navigation.txt

View File

@ -1,296 +0,0 @@
`Home <index.html>`_ OpenStack-Ansible Installation Guide
.. _network_configuration:
Configuring target host networking
==================================
Edit the ``/etc/openstack_deploy/openstack_user_config.yml`` file to
configure target host networking.
#. Configure the IP address ranges associated with each network in the
``cidr_networks`` section:
.. code-block:: yaml
cidr_networks:
# Management (same range as br-mgmt on the target hosts)
container: CONTAINER_MGMT_CIDR
# Tunnel endpoints for VXLAN tenant networks
# (same range as br-vxlan on the target hosts)
tunnel: TUNNEL_CIDR
# Storage (same range as br-storage on the target hosts)
storage: STORAGE_CIDR
Replace ``*_CIDR`` with the appropriate IP address range in CIDR
notation. For example, 203.0.113.0/24.
Use the same IP address ranges as the underlying physical network
interfaces or bridges in `the section called "Configuring
the network" <targethosts-network.html>`_. For example, if the
container network uses 203.0.113.0/24, the ``CONTAINER_MGMT_CIDR``
also uses 203.0.113.0/24.
The default configuration includes the optional storage and service
networks. To remove one or both of them, comment out the appropriate
network name.
#. Configure the existing IP addresses in the ``used_ips`` section:
.. code-block:: yaml
used_ips:
- EXISTING_IP_ADDRESSES
Replace ``EXISTING_IP_ADDRESSES`` with a list of existing IP
addresses in the ranges defined in the previous step. This list
should include all IP addresses manually configured on target hosts,
internal load balancers, service network bridge, deployment hosts and
any other devices to avoid conflicts during the automatic IP address
generation process.
Add individual IP addresses on separate lines. For example, to
prevent use of 203.0.113.101 and 201:
.. code-block:: yaml
used_ips:
- 203.0.113.101
- 203.0.113.201
Add a range of IP addresses using a comma. For example, to prevent
use of 203.0.113.101-201:
.. code-block:: yaml
used_ips:
- "203.0.113.101,203.0.113.201"
#. Configure load balancing in the ``global_overrides`` section:
.. code-block:: yaml
global_overrides:
# Internal load balancer VIP address
internal_lb_vip_address: INTERNAL_LB_VIP_ADDRESS
# External (DMZ) load balancer VIP address
external_lb_vip_address: EXTERNAL_LB_VIP_ADDRESS
# Container network bridge device
management_bridge: "MGMT_BRIDGE"
# Tunnel network bridge device
tunnel_bridge: "TUNNEL_BRIDGE"
Replace ``INTERNAL_LB_VIP_ADDRESS`` with the internal IP address of
the load balancer. Infrastructure and OpenStack services use this IP
address for internal communication.
Replace ``EXTERNAL_LB_VIP_ADDRESS`` with the external, public, or
DMZ IP address of the load balancer. Users primarily use this IP
address for external API and web interfaces access.
Replace ``MGMT_BRIDGE`` with the container bridge device name,
typically ``br-mgmt``.
Replace ``TUNNEL_BRIDGE`` with the tunnel/overlay bridge device
name, typically ``br-vxlan``.
#. Configure the management network in the ``provider_networks`` subsection:
.. code-block:: yaml
provider_networks:
- network:
group_binds:
- all_containers
- hosts
type: "raw"
container_bridge: "br-mgmt"
container_interface: "eth1"
container_type: "veth"
ip_from_q: "container"
is_container_address: true
is_ssh_address: true
#. Configure optional networks in the ``provider_networks`` subsection. For
example, a storage network:
.. code-block:: yaml
provider_networks:
- network:
group_binds:
- glance_api
- cinder_api
- cinder_volume
- nova_compute
type: "raw"
container_bridge: "br-storage"
container_type: "veth"
container_interface: "eth2"
ip_from_q: "storage"
The default configuration includes the optional storage and service
networks. To remove one or both of them, comment out the entire
associated stanza beginning with the ``- network:`` line.
#. Configure OpenStack Networking VXLAN tunnel/overlay networks in the
``provider_networks`` subsection:
.. code-block:: yaml
provider_networks:
- network:
group_binds:
- neutron_linuxbridge_agent
container_bridge: "br-vxlan"
container_type: "veth"
container_interface: "eth10"
ip_from_q: "tunnel"
type: "vxlan"
range: "TUNNEL_ID_RANGE"
net_name: "vxlan"
Replace ``TUNNEL_ID_RANGE`` with the tunnel ID range. For example,
1:1000.
#. Configure OpenStack Networking flat (untagged) and VLAN (tagged) networks
in the ``provider_networks`` subsection:
.. code-block:: yaml
provider_networks:
- network:
group_binds:
- neutron_linuxbridge_agent
container_bridge: "br-vlan"
container_type: "veth"
container_interface: "eth12"
host_bind_override: "PHYSICAL_NETWORK_INTERFACE"
type: "flat"
net_name: "flat"
- network:
group_binds:
- neutron_linuxbridge_agent
container_bridge: "br-vlan"
container_type: "veth"
container_interface: "eth11"
type: "vlan"
range: VLAN_ID_RANGE
net_name: "vlan"
Replace ``VLAN_ID_RANGE`` with the VLAN ID range for each VLAN network.
For example, 1:1000. Supports more than one range of VLANs on a particular
network. For example, 1:1000,2001:3000. Create a similar stanza for each
additional network.
Replace ``PHYSICAL_NETWORK_INTERFACE`` with the network interface used for
flat networking. Ensure this is a physical interface on the same L2 network
being used with the ``br-vlan`` devices. If no additional network interface is
available, a veth pair plugged into the ``br-vlan`` bridge can provide the
necessary interface.
The following is an example of creating a ``veth-pair`` within an existing bridge:
.. code-block:: text
# Create veth pair, do not abort if already exists
pre-up ip link add br-vlan-veth type veth peer name PHYSICAL_NETWORK_INTERFACE || true
# Set both ends UP
pre-up ip link set br-vlan-veth up
pre-up ip link set PHYSICAL_NETWORK_INTERFACE up
# Delete veth pair on DOWN
post-down ip link del br-vlan-veth || true
bridge_ports br-vlan-veth
Adding static routes to network interfaces
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Optionally, you can add one or more static routes to interfaces within
containers. Each route requires a destination network in CIDR notation
and a gateway. For example:
.. code-block:: yaml
provider_networks:
- network:
group_binds:
- glance_api
- cinder_api
- cinder_volume
- nova_compute
type: "raw"
container_bridge: "br-storage"
container_interface: "eth2"
container_type: "veth"
ip_from_q: "storage"
static_routes:
- cidr: 10.176.0.0/12
gateway: 172.29.248.1
This example adds the following content to the
``/etc/network/interfaces.d/eth2.cfg`` file in the appropriate
containers:
.. code-block:: shell-session
post-up ip route add 10.176.0.0/12 via 172.29.248.1 || true
The ``cidr`` and ``gateway`` values must *both* be specified, or the
inventory script will raise an error. Accuracy of the network information
is not checked within the inventory script, just that the keys and values
are present.
Setting an MTU on a network interface
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Larger MTU's can be useful on certain networks, especially storage networks.
Add a ``container_mtu`` attribute within the ``provider_networks`` dictionary
to set a custom MTU on the container network interfaces that attach to a
particular network:
.. code-block:: yaml
provider_networks:
- network:
group_binds:
- glance_api
- cinder_api
- cinder_volume
- nova_compute
type: "raw"
container_bridge: "br-storage"
container_interface: "eth2"
container_type: "veth"
container_mtu: "9000"
ip_from_q: "storage"
static_routes:
- cidr: 10.176.0.0/12
gateway: 172.29.248.1
The example above enables `jumbo frames`_ by setting the MTU on the storage
network to 9000.
.. note:: If you are editing the MTU for your networks, you may also want
to adapt your neutron MTU settings (depending on your needs). Please
refer to the neutron documentation (`Neutron MTU considerations`_).
.. _jumbo frames: https://en.wikipedia.org/wiki/Jumbo_frame
.. _neutron MTU considerations: http://docs.openstack.org/mitaka/networking-guide/adv-config-mtu.html
Setting an MTU on a default lxc bridge (lxcbr0)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
To modify a container MTU it is also required to set ``lxc_net_mtu`` to
a value other than 1500 in ``user_variables.yml``. It will also be necessary
to modify the ``provider_networks`` subsection to reflect the change.
This will define the mtu on the lxcbr0 interface. An ifup/ifdown will
be required if the interface is already up for the changes to take effect.
--------------
.. include:: navigation.txt

View File

@ -0,0 +1,34 @@
`Home <index.html>`_ OpenStack-Ansible Installation Guide
==================================
openstack_user_config.yml examples
==================================
The ``/etc/openstack_deploy/openstack_user_config.yml`` configuration file
contains the parameters that configure the target hosts and target host
networking. Examples are provided below for a test environment and a
production environment.
(WIP)
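As a rough orientation while the full examples are a work in progress, the
following minimal sketch shows the general shape of the file. Section names
are taken from the guidance elsewhere in this guide; all host names and IP
addresses are placeholders:
.. code-block:: yaml
   # Minimal sketch only; every value below is a placeholder.
   cidr_networks:
     container: 172.29.236.0/22
     tunnel: 172.29.240.0/22
     storage: 172.29.244.0/22
   used_ips:
     - "172.29.236.1,172.29.236.50"
   global_overrides:
     internal_lb_vip_address: 172.29.236.9
     external_lb_vip_address: 203.0.113.9
     management_bridge: "br-mgmt"
     tunnel_bridge: "br-vxlan"
   shared-infra_hosts:
     infra1:
       ip: 172.29.236.101
   compute_hosts:
     compute1:
       ip: 172.29.236.121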
Test environment
~~~~~~~~~~~~~~~~
.. TODO Parse openstack_user_config.yml examples when done.
Production environment
~~~~~~~~~~~~~~~~~~~~~~
.. TODO Parse openstack_user_config.yml examples when done.
Setting an MTU on a default lxc bridge (lxcbr0)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
To modify the container MTU, set ``lxc_net_mtu`` in ``user_variables.yml``
to a value other than 1500, and modify the ``provider_networks`` subsection
to reflect the change. This defines the MTU on the ``lxcbr0`` interface.
If the interface is already up, bring it down and up again (ifdown/ifup)
for the change to take effect.
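As a minimal sketch, assuming a jumbo frame MTU of 9000 suits the underlying
network, the ``user_variables.yml`` entry would look like this:
.. code-block:: yaml
   # Illustrative value only; use an MTU that matches your physical network.
   lxc_net_mtu: 9000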
--------------
.. include:: navigation.txt

View File

@ -1,49 +1,35 @@
`Home <index.html>`_ OpenStack-Ansible Installation Guide
Chapter 4. Deployment configuration
-----------------------------------
========================
Deployment configuration
========================
.. toctree::
:maxdepth: 2
configure-initial.rst
configure-networking.rst
configure-hostlist.rst
configure-user-config-examples.rst
configure-creds.rst
configure-glance.rst
configure-openstack.rst
configure-sslcertificates.rst
configure-configurationintegrity.rst
.. figure:: figures/workflow-configdeployment.png
:width: 100%
**Figure 4.1. Installation work flow**
.. image:: figures/workflow-configdeployment.png
Installation workflow
Ansible references a handful of files containing mandatory and optional
configuration directives. These files must be modified to define the
target environment before running the Ansible playbooks. Perform the
following tasks:
target environment before running the Ansible playbooks. Configuration
tasks include:
- Configure Target host networking to define bridge interfaces and
- target host networking to define bridge interfaces and
networks
- Configure a list of target hosts on which to install the software
- a list of target hosts on which to install the software
- Configure virtual and physical network relationships for OpenStack
- virtual and physical network relationships for OpenStack
Networking (neutron)
- (Optional) Configure the hypervisor
- (Optional) Configure Block Storage (cinder) to use the NetApp back
end
- (Optional) Configure Block Storage (cinder) backups.
- (Optional) Configure Block Storage availability zones
- Configure passwords for all services
- passwords for all services
--------------

View File

@ -22,19 +22,36 @@ The installation process requires running three main playbooks:
(ceilometer and aodh), Object Storage service (swift), and OpenStack
bare metal provisioning (ironic).
Installation process
~~~~~~~~~~~~~~~~~~~~
Checking the integrity of your configuration files
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Before running any playbook, check the integrity of your configuration files:
#. Ensure all files edited in ``/etc/`` are Ansible
YAML compliant. Guidelines can be found here:
`<http://docs.ansible.com/ansible/YAMLSyntax.html>`_
#. Check the integrity of your YAML files:
.. note:: Here is an online linter: `<http://www.yamllint.com/>`_
#. Run the playbook with the ``--syntax-check`` flag:
.. code-block:: shell-session
# openstack-ansible setup-infrastructure.yml --syntax-check
#. Recheck that all indentation is correct.
.. note::
The syntax of the configuration files can be correct
while not being meaningful for OpenStack-Ansible.
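For example, the following snippet is valid YAML, but the misspelled group
name would not match anything OpenStack-Ansible recognizes (a hypothetical
illustration; the host name and address are placeholders):
.. code-block:: yaml
   # Valid YAML, yet the key should read shared-infra_hosts; a typo like this
   # passes a syntax check while not being meaningful to OpenStack-Ansible.
   shared-infra_host:
     infra1:
       ip: 172.29.236.101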
Run playbooks
~~~~~~~~~~~~~
.. figure:: figures/installation-workflow-run-playbooks.png
:scale: 100
.. note::
Before continuing, validate the configuration files using the
guidance in `Checking the integrity of your configuration files`_.
.. _Checking the integrity of your configuration files: ../install-guide/configure-configurationintegrity.html
:width: 100%
#. Change to the ``/opt/openstack-ansible/playbooks`` directory.
@ -132,7 +149,7 @@ Verifying OpenStack operation
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. figure:: figures/installation-workflow-verify-openstack.png
:scale: 100
:width: 100%
.. TODO Add procedures to test different layers of the OpenStack environment

View File

@ -51,7 +51,7 @@ the role is to apply as many security configurations as possible without
disrupting the operation of an OpenStack deployment.
Refer to the documentation on :ref:`security_hardening` for more information
on the role and how to enable it in OpenStack-Ansible.
on the role in OpenStack-Ansible.
Least privilege
~~~~~~~~~~~~~~~

View File

@ -1,175 +0,0 @@
`Home <index.html>`_ OpenStack-Ansible Installation Guide
Configuring the Image (glance) service
======================================
In an all-in-one deployment with a single infrastructure node, the Image
(glance) service uses the local file system on the target host to store images.
When deploying production clouds, we recommend backing glance with a
swift backend or some form of shared storage.
Configuring default and additional stores
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
OpenStack-Ansible provides two configurations for controlling where glance
stores files: the default store and additional stores. glance stores images in
file-based storage by default. Two additional stores, ``http`` and ``cinder``
(Block Storage), are also enabled by default.
You can choose alternative default stores and alternative additional stores.
For example, a deployer that uses Ceph may configure the following Ansible
variables:
.. code-block:: yaml
glance_default_store = rbd
glance_additional_stores:
- swift
- http
- cinder
The configuration above configures glance to use ``rbd`` (Ceph) by
default, but ``glance_additional_stores`` list enables ``swift``,
``http`` and ``cinder`` stores in the glance
configuration files.
The following example sets glance to use the ``images`` pool.
This example uses cephx authentication and requires an existing ``glance``
account for the ``images`` pool.
In ``user_variables.yml``:
.. code-block:: yaml
glance_default_store: rbd
ceph_mons:
- 172.29.244.151
- 172.29.244.152
- 172.29.244.153
You can use the following variables if you are not using the defaults:
.. code-block:: yaml
glance_ceph_client: <glance-username>
glance_rbd_store_pool: <glance-pool-name>
glance_rbd_store_chunk_size: <chunk-size>
Storing images in Cloud Files
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The following procedure describes how to modify the
``/etc/openstack_deploy/user_variables.yml`` file to enable Cloud Files
usage.
#. Change the default store to use Object Storage (swift), the
underlying architecture of Cloud Files:
.. code-block:: yaml
glance_default_store: swift
#. Set the appropriate authentication URL and version:
.. code-block:: yaml
glance_swift_store_auth_version: 2
glance_swift_store_auth_address: https://127.0.0.1/v2.0
#. Set the swift account credentials:
.. code-block:: yaml
# Replace this capitalized variables with actual data.
glance_swift_store_user: GLANCE_SWIFT_TENANT:GLANCE_SWIFT_USER
glance_swift_store_key: SWIFT_PASSWORD_OR_KEY
#. Change the ``glance_swift_store_endpoint_type`` from the default
``internalURL`` settings to ``publicURL`` if needed.
.. code-block:: yaml
glance_swift_store_endpoint_type: publicURL
#. Define the store name:
.. code-block:: yaml
glance_swift_store_container: STORE_NAME
Replace ``STORE_NAME`` with the container name in swift to be
used for storing images. If the container does not exist, it is
automatically created.
#. Define the store region:
.. code-block:: yaml
glance_swift_store_region: STORE_REGION
Replace ``STORE_REGION`` if needed.
#. (Optional) Set the paste deploy flavor:
.. code-block:: yaml
glance_flavor: GLANCE_FLAVOR
By default, glance uses caching and authenticates with the
Identity (keystone) service. The default maximum size of the image cache is 10GB.
The default glance container size is 12GB. In some
configurations, glance attempts to cache an image
which exceeds the available disk space. If necessary, you can disable
caching. For example, to use Identity without caching, replace
``GLANCE_FLAVOR`` with ``keystone``:
.. code-block:: yaml
glance_flavor: keystone
Or, to disable both authentication and caching, set
``GLANCE_FLAVOR`` to no value:
.. code-block:: yaml
glance_flavor:
This option is set by default to use authentication and cache
management in the ``playbooks/roles/os_glance/defaults/main.yml``
file. To override the default behavior, set ``glance_flavor`` to a
different value in ``/etc/openstack_deploy/user_variables.yml``.
The possible values for ``GLANCE_FLAVOR`` are:
- (Nothing)
- ``caching``
- ``cachemanagement``
- ``keystone``
- ``keystone+caching``
- ``keystone+cachemanagement`` (default)
- ``trusted-auth``
- ``trusted-auth+cachemanagement``
Special considerations
~~~~~~~~~~~~~~~~~~~~~~
If the swift password or key contains a dollar sign (``$``), it must
be escaped with an additional dollar sign (``$$``). For example, a password of
``super$ecure`` would need to be entered as ``super$$ecure``. This is
necessary due to the way `oslo.config formats strings`_.
.. _oslo.config formats strings: https://bugs.launchpad.net/oslo-incubator/+bug/1259729
--------------
.. include:: navigation.txt