[docs] Applying edits to the OSA install guide: overview
Bug: #1628958
Change-Id: Id41224acc3d54a89b28c9610ec205be51ab5fb51

@@ -2,6 +2,10 @@

Installation Guide
==================

This guide provides instructions for performing an OpenStack-Ansible
installation in a test environment and a production environment, and is
intended for deployers.

.. toctree::
   :maxdepth: 2

@@ -4,84 +4,82 @@

Network architecture
====================

Although Ansible automates most deployment operations, networking on target
hosts requires manual configuration because it varies from one use case to
another. This section describes the network configuration that must be
implemented on all target hosts.

For more information about how networking works, see :ref:`network-appendix`.

Host network bridges
~~~~~~~~~~~~~~~~~~~~

OpenStack-Ansible uses bridges to connect physical and logical network
interfaces on the host to virtual network interfaces within containers.

Target hosts are configured with the following network bridges:

* LXC internal: ``lxcbr0``

  The ``lxcbr0`` bridge is **required**, but OpenStack-Ansible configures it
  automatically. It provides external (typically Internet) connectivity to
  containers.

  This bridge does not directly attach to any physical or logical interfaces
  on the host because iptables handles connectivity. It attaches to ``eth0``
  in each container.

  The container network that the bridge attaches to is configurable in the
  ``openstack_user_config.yml`` file in the ``provider_networks`` dictionary.

* Container management: ``br-mgmt``

  The ``br-mgmt`` bridge is **required**. It provides management of and
  communication between the infrastructure and OpenStack services.

  The bridge attaches to a physical or logical interface, typically a
  ``bond0`` VLAN subinterface. It also attaches to ``eth1`` in each container.

  The container network interface that the bridge attaches to is configurable
  in the ``openstack_user_config.yml`` file.

* Storage: ``br-storage``

  The ``br-storage`` bridge is **optional**, but recommended for production
  environments. It provides segregated access between OpenStack services and
  Block Storage devices.

  The bridge attaches to a physical or logical interface, typically a
  ``bond0`` VLAN subinterface. It also attaches to ``eth2`` in each
  associated container.

  The container network interface that the bridge attaches to is configurable
  in the ``openstack_user_config.yml`` file.

* OpenStack Networking tunnel: ``br-vxlan``

  The ``br-vxlan`` bridge is **required** if the environment is configured to
  allow projects to create virtual networks. It provides the interface for
  virtual (VXLAN) tunnel networks.

  The bridge attaches to a physical or logical interface, typically a
  ``bond1`` VLAN subinterface. It also attaches to ``eth10`` in each
  associated container.

  The container network interface that the bridge attaches to is configurable
  in the ``openstack_user_config.yml`` file.

* OpenStack Networking provider: ``br-vlan``

  The ``br-vlan`` bridge is **required**. It provides infrastructure for VLAN
  tagged or flat (no VLAN tag) networks.

  The bridge attaches to a physical or logical interface, typically ``bond1``.
  It attaches to ``eth11`` for VLAN type networks in each associated
  container. It is not assigned an IP address because it handles only
  layer 2 connectivity.

  The container network interface that the bridge attaches to is configurable
  in the ``openstack_user_config.yml`` file, as illustrated in the sketch
  after this list.

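The mapping between a host bridge and the container interface that it serves
is defined in the ``provider_networks`` dictionary. The following fragment is
a minimal sketch of such an entry for ``br-mgmt``; the key names follow the
example configuration shipped with OpenStack-Ansible, but the values shown
here are placeholders and should be verified against your release.

.. code-block:: yaml

   # Illustrative provider_networks entry in openstack_user_config.yml.
   # The IP queue name, group bindings, and flags are examples only.
   global_overrides:
     provider_networks:
       - network:
           container_bridge: "br-mgmt"     # host bridge the containers attach to
           container_type: "veth"          # virtual interface type
           container_interface: "eth1"     # interface name inside each container
           ip_from_q: "container"          # address pool used for this network
           type: "raw"
           group_binds:
             - all_containers
             - hosts
           is_container_address: true
           is_ssh_address: true
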
@@ -7,56 +7,51 @@ automation engine to deploy an OpenStack environment on Ubuntu Linux.

For isolation and ease of maintenance, you can install OpenStack components
into Linux containers (LXC).

Ansible
~~~~~~~

Ansible provides an automation platform to simplify system and application
deployment. Ansible manages systems by using Secure Shell (SSH)
instead of unique protocols that require remote daemons or agents.

Ansible uses playbooks written in the YAML language for orchestration.
For more information, see `Ansible - Intro to
Playbooks <http://docs.ansible.com/playbooks_intro.html>`_.

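The following playbook is for illustration only and is not part of
OpenStack-Ansible; the host group and package name are placeholders that show
the general structure of a playbook.

.. code-block:: yaml

   # Illustrative playbook; "web_servers" and the ntp package are
   # placeholders, not part of the OpenStack-Ansible playbooks.
   - hosts: web_servers
     become: true
     tasks:
       - name: Ensure the NTP service is installed
         apt:
           name: ntp
           state: present
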
This guide refers to the following types of hosts:

* `Deployment host`, which runs the Ansible playbooks
* `Target hosts`, where Ansible installs OpenStack services and infrastructure
  components

Linux containers (LXC)
~~~~~~~~~~~~~~~~~~~~~~

Containers provide operating-system-level virtualization by enhancing
the concept of ``chroot`` environments. Containers isolate resources and file
systems for a particular group of processes without the overhead and
complexity of virtual machines. They access the same kernel, devices,
and file systems on the underlying host and provide a thin operational
layer built around a set of rules.

The LXC project implements operating-system-level
virtualization on Linux by using kernel namespaces, and it includes the
following features:

* Resource isolation, including CPU, memory, block I/O, and network, by
  using ``cgroups``
* Selective connectivity to physical and virtual network devices on the
  underlying physical host
* Support for a variety of backing stores, including Logical Volume Manager
  (LVM)
* Built on a foundation of stable Linux technologies with an active
  development and support community

Installation workflow
~~~~~~~~~~~~~~~~~~~~~

The following diagram shows the general workflow of an OpenStack-Ansible
installation.

.. figure:: figures/installation-workflow-overview.png
   :width: 100%

@@ -8,41 +8,41 @@ network recommendations for running OpenStack in a production environment.

Software requirements
~~~~~~~~~~~~~~~~~~~~~

Ensure that all hosts within an OpenStack-Ansible (OSA) environment meet the
following minimum requirements:

* Ubuntu 16.04 LTS (Xenial Xerus) or Ubuntu 14.04 LTS (Trusty Tahr)

  * OpenStack-Ansible is tested regularly against the latest point releases of
    Ubuntu 16.04 LTS and Ubuntu 14.04 LTS.
  * Linux kernel version ``3.13.0-34-generic`` or later is required.
  * For Trusty hosts, you must enable the ``trusty-backports``
    repositories in ``/etc/apt/sources.list`` or
    ``/etc/apt/sources.list.d/``. For detailed instructions, see the
    `Ubuntu documentation <https://help.ubuntu.com/community/
    UbuntuBackports#Enabling_Backports_Manually>`_.

* Secure Shell (SSH) client and server that support public key
  authentication

* Network Time Protocol (NTP) client for time synchronization (such as
  ``ntpd`` or ``chronyd``)

* Python 2.7.*x*

* en_US.UTF-8 as the locale

CPU recommendations
~~~~~~~~~~~~~~~~~~~

* Compute hosts should have multicore processors with `hardware-assisted
  virtualization extensions`_. These extensions provide a significant
  performance boost and improve security in virtualized environments.

* Infrastructure (control plane) hosts should have multicore processors for
  best performance. Some services, such as MySQL, benefit from additional
  CPU cores and other technologies, such as `Hyper-threading`_.

.. _hardware-assisted virtualization extensions: https://en.wikipedia.org/wiki/Hardware-assisted_virtualization
.. _Hyper-threading: https://en.wikipedia.org/wiki/Hyper-threading

@@ -54,46 +54,45 @@ Different hosts have different disk space requirements based on the
services running on each host:

Deployment hosts
  Ten GB of disk space is sufficient for holding the OpenStack-Ansible
  repository content and additional required software.

Compute hosts
  Disk space requirements depend on the total number of instances running on
  each host and the amount of disk space allocated to each instance. Compute
  hosts must have a minimum of 1 TB of disk space available. Consider disks
  that provide higher I/O throughput with lower latency, such as SSD drives
  in a RAID array.

Storage hosts
  Hosts running the Block Storage (cinder) service often consume the most
  disk space in OpenStack environments. Storage hosts must have a minimum of
  1 TB of disk space. As with Compute hosts, choose disks that provide the
  highest I/O throughput with the lowest latency.

Infrastructure (control plane) hosts
  The OpenStack control plane contains storage-intensive services, such as
  the Image service (glance), and MariaDB. These hosts must have a minimum
  of 100 GB of disk space.

Logging hosts
  An OpenStack-Ansible deployment generates a significant amount of log
  information. Logs come from a variety of sources, including services
  running in containers, the containers themselves, and the physical hosts.
  Logging hosts need sufficient disk space to hold live and rotated
  (historical) log files. In addition, the storage performance must be able
  to keep pace with the log traffic coming from various hosts and containers
  within the OpenStack environment. Reserve a minimum of 50 GB of disk space
  for storing logs on the logging hosts.

Hosts that provide Block Storage volumes must have Logical Volume
Manager (LVM) support. Ensure that hosts have a ``cinder-volume`` volume
group that OpenStack-Ansible can configure for use with Block Storage.

Each infrastructure (control plane) host runs services inside LXC containers.
The container file systems are deployed by default on the root file system
of each control plane host. You have the option to deploy those container
file systems into logical volumes by creating a volume group called ``lxc``.
OpenStack-Ansible creates a 5 GB logical volume for the file system of each
container running on the host.

@@ -107,18 +106,17 @@ Network recommendations
problems when your environment grows.

For the best performance, reliability, and scalability in a production
environment, consider a network configuration that contains
the following features:

* Bonded network interfaces, which increase performance, reliability, or
  both (depending on the bonding architecture)

* VLAN offloading, which increases performance by adding and removing VLAN
  tags in hardware, rather than in the server's main CPU

* Gigabit or 10 Gigabit Ethernet, which supports higher network speeds and
  can also improve storage performance when using the Block Storage service

* Jumbo frames, which increase network performance by allowing more data to
  be sent in each packet

@@ -6,84 +6,83 @@ Service architecture

Introduction
~~~~~~~~~~~~

OpenStack-Ansible has a flexible deployment configuration model that can
deploy all services either in separate LXC containers or on designated hosts
without using LXC containers, and all network traffic either on a single
network interface or on many network interfaces.

This flexibility enables deployers to choose how to deploy OpenStack in the
appropriate way for the specific use case.

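As an orientation aid, the fragment below is a minimal sketch of how target
hosts can be mapped to service groups in ``openstack_user_config.yml``. The
group names follow the sample configuration shipped with OpenStack-Ansible;
the host names and IP addresses are placeholders only.

.. code-block:: yaml

   # Illustrative host-to-group mapping; adjust names and addresses to
   # match your own environment.
   shared-infra_hosts:
     infra1:
       ip: 172.29.236.11
   compute_hosts:
     compute1:
       ip: 172.29.236.21
   storage_hosts:
     storage1:
       ip: 172.29.236.31
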
The following sections describe the services that OpenStack-Ansible deploys.

Infrastructure services
~~~~~~~~~~~~~~~~~~~~~~~

OpenStack-Ansible deploys the following infrastructure components:

* MariaDB with Galera

  All OpenStack services require an underlying database. MariaDB with Galera
  implements a multimaster database configuration, which simplifies its use
  as a highly available database with a simple failover model.

* RabbitMQ

  OpenStack services use RabbitMQ for Remote Procedure Calls (RPC).
  OSA deploys RabbitMQ in a clustered configuration with all
  queues mirrored between the cluster nodes. Because Telemetry (ceilometer)
  message queue traffic is quite heavy, for large environments we recommend
  separating Telemetry notifications into a separate RabbitMQ cluster.

* Memcached

  OpenStack services use Memcached for in-memory caching, which accelerates
  transactions. For example, the OpenStack Identity service (keystone) uses
  Memcached for caching authentication tokens, which ensures that token
  validation does not have to complete a disk or database transaction every
  time the service is asked to validate a token.

* Repository

  The repository holds the reference set of artifacts that are used for
  the installation of the environment. The artifacts include:

  * A Git repository that contains a copy of the source code that is used
    to prepare the packages for all OpenStack services
  * Python wheels for all services that are deployed in the environment
  * An apt/yum proxy cache that is used to cache distribution packages
    installed in the environment

* Load balancer

  At least one load balancer is required for a deployment. OSA
  provides a deployment of `HAProxy`_, but we recommend using a physical
  load balancing appliance for production environments.

* Utility container

  If a tool or object does not require a dedicated container, or if it is
  impractical to create a new container for a single tool or object, it is
  installed in the utility container. The utility container is also used when
  tools cannot be installed directly on a host. The utility container is
  prepared with the appropriate credentials and clients to administer the
  OpenStack environment. It is set to automatically use the internal service
  endpoints.

* Log aggregation host

  An rsyslog service is optionally set up to receive rsyslog traffic from all
  hosts and containers. You can replace rsyslog with any alternative log
  receiver.

* Unbound DNS container

  Containers running an `Unbound DNS`_ caching service can optionally be
  deployed to cache DNS lookups and to handle internal DNS name resolution.
  We recommend using this service for large-scale production environments
  because the deployment will be significantly faster. If this service is not
  used, OSA modifies ``/etc/hosts`` entries for all hosts in the environment.

.. _HAProxy: http://www.haproxy.org/
.. _Unbound DNS: https://www.unbound.net/

@@ -91,7 +90,7 @@ The following infrastructure components are deployed by OpenStack-Ansible:

OpenStack services
~~~~~~~~~~~~~~~~~~

OSA can deploy the following OpenStack services:

* Bare Metal (`ironic`_)
* Block Storage (`cinder`_)

@@ -2,9 +2,6 @@

Storage architecture
====================

OpenStack has multiple storage realms to consider:

* Block Storage (cinder)

@@ -16,23 +13,23 @@ Block Storage (cinder)
~~~~~~~~~~~~~~~~~~~~~~

The Block Storage (cinder) service manages volumes on storage devices in an
environment. In a production environment, the device presents storage via a
storage protocol (for example, NFS, iSCSI, or Ceph RBD) to a storage network
(``br-storage``) and a storage management API to the management network
(``br-mgmt``). Instances are connected to the volumes via the storage network
by the hypervisor on the Compute host.

The following diagram illustrates how Block Storage is connected to instances.

.. figure:: figures/production-storage-cinder.png
   :width: 600px

The diagram shows the following steps.

+----+---------------------------------------------------------------------+
| 1. | A volume is created by the assigned ``cinder-volume`` service       |
|    | using the appropriate `cinder driver`_. The volume is created by    |
|    | using an API that is presented to the management network.           |
+----+---------------------------------------------------------------------+
| 2. | After the volume is created, the ``nova-compute`` service connects  |
|    | the Compute host hypervisor to the volume via the storage network.  |
+----+---------------------------------------------------------------------+

@@ -43,29 +40,28 @@ diagram illustrates how Block Storage is connected to instances.

.. important::

   The `LVMVolumeDriver`_ is designed as a reference driver implementation,
   which we do not recommend for production usage. The LVM storage back end
   is a single-server solution that provides no high-availability options.
   If the server becomes unavailable, then all volumes managed by the
   ``cinder-volume`` service running on that server become unavailable.
   Upgrading the operating system packages (for example, kernel or iSCSI)
   on the server causes storage connectivity outages because the iSCSI
   service (or the host) restarts.

Because of a `limitation with container iSCSI connectivity`_, you must deploy
the ``cinder-volume`` service directly on a physical host (not into a
container) when using storage back ends that connect via iSCSI. This includes
the `LVMVolumeDriver`_ and many of the drivers for commercial storage devices.

.. note::

   The ``cinder-volume`` service does not run in a highly available
   configuration. When the ``cinder-volume`` service is configured to manage
   volumes on the same back end from multiple hosts or containers, one service
   is scheduled to manage the life cycle of the volume until an alternative
   service is assigned to do so. This assignment can be made through the
   `cinder-manage CLI tool`_. This configuration might change if the
   `cinder volume active-active support spec`_ is implemented.

.. _cinder driver: http://docs.openstack.org/developer/cinder/drivers.html

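As a rough illustration of how a Block Storage back end is declared to
OpenStack-Ansible, the following ``openstack_user_config.yml`` fragment
sketches an LVM back end on a single storage host. The host name, address,
and key names are based on the sample configuration shipped with
OpenStack-Ansible and may differ between releases; the LVM back end is shown
only because it is the reference driver discussed above, not because it is
recommended for production.

.. code-block:: yaml

   # Illustrative storage host entry; verify the exact keys against the
   # sample configuration for your OpenStack-Ansible release.
   storage_hosts:
     storage1:
       ip: 172.29.236.31
       container_vars:
         cinder_backends:
           limit_container_types: cinder_volume
           lvm:
             volume_group: cinder-volumes
             volume_driver: cinder.volume.drivers.lvm.LVMVolumeDriver
             volume_backend_name: LVM_iSCSI
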
@@ -78,7 +74,7 @@ Object Storage (swift)
~~~~~~~~~~~~~~~~~~~~~~

The Object Storage (swift) service implements a highly available, distributed,
eventually consistent object/blob store that is accessible via HTTP/HTTPS.

The following diagram illustrates how data is accessed and replicated.

@@ -86,53 +82,50 @@ The following diagram illustrates how data is accessed and replicated.
   :width: 600px

The ``swift-proxy`` service is accessed by clients via the load balancer
on the management network (``br-mgmt``). The ``swift-proxy`` service
communicates with the Account, Container, and Object services on the
Object Storage hosts via the storage network (``br-storage``). Replication
between the Object Storage hosts is done via the replication network
(``br-repl``).

Image storage (glance)
~~~~~~~~~~~~~~~~~~~~~~

The Image service (glance) can be configured to store images on a variety of
storage back ends supported by the `glance_store drivers`_.

.. important::

   When the File System store is used, the Image service has no mechanism of
   its own to replicate the image between Image service hosts. We recommend
   using a shared storage back end (via a file system mount) to ensure that
   all ``glance-api`` services have access to all images. Doing so prevents
   losing access to images when an infrastructure (control plane) host is
   lost.

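As an illustration of one way to provide such a shared back end, the
following ``user_variables.yml`` fragment sketches an NFS mount for the
``glance-api`` hosts. The variable names reflect common OpenStack-Ansible
examples but are an assumption here, and the server address and export path
are placeholders; confirm the supported options for your release before use.

.. code-block:: yaml

   # Illustrative shared File System store for the Image service; the
   # variable names, server address, and paths are assumptions/placeholders.
   glance_default_store: file
   glance_nfs_client:
     - server: "172.29.244.15"
       remote_path: "/srv/glance-images"
       local_path: "/var/lib/glance/images"
       type: "nfs"
       options: "_netdev,auto"
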
The following diagram illustrates the interactions between the Image service,
the storage device, and the ``nova-compute`` service when an instance is
created.

.. figure:: figures/production-storage-glance.png
   :width: 600px

The diagram shows the following steps.

+----+---------------------------------------------------------------------+
| 1. | When a client requests an image, the ``glance-api`` service         |
|    | accesses the appropriate store on the storage device over the       |
|    | storage network (``br-storage``) and pulls it into its cache. When  |
|    | the same image is requested again, it is given to the client        |
|    | directly from the cache.                                            |
+----+---------------------------------------------------------------------+
| 2. | When an instance is scheduled for creation on a Compute host, the   |
|    | ``nova-compute`` service requests the image from the ``glance-api`` |
|    | service over the management network (``br-mgmt``).                  |
+----+---------------------------------------------------------------------+
| 3. | After the image is retrieved, the ``nova-compute`` service stores   |
|    | the image in its own image cache. When another instance is created  |
|    | with the same image, the image is retrieved from the local base     |
|    | image cache.                                                         |
+----+---------------------------------------------------------------------+

.. _glance_store drivers: http://docs.openstack.org/developer/glance_store/drivers/

@@ -145,33 +138,30 @@ with root or ephemeral disks, the ``nova-compute`` service manages these
allocations using its ephemeral disk storage location.

In many environments, the ephemeral disks are stored on the Compute host's
local disks, but for production environments we recommend that the Compute
hosts be configured to use a shared storage subsystem instead. A shared
storage subsystem allows quick, live instance migration between Compute
hosts, which is useful when the administrator needs to perform maintenance
on the Compute host and wants to evacuate it. Using a shared storage
subsystem also allows the recovery of instances when a Compute host goes
offline. The administrator is able to evacuate the instance to another
Compute host and boot it up again. The following diagram illustrates the
interactions between the storage device, the Compute host, the hypervisor,
and the instance.

.. figure:: figures/production-storage-nova.png
   :width: 600px

The diagram shows the following steps.

+----+---------------------------------------------------------------------+
| 1. | The Compute host is configured with access to the storage device.   |
|    | The Compute host accesses the storage space via the storage network |
|    | (``br-storage``) by using a storage protocol (for example, NFS,     |
|    | iSCSI, or Ceph RBD).                                                 |
+----+---------------------------------------------------------------------+
| 2. | The ``nova-compute`` service configures the hypervisor to present   |
|    | the allocated instance disk as a device to the instance.            |
+----+---------------------------------------------------------------------+
| 3. | The hypervisor presents the disk as a device to the instance.       |
+----+---------------------------------------------------------------------+