[docs] Provide example configurations

This patch provides the example configurations for the layouts
set out in Appendices A and B, and revises the configuration section
to refer to the appendices for examples.

These aim to help new deployers understand how their desired
environment layout translates into actual configuration.

Change-Id: I6f9bfb4069426180914396cca7ff5b4631098165
Jesse Pretorius 2016-10-12 15:13:56 +01:00 committed by Jesse Pretorius (odyssey4me)
parent 5686c70033
commit 987bab76be
8 changed files with 604 additions and 147 deletions


@@ -7,21 +7,22 @@ Appendix B: Example production environment configuration
Introduction
~~~~~~~~~~~~
This appendix describes an example production environment for a working
OpenStack-Ansible (OSA) deployment with high availability services.

This example environment has the following characteristics:

* Three infrastructure (control plane) hosts
* Two compute hosts
* One storage host
* One NFS storage device
* One log aggregation host
* Two network agent hosts
* Multiple Network Interface Cards (NICs) configured as bonded pairs for each
  host
* Full compute kit with the Telemetry service (ceilometer) included,
  with NFS configured as a storage back end for the Image (glance) and Block
  Storage (cinder) services
* Internet access via the router address 172.29.236.1 on the
Management Network
.. image:: figures/arch-layout-production.png
   :width: 100%
@@ -29,19 +30,82 @@ A production environment has the following characteristics:
Network configuration
~~~~~~~~~~~~~~~~~~~~~
Network CIDR/VLAN assignments
-----------------------------
+------------------------+-----------------+------+
| Network                | CIDR            | VLAN |
+========================+=================+======+
| Management Network     | 172.29.236.0/22 | 10   |
+------------------------+-----------------+------+
| Tunnel (VXLAN) Network | 172.29.240.0/22 | 30   |
+------------------------+-----------------+------+
| Storage Network        | 172.29.244.0/22 | 20   |
+------------------------+-----------------+------+
IP assignments
--------------
+------------------+----------------+-------------------+----------------+
| Host name        | Management IP  | Tunnel (VXLAN) IP | Storage IP     |
+==================+================+===================+================+
| lb_vip_address   | 172.29.236.9   |                   |                |
+------------------+----------------+-------------------+----------------+
| infra1           | 172.29.236.11  |                   |                |
+------------------+----------------+-------------------+----------------+
| infra2           | 172.29.236.12  |                   |                |
+------------------+----------------+-------------------+----------------+
| infra3           | 172.29.236.13  |                   |                |
+------------------+----------------+-------------------+----------------+
| log1             | 172.29.236.14  |                   |                |
+------------------+----------------+-------------------+----------------+
| NFS Storage      |                |                   | 172.29.244.15  |
+------------------+----------------+-------------------+----------------+
| compute1         | 172.29.236.16  | 172.29.240.16     | 172.29.244.16  |
+------------------+----------------+-------------------+----------------+
| compute2         | 172.29.236.17  | 172.29.240.17     | 172.29.244.17  |
+------------------+----------------+-------------------+----------------+
Host network configuration
--------------------------
.. literalinclude:: ../../../etc/network/interfaces.d/openstack_interface.cfg.prod.example
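
The interfaces example file referenced above is not shown in this patch. As a
rough sketch only (assuming the bonded-pair layout described earlier and the
``br-mgmt`` bridge naming used throughout this guide), the management bridge
portion of such an ``/etc/network/interfaces`` configuration might look like
this:

.. code-block:: text

   # VLAN 10 sub-interface on the bonded pair, used by the management bridge
   auto bond0.10
   iface bond0.10 inet manual
       vlan-raw-device bond0

   # Management bridge with the host's static management address (infra1)
   auto br-mgmt
   iface br-mgmt inet static
       bridge_ports bond0.10
       bridge_stp off
       address 172.29.236.11
       netmask 255.255.252.0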

Deployment configuration
~~~~~~~~~~~~~~~~~~~~~~~~~
Environment layout
------------------

The ``/etc/openstack_deploy/openstack_user_config.yml`` file defines the
environment layout.

The following configuration describes the layout for this environment.

.. literalinclude:: ../../../etc/openstack_deploy/openstack_user_config.yml.prod.example
Environment customizations
--------------------------
The optionally deployed files in ``/etc/openstack_deploy/env.d`` allow the
customization of Ansible groups. This allows the deployer to set whether
the services will run in a container (the default) or directly on the host
(on metal).

For this environment, the ``cinder-volume`` service runs in a container on
the infrastructure hosts. To achieve this, implement
``/etc/openstack_deploy/env.d/cinder.yml`` with the following content:
.. literalinclude:: ../../../etc/openstack_deploy/env.d/cinder-volume.yml.container.example
User variables
--------------
The ``/etc/openstack_deploy/user_variables.yml`` file defines the global
overrides for the default variables.
For this environment, the load balancer is implemented on the infrastructure
hosts. To ensure that keepalived is configured alongside HAProxy, set the
following content in ``/etc/openstack_deploy/user_variables.yml``:
.. literalinclude:: ../../../etc/openstack_deploy/user_variables.yml.prod.example


@@ -7,16 +7,18 @@ Appendix A: Example test environment configuration
Introduction
~~~~~~~~~~~~
This appendix describes an example test environment for a working
OpenStack-Ansible (OSA) deployment with a small number of servers.

This example environment has the following characteristics:

* One infrastructure (control plane) host (8 vCPU, 8 GB RAM, 60 GB HDD)
* One compute host (8 vCPU, 8 GB RAM, 60 GB HDD)
* One Network Interface Card (NIC) for each host
* A basic compute kit environment, with the Image (glance) and Compute (nova)
services set to use file-backed storage.
* Internet access via the router address 172.29.236.1 on the
Management Network
.. image:: figures/arch-layout-test.png
   :width: 100%
@@ -25,18 +27,67 @@ A test environment has the following characteristics:
Network configuration
~~~~~~~~~~~~~~~~~~~~~
Network CIDR/VLAN assignments
-----------------------------
+------------------------+-----------------+------+
| Network                | CIDR            | VLAN |
+========================+=================+======+
| Management Network     | 172.29.236.0/22 | 10   |
+------------------------+-----------------+------+
| Tunnel (VXLAN) Network | 172.29.240.0/22 | 30   |
+------------------------+-----------------+------+
| Storage Network        | 172.29.244.0/22 | 20   |
+------------------------+-----------------+------+
IP assignments
--------------
+------------------+----------------+-------------------+----------------+
| Host name        | Management IP  | Tunnel (VXLAN) IP | Storage IP     |
+==================+================+===================+================+
| infra1           | 172.29.236.11  |                   |                |
+------------------+----------------+-------------------+----------------+
| compute1         | 172.29.236.12  | 172.29.240.12     | 172.29.244.12  |
+------------------+----------------+-------------------+----------------+
| storage1         | 172.29.236.13  |                   | 172.29.244.13  |
+------------------+----------------+-------------------+----------------+
Host network configuration
--------------------------
.. literalinclude:: ../../../etc/network/interfaces.d/openstack_interface.cfg.test.example
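
The referenced test example file is likewise not shown in this patch. As a
rough sketch only (assuming a single NIC named ``eth0``, which is not
specified in this patch), VLAN sub-interfaces can provide the per-network
bridges on a one-NIC host:

.. code-block:: text

   # VLAN 30 sub-interface for the VXLAN tunnel network on the single NIC
   auto eth0.30
   iface eth0.30 inet manual
       vlan-raw-device eth0

   # Tunnel bridge with the compute host's static tunnel address (compute1)
   auto br-vxlan
   iface br-vxlan inet static
       bridge_ports eth0.30
       bridge_stp off
       address 172.29.240.12
       netmask 255.255.252.0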

Deployment configuration
~~~~~~~~~~~~~~~~~~~~~~~~~
Environment layout
------------------

The ``/etc/openstack_deploy/openstack_user_config.yml`` file defines the
environment layout.

The following configuration describes the layout for this environment.

.. literalinclude:: ../../../etc/openstack_deploy/openstack_user_config.yml.test.example
Environment customizations
--------------------------
The optionally deployed files in ``/etc/openstack_deploy/env.d`` allow the
customization of Ansible groups. This allows the deployer to set whether
the services will run in a container (the default) or directly on the host
(on metal).

For this environment, you do not need the ``/etc/openstack_deploy/env.d``
folder, as the defaults set by OpenStack-Ansible are suitable.
User variables
--------------
The ``/etc/openstack_deploy/user_variables.yml`` file defines the global
overrides for the default variables.

For this environment, you do not need the
``/etc/openstack_deploy/user_variables.yml`` file, as the defaults set by
OpenStack-Ansible are suitable.


@@ -49,16 +49,20 @@ these services include databases, Memcached, and RabbitMQ. Several other
host types contain other types of containers, and all of these are listed
in the ``openstack_user_config.yml`` file.
For examples, please see :ref:`test-environment-config` and
:ref:`production-environment-config`.
For details about how the inventory is generated from the environment
configuration, see :ref:`developer-inventory`.
Configuring additional services
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The files in ``/etc/openstack_deploy/conf.d`` provide example configurations
showing the correct host groups to use when installing additional services.
To add another service, add the host group, allocate hosts to it, and then
execute the playbooks, as shown in the sketch below.
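
As a hedged illustration only (the service and host group below are
hypothetical, not drawn from this patch), a ``conf.d`` entry follows the
same host-group pattern used in ``openstack_user_config.yml``:

.. code-block:: yaml

   # /etc/openstack_deploy/conf.d/<service>.yml
   # "example-service_hosts" is a placeholder; use the host group named in
   # the conf.d example file shipped with the service you are adding.
   example-service_hosts:
     infra1:
       ip: 172.29.236.11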
Configuring service credentials
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~


@@ -2,14 +2,7 @@
Network configuration
=====================
The following table shows the bridges that are to be configured on hosts.
+-------------+-----------------------+-------------------------------------+
| Bridge name | Best configured on    | With a static IP                    |
@@ -29,107 +22,5 @@ bridges that are to be configured on hosts.
|             | On every compute node | Never                               |
+-------------+-----------------------+-------------------------------------+

For use case examples, refer to :ref:`test-environment-config` and
:ref:`production-environment-config`.


@@ -0,0 +1,12 @@
---
# This file contains an example to show how to set
# the cinder-volume service to run in a container.
#
# Important note:
# When using LVM or any iSCSI-based cinder backends, such as NetApp with
# iSCSI protocol, the cinder-volume service *must* run on metal.
# Reference: https://bugs.launchpad.net/ubuntu/+source/lxc/+bug/1226855
container_skel:
  cinder_volumes_container:
    properties:
      is_metal: false
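
# For comparison, a hedged sketch (not part of the original file): when an
# LVM or other iSCSI-based back end is used, pin the service to the host
# instead by setting the property to true:
#
# container_skel:
#   cinder_volumes_container:
#     properties:
#       is_metal: true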


@@ -0,0 +1,282 @@
---
cidr_networks:
  container: 172.29.236.0/22
  tunnel: 172.29.240.0/22
  storage: 172.29.244.0/22

used_ips:
  - "172.29.236.1,172.29.236.50"
  - "172.29.240.1,172.29.240.50"
  - "172.29.244.1,172.29.244.50"
  - "172.29.248.1,172.29.248.50"

global_overrides:
  internal_lb_vip_address: 172.29.236.9
  external_lb_vip_address: openstack.example.com
  tunnel_bridge: "br-vxlan"
  management_bridge: "br-mgmt"
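  # Each provider network below maps a host bridge to a container interface.
  # "ip_from_q" assigns addresses from the named cidr_networks entry,
  # "group_binds" attaches the network to the listed inventory groups, and
  # "is_container_address"/"is_ssh_address" mark the network that carries
  # each container's management and SSH address.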
  provider_networks:
    - network:
        container_bridge: "br-mgmt"
        container_type: "veth"
        container_interface: "eth1"
        ip_from_q: "container"
        type: "raw"
        group_binds:
          - all_containers
          - hosts
        is_container_address: true
        is_ssh_address: true
    - network:
        container_bridge: "br-vxlan"
        container_type: "veth"
        container_interface: "eth10"
        ip_from_q: "tunnel"
        type: "vxlan"
        range: "1:1000"
        net_name: "vxlan"
        group_binds:
          - neutron_linuxbridge_agent
    - network:
        container_bridge: "br-vlan"
        container_type: "veth"
        container_interface: "eth12"
        host_bind_override: "eth12"
        type: "flat"
        net_name: "flat"
        group_binds:
          - neutron_linuxbridge_agent
    - network:
        container_bridge: "br-vlan"
        container_type: "veth"
        container_interface: "eth11"
        type: "vlan"
        range: "1:1"
        net_name: "vlan"
        group_binds:
          - neutron_linuxbridge_agent
    - network:
        container_bridge: "br-storage"
        container_type: "veth"
        container_interface: "eth2"
        ip_from_q: "storage"
        type: "raw"
        group_binds:
          - glance_api
          - cinder_api
          - cinder_volume
          - nova_compute

###
### Infrastructure
###

# galera, memcache, rabbitmq, utility
shared-infra_hosts:
  infra1:
    ip: 172.29.236.11
  infra2:
    ip: 172.29.236.12
  infra3:
    ip: 172.29.236.13

# repository (apt cache, python packages, etc)
repo-infra_hosts:
  infra1:
    ip: 172.29.236.11
  infra2:
    ip: 172.29.236.12
  infra3:
    ip: 172.29.236.13

# load balancer
# Ideally the load balancer should not use the Infrastructure hosts.
# Dedicated hardware is best for improved performance and security.
haproxy_hosts:
  infra1:
    ip: 172.29.236.11
  infra2:
    ip: 172.29.236.12
  infra3:
    ip: 172.29.236.13

# rsyslog server
log_hosts:
  log1:
    ip: 172.29.236.14

###
### OpenStack
###

# keystone
identity_hosts:
  infra1:
    ip: 172.29.236.11
  infra2:
    ip: 172.29.236.12
  infra3:
    ip: 172.29.236.13

# cinder api services
storage-infra_hosts:
  infra1:
    ip: 172.29.236.11
  infra2:
    ip: 172.29.236.12
  infra3:
    ip: 172.29.236.13

# glance
# The settings here are repeated for each infra host.
# They could instead be applied as global settings in
# user_variables, but are left here to illustrate that
# each container could have different storage targets.
image_hosts:
  infra1:
    ip: 172.29.236.11
    container_vars:
      limit_container_types: glance
      glance_nfs_client:
        - server: "172.29.244.15"
          remote_path: "/images"
          local_path: "/var/lib/glance/images"
          type: "nfs"
          options: "_netdev,auto"
  infra2:
    ip: 172.29.236.12
    container_vars:
      limit_container_types: glance
      glance_nfs_client:
        - server: "172.29.244.15"
          remote_path: "/images"
          local_path: "/var/lib/glance/images"
          type: "nfs"
          options: "_netdev,auto"
  infra3:
    ip: 172.29.236.13
    container_vars:
      limit_container_types: glance
      glance_nfs_client:
        - server: "172.29.244.15"
          remote_path: "/images"
          local_path: "/var/lib/glance/images"
          type: "nfs"
          options: "_netdev,auto"

# nova api, conductor, etc services
compute-infra_hosts:
  infra1:
    ip: 172.29.236.11
  infra2:
    ip: 172.29.236.12
  infra3:
    ip: 172.29.236.13

# heat
orchestration_hosts:
  infra1:
    ip: 172.29.236.11
  infra2:
    ip: 172.29.236.12
  infra3:
    ip: 172.29.236.13

# horizon
dashboard_hosts:
  infra1:
    ip: 172.29.236.11
  infra2:
    ip: 172.29.236.12
  infra3:
    ip: 172.29.236.13

# neutron server, agents (L3, etc)
network_hosts:
  infra1:
    ip: 172.29.236.11
  infra2:
    ip: 172.29.236.12
  infra3:
    ip: 172.29.236.13

# ceilometer (telemetry API)
metering-infra_hosts:
  infra1:
    ip: 172.29.236.11
  infra2:
    ip: 172.29.236.12
  infra3:
    ip: 172.29.236.13

# aodh (telemetry alarm service)
metering-alarm_hosts:
  infra1:
    ip: 172.29.236.11
  infra2:
    ip: 172.29.236.12
  infra3:
    ip: 172.29.236.13

# gnocchi (telemetry metrics storage)
metrics_hosts:
  infra1:
    ip: 172.29.236.11
  infra2:
    ip: 172.29.236.12
  infra3:
    ip: 172.29.236.13

# nova hypervisors
compute_hosts:
  compute1:
    ip: 172.29.236.16
  compute2:
    ip: 172.29.236.17

# ceilometer compute agent (telemetry)
metering-compute_hosts:
  compute1:
    ip: 172.29.236.16
  compute2:
    ip: 172.29.236.17

# cinder volume hosts (NFS-backed)
# The settings here are repeated for each infra host.
# They could instead be applied as global settings in
# user_variables, but are left here to illustrate that
# each container could have different storage targets.
storage_hosts:
  infra1:
    ip: 172.29.236.11
    container_vars:
      cinder_backends:
        limit_container_types: cinder_volume
      cinder_nfs_client:
        nfs_shares_config: /etc/cinder/nfs_shares
        shares:
          - ip: "172.29.244.15"
            share: "/vol/cinder"
  infra2:
    ip: 172.29.236.12
    container_vars:
      cinder_backends:
        limit_container_types: cinder_volume
      cinder_nfs_client:
        nfs_shares_config: /etc/cinder/nfs_shares
        shares:
          - ip: "172.29.244.15"
            share: "/vol/cinder"
  infra3:
    ip: 172.29.236.13
    container_vars:
      cinder_backends:
        limit_container_types: cinder_volume
      cinder_nfs_client:
        nfs_shares_config: /etc/cinder/nfs_shares
        shares:
          - ip: "172.29.244.15"
            share: "/vol/cinder"


@@ -0,0 +1,144 @@
---
cidr_networks:
  container: 172.29.236.0/22
  tunnel: 172.29.240.0/22
  storage: 172.29.244.0/22

used_ips:
  - "172.29.236.1,172.29.236.50"
  - "172.29.240.1,172.29.240.50"
  - "172.29.244.1,172.29.244.50"
  - "172.29.248.1,172.29.248.50"

global_overrides:
  internal_lb_vip_address: 172.29.236.11
  external_lb_vip_address: openstack.example.com
  tunnel_bridge: "br-vxlan"
  management_bridge: "br-mgmt"
  provider_networks:
    - network:
        container_bridge: "br-mgmt"
        container_type: "veth"
        container_interface: "eth1"
        ip_from_q: "container"
        type: "raw"
        group_binds:
          - all_containers
          - hosts
        is_container_address: true
        is_ssh_address: true
    - network:
        container_bridge: "br-vxlan"
        container_type: "veth"
        container_interface: "eth10"
        ip_from_q: "tunnel"
        type: "vxlan"
        range: "1:1000"
        net_name: "vxlan"
        group_binds:
          - neutron_linuxbridge_agent
    - network:
        container_bridge: "br-vlan"
        container_type: "veth"
        container_interface: "eth12"
        host_bind_override: "eth12"
        type: "flat"
        net_name: "flat"
        group_binds:
          - neutron_linuxbridge_agent
    - network:
        container_bridge: "br-vlan"
        container_type: "veth"
        container_interface: "eth11"
        type: "vlan"
        range: "1:1"
        net_name: "vlan"
        group_binds:
          - neutron_linuxbridge_agent
    - network:
        container_bridge: "br-storage"
        container_type: "veth"
        container_interface: "eth2"
        ip_from_q: "storage"
        type: "raw"
        group_binds:
          - glance_api
          - cinder_api
          - cinder_volume
          - nova_compute

###
### Infrastructure
###

# galera, memcache, rabbitmq, utility
shared-infra_hosts:
  infra1:
    ip: 172.29.236.11

# repository (apt cache, python packages, etc)
repo-infra_hosts:
  infra1:
    ip: 172.29.236.11

# load balancer
haproxy_hosts:
  infra1:
    ip: 172.29.236.11

###
### OpenStack
###

# keystone
identity_hosts:
  infra1:
    ip: 172.29.236.11

# cinder api services
storage-infra_hosts:
  infra1:
    ip: 172.29.236.11

# glance
image_hosts:
  infra1:
    ip: 172.29.236.11

# nova api, conductor, etc services
compute-infra_hosts:
  infra1:
    ip: 172.29.236.11

# heat
orchestration_hosts:
  infra1:
    ip: 172.29.236.11

# horizon
dashboard_hosts:
  infra1:
    ip: 172.29.236.11

# neutron server, agents (L3, etc)
network_hosts:
  infra1:
    ip: 172.29.236.11

# nova hypervisors
compute_hosts:
  compute1:
    ip: 172.29.236.12

# cinder storage host (LVM-backed)
storage_hosts:
  storage1:
    ip: 172.29.236.13
    container_vars:
      cinder_backends:
        limit_container_types: cinder_volume
        lvm:
          volume_group: cinder-volumes
          volume_driver: cinder.volume.drivers.lvm.LVMVolumeDriver
          volume_backend_name: LVM_iSCSI
          iscsi_ip_address: "172.29.244.13"
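
# Note: with an LVM (iSCSI) back end such as the one above, the
# cinder-volume service must run on the host (on metal) rather than in a
# container; see the env.d/cinder-volume example earlier in this patch.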


@@ -0,0 +1,9 @@
---
# This file contains an example of the global variable overrides
# which may need to be set for a production environment.
## Load Balancer Configuration (haproxy/keepalived)
haproxy_keepalived_external_vip_cidr: "1.2.3.4/32"
haproxy_keepalived_internal_vip_cidr: "172.29.236.0/22"
haproxy_keepalived_external_interface: ens2
haproxy_keepalived_internal_interface: br-mgmt
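
# The external VIP CIDR above (1.2.3.4/32) is a placeholder; replace it with
# the public address that clients will use to reach the external load
# balancer VIP, and adjust the interface names (ens2, br-mgmt) to match the
# target hosts.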