[DOCS] Add ceph production example configuration

Adds a Ceph configuration example for production deployment using RBD
backend for glance/cinder/nova.

Change-Id: I7757ceb4f2f367f514fcde8b4ab1130e8ef4868b
(cherry picked from commit 057bb30547)
Logan V 2017-11-06 11:32:15 -06:00
parent da37351ca0
commit 156e34e88a
14 changed files with 345 additions and 7 deletions


@@ -1,5 +1,5 @@
==================================
Appendix H: Advanced configuration
Appendix I: Advanced configuration
==================================
.. TODO: include intro on what advanced configuration is, whether it's required


@@ -1,5 +1,5 @@
====================================
Appendix I: Ceph-Ansible integration
Appendix J: Ceph-Ansible integration
====================================
OpenStack-Ansible allows `Ceph storage <https://ceph.com>`_ cluster integration


@@ -0,0 +1,131 @@
.. _production-ceph-environment-config:

=============================================================
Appendix D: Example Ceph production environment configuration
=============================================================

Introduction
~~~~~~~~~~~~

This appendix describes an example production environment for a working
OpenStack-Ansible (OSA) deployment with high availability services, using
Ceph as the backend for images, volumes, and instances.

This example environment has the following characteristics:

* Three infrastructure (control plane) hosts with ceph-mon containers
* Two compute hosts
* Three Ceph OSD storage hosts
* One log aggregation host
* Multiple Network Interface Cards (NICs) configured as bonded pairs for each
  host
* Full compute kit with the Telemetry service (ceilometer) included,
  with Ceph configured as a storage back end for the Image (glance) and
  Block Storage (cinder) services
* Internet access via the router address 172.29.236.1 on the
  Management Network

.. image:: figures/arch-layout-production-ceph.png
   :width: 100%

Network configuration
~~~~~~~~~~~~~~~~~~~~~

Network CIDR/VLAN assignments
-----------------------------

The following CIDR and VLAN assignments are used for this environment.

+------------------------+-----------------+------+
| Network                | CIDR            | VLAN |
+========================+=================+======+
| Management Network     | 172.29.236.0/22 | 10   |
+------------------------+-----------------+------+
| Tunnel (VXLAN) Network | 172.29.240.0/22 | 30   |
+------------------------+-----------------+------+
| Storage Network        | 172.29.244.0/22 | 20   |
+------------------------+-----------------+------+

IP assignments
--------------

The following host name and IP address assignments are used for this
environment.

+------------------+----------------+-------------------+----------------+
| Host name        | Management IP  | Tunnel (VXLAN) IP | Storage IP     |
+==================+================+===================+================+
| lb_vip_address   | 172.29.236.9   |                   |                |
+------------------+----------------+-------------------+----------------+
| infra1           | 172.29.236.11  |                   |                |
+------------------+----------------+-------------------+----------------+
| infra2           | 172.29.236.12  |                   |                |
+------------------+----------------+-------------------+----------------+
| infra3           | 172.29.236.13  |                   |                |
+------------------+----------------+-------------------+----------------+
| log1             | 172.29.236.14  |                   |                |
+------------------+----------------+-------------------+----------------+
| compute1         | 172.29.236.16  | 172.29.240.16     | 172.29.244.16  |
+------------------+----------------+-------------------+----------------+
| compute2         | 172.29.236.17  | 172.29.240.17     | 172.29.244.17  |
+------------------+----------------+-------------------+----------------+
| osd1             | 172.29.236.18  | 172.29.240.18     | 172.29.244.18  |
+------------------+----------------+-------------------+----------------+
| osd2             | 172.29.236.19  | 172.29.240.19     | 172.29.244.19  |
+------------------+----------------+-------------------+----------------+
| osd3             | 172.29.236.20  | 172.29.240.20     | 172.29.244.20  |
+------------------+----------------+-------------------+----------------+

Host network configuration
--------------------------

Each host will require the correct network bridges to be implemented. The
following is the ``/etc/network/interfaces`` file for ``infra1``.

.. note::

   If your environment does not have ``eth0``, but instead has ``p1p1`` or
   some other interface name, ensure that all references to ``eth0`` in all
   configuration files are replaced with the appropriate name. The same
   applies to additional network interfaces.

.. literalinclude:: ../../etc/network/interfaces.d/openstack_interface.cfg.prod.example
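The included interface file is not reproduced in this diff. As a minimal
sketch only, assuming a ``bond0`` bond carries the tagged management VLAN
(VLAN 10 from the table above) into the ``br-mgmt`` bridge, the management
stanza for ``infra1`` might look like::

    # Management bridge (illustrative sketch; bond0 is an assumed bond name)
    auto br-mgmt
    iface br-mgmt inet static
        bridge_stp off
        bridge_waitport 0
        bridge_fd 0
        # Enslave the tagged management subinterface of the bond
        bridge_ports bond0.10
        address 172.29.236.11
        netmask 255.255.252.0
        gateway 172.29.236.1
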
Deployment configuration
~~~~~~~~~~~~~~~~~~~~~~~~

Environment layout
------------------

The ``/etc/openstack_deploy/openstack_user_config.yml`` file defines the
environment layout.

The following configuration describes the layout for this environment.

.. literalinclude:: ../../etc/openstack_deploy/openstack_user_config.yml.prod-ceph.example

Environment customizations
--------------------------

The optionally deployed files in ``/etc/openstack_deploy/env.d`` allow the
customization of Ansible groups. This allows the deployer to set whether
the services will run in a container (the default) or on the host (on
metal).

For this environment, the ``cinder-volume`` service runs in a container on
the infrastructure hosts. To achieve this, implement
``/etc/openstack_deploy/env.d/cinder.yml`` with the following content:

.. literalinclude:: ../../etc/openstack_deploy/env.d/cinder-volume.yml.container.example
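The referenced ``cinder-volume.yml.container.example`` file is not shown in
this diff. A minimal sketch of such an ``env.d`` override, assuming the
standard ``container_skel`` layout used by OSA environment files, is:

.. code-block:: yaml

   ---
   # Run the cinder-volume service inside an LXC container on the
   # infrastructure hosts rather than on metal.
   container_skel:
     cinder_volumes_container:
       properties:
         is_metal: false
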
User variables
--------------

The ``/etc/openstack_deploy/user_variables.yml`` file defines the global
overrides for the default variables.

For this environment, implement the load balancer on the infrastructure
hosts. Ensure that keepalived is also configured with HAProxy by setting
the following in ``/etc/openstack_deploy/user_variables.yml``.

.. literalinclude:: ../../etc/openstack_deploy/user_variables.yml.prod-ceph.example


@@ -1,5 +1,5 @@
================================================
Appendix D: Customizing host and service layouts
Appendix E: Customizing host and service layouts
================================================
The default layout of containers and services in OpenStack-Ansible (OSA) is


@@ -1,7 +1,7 @@
.. _limited-connectivity-appendix:
================================================
Appendix G: Installing with limited connectivity
Appendix H: Installing with limited connectivity
================================================
Many playbooks and roles in OpenStack-Ansible retrieve dependencies from the


@@ -1,7 +1,7 @@
.. _network-appendix:
================================
Appendix F: Container networking
Appendix G: Container networking
================================
OpenStack-Ansible deploys Linux containers (LXC) and uses Linux


@@ -1,5 +1,5 @@
================================
Appendix J: Additional resources
Appendix K: Additional resources
================================
Ansible resources:


@@ -1,5 +1,5 @@
====================
Appendix E: Security
Appendix F: Security
====================
Security is one of the top priorities within OpenStack-Ansible (OSA), and many


@@ -8,6 +8,7 @@ Appendices
   app-config-test.rst
   app-config-prod.rst
   app-config-pod.rst
   app-config-prod-ceph.rst
   app-custom-layouts.rst
   app-security.rst
   app-networking.rst

(Two image files added: a 163 KiB binary image that is not shown, likely the
``figures/arch-layout-production-ceph.png`` diagram referenced above, and a
34 KiB image whose diff is suppressed because one or more lines are too long.)


@@ -0,0 +1,162 @@
---
cidr_networks: &cidr_networks
  container: 172.29.236.0/22
  tunnel: 172.29.240.0/22
  storage: 172.29.244.0/22

used_ips:
  - "172.29.236.1,172.29.236.50"
  - "172.29.240.1,172.29.240.50"
  - "172.29.244.1,172.29.244.50"
  - "172.29.248.1,172.29.248.50"

global_overrides:
  cidr_networks: *cidr_networks
  internal_lb_vip_address: 172.29.236.9
  #
  # The below domain name must resolve to an IP address
  # in the CIDR specified in haproxy_keepalived_external_vip_cidr.
  # If using different protocols (https/http) for the public/internal
  # endpoints the two addresses must be different.
  #
  external_lb_vip_address: openstack.example.com
  tunnel_bridge: "br-vxlan"
  management_bridge: "br-mgmt"
  provider_networks:
    - network:
        container_bridge: "br-mgmt"
        container_type: "veth"
        container_interface: "eth1"
        ip_from_q: "container"
        type: "raw"
        group_binds:
          - all_containers
          - hosts
        is_container_address: true
        is_ssh_address: true
    - network:
        container_bridge: "br-vxlan"
        container_type: "veth"
        container_interface: "eth10"
        ip_from_q: "tunnel"
        type: "vxlan"
        range: "1:1000"
        net_name: "vxlan"
        group_binds:
          - neutron_linuxbridge_agent
    - network:
        container_bridge: "br-vlan"
        container_type: "veth"
        container_interface: "eth12"
        host_bind_override: "eth12"
        type: "flat"
        net_name: "flat"
        group_binds:
          - neutron_linuxbridge_agent
    - network:
        container_bridge: "br-vlan"
        container_type: "veth"
        container_interface: "eth11"
        type: "vlan"
        range: "1:1"
        net_name: "vlan"
        group_binds:
          - neutron_linuxbridge_agent
    - network:
        container_bridge: "br-storage"
        container_type: "veth"
        container_interface: "eth2"
        ip_from_q: "storage"
        type: "raw"
        group_binds:
          - glance_api
          - cinder_api
          - cinder_volume
          - nova_compute
          - ceph-osd

###
### Infrastructure
###

_infrastructure_hosts: &infrastructure_hosts
  infra1:
    ip: 172.29.236.11
  infra2:
    ip: 172.29.236.12
  infra3:
    ip: 172.29.236.13

# nova hypervisors
compute_hosts: &compute_hosts
  compute1:
    ip: 172.29.236.16
  compute2:
    ip: 172.29.236.17

ceph-osd_hosts:
  osd1:
    ip: 172.29.236.18
  osd2:
    ip: 172.29.236.19
  osd3:
    ip: 172.29.236.20

# galera, memcache, rabbitmq, utility
shared-infra_hosts: *infrastructure_hosts

# ceph-mon containers
ceph-mon_hosts: *infrastructure_hosts

# repository (apt cache, python packages, etc)
repo-infra_hosts: *infrastructure_hosts

# load balancer
# Ideally the load balancer should not use the Infrastructure hosts.
# Dedicated hardware is best for improved performance and security.
haproxy_hosts: *infrastructure_hosts

# rsyslog server
log_hosts:
  log1:
    ip: 172.29.236.14

###
### OpenStack
###

# keystone
identity_hosts: *infrastructure_hosts

# cinder api services
storage-infra_hosts: *infrastructure_hosts

# cinder volume hosts (Ceph RBD-backed)
storage_hosts: *infrastructure_hosts

# glance
image_hosts: *infrastructure_hosts

# nova api, conductor, etc services
compute-infra_hosts: *infrastructure_hosts

# heat
orchestration_hosts: *infrastructure_hosts

# horizon
dashboard_hosts: *infrastructure_hosts

# neutron server, agents (L3, etc)
network_hosts: *infrastructure_hosts

# ceilometer (telemetry data collection)
metering-infra_hosts: *infrastructure_hosts

# aodh (telemetry alarm service)
metering-alarm_hosts: *infrastructure_hosts

# gnocchi (telemetry metrics storage)
metrics_hosts: *infrastructure_hosts

# ceilometer compute agent (telemetry data collection)
metering-compute_hosts: *compute_hosts


@@ -0,0 +1,41 @@
---
# This file contains an example of the global variable overrides
# which may need to be set for a production environment.
## Load Balancer Configuration (haproxy/keepalived)
haproxy_keepalived_external_vip_cidr: "1.2.3.4/32"
haproxy_keepalived_internal_vip_cidr: "172.29.236.0/22"
haproxy_keepalived_external_interface: ens2
haproxy_keepalived_internal_interface: br-mgmt
## Ceph cluster fsid (must be generated before first run)
## Generate a uuid using: python -c 'import uuid; print(str(uuid.uuid4()))'
generate_fsid: false
fsid: 116f14c4-7fe1-40e4-94eb-9240b63de5c1 # Replace with your generated UUID
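## As an alternative sketch (assuming the util-linux package is installed),
## the same kind of UUID can be generated with: uuidgen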
## ceph-ansible settings
## See https://github.com/ceph/ceph-ansible/tree/master/group_vars for
## additional configuration options available.
monitor_address_block: "{{ cidr_networks.container }}"
public_network: "{{ cidr_networks.container }}"
cluster_network: "{{ cidr_networks.storage }}"
osd_scenario: collocated
journal_size: 10240 # size in MB
# ceph-ansible automatically creates pools & keys for OpenStack services
openstack_config: true
cinder_ceph_client: cinder
glance_ceph_client: glance
glance_default_store: rbd
glance_rbd_store_pool: images
nova_libvirt_images_rbd_pool: vms
cinder_backends:
  RBD:
    volume_driver: cinder.volume.drivers.rbd.RBDDriver
    rbd_pool: volumes
    rbd_ceph_conf: /etc/ceph/ceph.conf
    rbd_store_chunk_size: 8
    volume_backend_name: rbddriver
    rbd_user: "{{ cinder_ceph_client }}"
    rbd_secret_uuid: "{{ fsid }}"
    report_discard_supported: true