[docs] Apply provider network config on per-group basis

This patch provides documentation for 'Provider Network Groups',
i.e. the ability to define provider networks that are applied only to
certain groups. This allows deployers to configure provider networks
across a heterogeneous environment rather than requiring a homogeneous
one. It also updates the documentation with examples of single- and
multi-interface deployments.

The docs call out the steps necessary to modify the
openstack_user_config.yml file so that provider networks are configured
on a per-group basis. Only groups listed under group_binds for a given
provider network will utilize the given vars.

Change-Id: I9611810722283a8a0e4c57e60576bd0c3506eacc
Closes-Bug: 1814686
James Denton 2019-02-05 19:19:26 +00:00
parent 52f37cdac0
commit df43c5119d
17 changed files with 1559 additions and 0 deletions


@@ -104,3 +104,18 @@ The following diagram shows how virtual machines connect to the ``br-vlan`` and
``br-vxlan`` bridges and send traffic to the network outside the host:
.. image:: ../figures/networking-compute.png
When Neutron agents are deployed "on metal" on a network node or collapsed
infra/network node, the ``Neutron Agents`` container and its respective virtual
interfaces are no longer implemented. In addition, setting
``host_bind_override`` when defining provider networks allows
Neutron to interface directly with a physical interface or bond instead of the
``br-vlan`` bridge. The following diagram reflects the differences in the
virtual network layout.
.. image:: ../figures/networking-neutronagents-nobridge.png
The absence of ``br-vlan`` in the path of instance traffic is also reflected on
compute nodes, as shown in the following diagram.
.. image:: ../figures/networking-compute-nobridge.png
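
As a minimal sketch (the ``bond1`` interface name and VLAN range are only
illustrative, taken from the multi-bond example elsewhere in this
documentation), a provider network definition in ``openstack_user_config.yml``
that uses ``host_bind_override`` to bypass ``br-vlan`` for on-metal agents
might look like this:

.. code-block:: yaml

   - network:
       container_bridge: "br-vlan"
       container_type: "veth"
       host_bind_override: "bond1"
       type: "vlan"
       range: "101:200,301:400"
       net_name: "physnet1"
       group_binds:
         - neutron_linuxbridge_agent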

(Eight binary image files for the new figures are added by this commit; not shown.)


@@ -23,8 +23,10 @@ For in-depth technical information, see the
:maxdepth: 1
aio/quickstart.rst
network-arch/example.rst
test/example.rst
prod/example.rst
prod/provnet_groups.rst
limited-connectivity/index.rst
l3pods/example.rst
ceph/full-deploy.rst


@@ -0,0 +1,137 @@
.. _production-network-archs:
=====================
Network architectures
=====================
OpenStack-Ansible supports a number of different network architectures,
and can be deployed using a single network interface for non-production
workloads or using multiple network interfaces or bonded interfaces for
production workloads.
The OpenStack-Ansible reference architecture segments traffic using VLANs
across multiple network interfaces or bonds. Common networks used in an
OpenStack-Ansible deployment can be observed in the following table:
+-----------------------+-----------------+------+
| Network | CIDR | VLAN |
+=======================+=================+======+
| Management Network | 172.29.236.0/22 | 10 |
+-----------------------+-----------------+------+
| Overlay Network | 172.29.240.0/22 | 30 |
+-----------------------+-----------------+------+
| Storage Network | 172.29.244.0/22 | 20 |
+-----------------------+-----------------+------+
The ``Management Network``, also referred to as the ``container network``,
provides management of and communication between the infrastructure
and OpenStack services running in containers or on metal. The
``management network`` uses a dedicated VLAN typically connected to the
``br-mgmt`` bridge, and may also serve as the primary interface used
to interact with the server via SSH.
The ``Overlay Network``, also referred to as the ``tunnel network``,
provides connectivity between hosts for the purpose of tunnelling
encapsulated traffic using VXLAN, GENEVE, or other protocols. The
``overlay network`` uses a dedicated VLAN typically connected to the
``br-vxlan`` bridge.
The ``Storage Network`` provides segregated access to Block Storage from
OpenStack services such as Cinder and Glance. The ``storage network`` uses
a dedicated VLAN typically connected to the ``br-storage`` bridge.
.. note::
The CIDRs and VLANs listed for each network are examples and may
be different in your environment.
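
These networks map directly to the ``cidr_networks`` section of
``/etc/openstack_deploy/openstack_user_config.yml``. A minimal sketch,
using the example addresses from the table above, might be:

.. code-block:: yaml

   cidr_networks:
     container: 172.29.236.0/22   # Management network (VLAN 10, br-mgmt)
     tunnel: 172.29.240.0/22      # Overlay network (VLAN 30, br-vxlan)
     storage: 172.29.244.0/22     # Storage network (VLAN 20, br-storage)
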
Additional VLANs may be required for the following purposes:
* External provider networks for Floating IPs and instances
* Self-service project/tenant networks for instances
* Other OpenStack services
Network interfaces
~~~~~~~~~~~~~~~~~~
Single interface or bond
------------------------
OpenStack-Ansible supports the use of a single interface or set of bonded
interfaces that carry traffic for OpenStack services as well as instances.
The following diagram demonstrates hosts using a single interface:
.. image:: ../figures/network-arch-single-interface.png
:width: 100%
:alt: Network Interface Layout - Single Interface
The following diagram demonstrates hosts using a single bond:
.. image:: ../figures/network-arch-single-bond.png
:width: 100%
:alt: Network Interface Layout - Single Bond
Each host will require the correct network bridges to be implemented.
The following is the ``/etc/network/interfaces`` file for ``infra1``
using a single bond.
.. note::
If your environment does not have ``eth0``, but instead has ``p1p1`` or some
other interface name, ensure that all references to ``eth0`` in all
configuration files are replaced with the appropriate name. The same applies
to additional network interfaces.
.. literalinclude:: ../../../../etc/network/interfaces.d/openstack_interface.cfg.singlebond.example
Multiple interfaces or bonds
----------------------------
OpenStack-Ansible supports the use of multiple interfaces or sets of bonded
interfaces that carry traffic for OpenStack services and instances.
The following diagram demonstrates hosts using multiple interfaces:
.. image:: ../figures/network-arch-multiple-interfaces.png
:width: 100%
:alt: Network Interface Layout - Multiple Interfaces
The following diagram demonstrates hosts using multiple bonds:
.. image:: ../figures/network-arch-multiple-bonds.png
:width: 100%
:alt: Network Interface Layout - Multiple Bonds
Each host will require the correct network bridges to be implemented. The
following is the ``/etc/network/interfaces`` file for ``infra1`` using
multiple bonded interfaces.
.. note::
If your environment does not have ``eth0``, but instead has ``p1p1`` or
some other interface name, ensure that all references to ``eth0`` in all
configuration files are replaced with the appropriate name. The same
applies to additional network interfaces.
.. literalinclude:: ../../../../etc/network/interfaces.d/openstack_interface.cfg.multibond.example
Additional resources
~~~~~~~~~~~~~~~~~~~~
For more information on how to properly configure network interface files
and OpenStack-Ansible configuration files for different deployment scenarios,
please refer to the following:
* :dev_docs:`Configuring a test environment
<user/test/example.html>`
* :dev_docs:`Configuring a homogeneous production environment
<user/prod/example.html>`
* :dev_docs:`Using provider network groups for a heterogeneous environment
<user/prod/provnet_groups.html>`
For network agent and container networking topologies, please refer to the
following:
* :dev_docs:`Container networking architecture
<reference/architecture/container-networking.html>`


@@ -0,0 +1,167 @@
.. _provider-network-groups-config:
=======================
Provider network groups
=======================
Many network configuration examples assume a homogeneous environment,
where each server is configured identically and consistent network
interfaces and interface names can be assumed across all hosts.
Recent changes to OSA enable deployers to define provider networks
that apply to particular inventory groups, allowing for a heterogeneous
network configuration within a cloud environment. New groups can be created,
or existing inventory groups, such as ``network_hosts`` or
``compute_hosts``, can be used to ensure certain configurations are applied
only to hosts that meet the given parameters.
Before reading this document, please review the following scenario:
* :dev_docs:`Production environment <user/prod/example.html>`
This example environment has the following characteristics:
* A ``network_hosts`` group consisting of three collapsed
infrastructure/network (control plane) hosts
* A ``compute_hosts`` group consisting of two compute hosts
* Multiple Network Interface Cards (NIC) used as provider network interfaces
that vary between hosts
.. note::
The groups ``network_hosts`` and ``compute_hosts`` are pre-defined groups
in an OpenStack-Ansible deployment.
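
Membership in these groups is taken from ``openstack_user_config.yml``. A
brief sketch, with host names and addresses matching the example
configuration referenced below, might look like this:

.. code-block:: yaml

   network_hosts:
     infra1:
       ip: 172.29.236.11
     infra2:
       ip: 172.29.236.12
     infra3:
       ip: 172.29.236.13

   compute_hosts:
     compute1:
       ip: 172.29.236.16
     compute2:
       ip: 172.29.236.17
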
The following diagram demonstrates servers with different network interface
names:
.. image:: ../figures/arch-layout-provnet-groups.png
:width: 100%
:alt: Production environment host layout
In this example environment, infrastructure/network nodes hosting L2/L3/DHCP
agents will utilize an interface named ``ens1f0`` for the provider network
``physnet1``. Compute nodes, on the other hand, will utilize an interface
named ``ens2f0`` for the same ``physnet1`` provider network.
.. note::
Differences in network interface names may be the result of a difference in
drivers and/or PCI slot locations.
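
In condensed form (the complete configuration appears in the example below,
and ``container_interface`` is omitted here because the agents run on metal),
the same ``physnet1`` network is therefore defined twice, with a different
``host_bind_override`` value bound to each group:

.. code-block:: yaml

   - network:
       container_bridge: "br-vlan"
       container_type: "veth"
       host_bind_override: "ens1f0"
       type: "vlan"
       range: "101:200,301:400"
       net_name: "physnet1"
       group_binds:
         - network_hosts
   - network:
       container_bridge: "br-vlan"
       container_type: "veth"
       host_bind_override: "ens2f0"
       type: "vlan"
       range: "101:200,301:400"
       net_name: "physnet1"
       group_binds:
         - compute_hosts
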
Deployment configuration
~~~~~~~~~~~~~~~~~~~~~~~~
Environment layout
------------------
The ``/etc/openstack_deploy/openstack_user_config.yml`` file defines the
environment layout.
The following configuration describes the layout for this environment.
.. literalinclude:: ../../../../etc/openstack_deploy/openstack_user_config.yml.provnet-group.example
Hosts in the ``network_hosts`` group will map ``physnet1`` to the ``ens1f0``
interface, while hosts in the ``compute_hosts`` group will map ``physnet1``
to the ``ens2f0`` interface. Additional provider mappings can be established
using the same format in a separate definition.
An additional provider interface definition named ``physnet2`` using different
interfaces between hosts may resemble the following:
.. code-block:: yaml

   - network:
       container_bridge: "br-vlan2"
       container_type: "veth"
       container_interface: "eth13"
       host_bind_override: "ens1f1"
       type: "vlan"
       range: "2000:2999"
       net_name: "physnet2"
       group_binds:
         - network_hosts
   - network:
       container_bridge: "br-vlan2"
       container_type: "veth"
       host_bind_override: "ens2f1"
       type: "vlan"
       range: "2000:2999"
       net_name: "physnet2"
       group_binds:
         - compute_hosts
.. note::
The ``container_interface`` parameter is only necessary when Neutron
agents are run in containers, and can be excluded in many cases. The
``container_bridge`` and ``container_type`` parameters also relate to
infrastructure containers, but should remain defined for legacy purposes.
Custom Groups
-------------
Custom inventory groups can be created to assist in segmenting hosts beyond
the built-in groups provided by OpenStack-Ansible.
Before creating custom groups, please review the following:
* :dev_docs:`OpenStack-Ansible Inventory
<contributor/inventory.html>`
* :dev_docs:`Configuring the inventory
<reference/inventory/configure-inventory.html>`
The following diagram demonstrates how a custom group can be used to further
segment hosts:
.. image:: ../figures/arch-layout-provnet-groups-custom.png
:width: 100%
:alt: Production environment host layout
When creating a custom group, first create a skeleton in
``/etc/openstack_deploy/env.d/``. The following is an example of an inventory
skeleton for a group named ``custom2_hosts`` that will consist of bare metal
hosts, and has been created at
``/etc/openstack_deploy/env.d/custom2_hosts.yml``.
.. code-block:: yaml

   ---
   physical_skel:
     custom2_containers:
       belongs_to:
         - all_containers
     custom2_hosts:
       belongs_to:
         - hosts
Define the group and its members in a corresponding file in
``/etc/openstack_deploy/conf.d/``. The following is an example of a group
named ``custom2_hosts`` defined in
``/etc/openstack_deploy/conf.d/custom2_hosts.yml`` consisting of a single
member, ``compute2``:
.. code-block:: yaml

   ---
   # custom example
   custom2_hosts:
     compute2:
       ip: 172.29.236.17
The custom group can then be specified when creating a provider network, as
shown here:
.. code-block:: yaml

   - network:
       container_bridge: "br-vlan"
       container_type: "veth"
       host_bind_override: "ens8f1"
       type: "vlan"
       range: "101:200,301:400"
       net_name: "physnet1"
       group_binds:
         - custom2_hosts


@@ -0,0 +1,137 @@
# This is a multi-NIC bonded configuration to implement the required bridges
# for OpenStack-Ansible. This illustrates the configuration of the first
# Infrastructure host and the IP addresses assigned should be adapted
# for implementation on the other hosts.
#
# After implementing this configuration, the host will need to be
# rebooted.
# Assuming that eth0/1 and eth2/3 are dual-port NICs, we pair
# eth0 with eth2 and eth1 with eth3 for increased resiliency
# in the case of one interface card failing.
auto eth0
iface eth0 inet manual
bond-master bond0
bond-primary eth0
auto eth1
iface eth1 inet manual
bond-master bond1
bond-primary eth1
auto eth2
iface eth2 inet manual
bond-master bond0
auto eth3
iface eth3 inet manual
bond-master bond1
# Create a bonded interface. Note that the "bond-slaves" is set to none. This
# is because the bond-master has already been set in the raw interfaces for
# the new bond0.
auto bond0
iface bond0 inet manual
bond-slaves none
bond-mode active-backup
bond-miimon 100
bond-downdelay 200
bond-updelay 200
# This bond will carry VLAN and VXLAN traffic to ensure isolation from
# control plane traffic on bond0.
auto bond1
iface bond1 inet manual
bond-slaves none
bond-mode active-backup
bond-miimon 100
bond-downdelay 250
bond-updelay 250
# Container/Host management VLAN interface
auto bond0.10
iface bond0.10 inet manual
vlan-raw-device bond0
# OpenStack Networking VXLAN (tunnel/overlay) VLAN interface
auto bond1.30
iface bond1.30 inet manual
vlan-raw-device bond1
# Storage network VLAN interface (optional)
auto bond0.20
iface bond0.20 inet manual
vlan-raw-device bond0
# Container/Host management bridge
auto br-mgmt
iface br-mgmt inet static
bridge_stp off
bridge_waitport 0
bridge_fd 0
bridge_ports bond0.10
address 172.29.236.11
netmask 255.255.252.0
gateway 172.29.236.1
dns-nameservers 8.8.8.8 8.8.4.4
# OpenStack Networking VXLAN (tunnel/overlay) bridge
#
# Nodes hosting Neutron agents must have an IP address on this interface,
# including COMPUTE, NETWORK, and collapsed INFRA/NETWORK nodes.
#
auto br-vxlan
iface br-vxlan inet static
bridge_stp off
bridge_waitport 0
bridge_fd 0
bridge_ports bond1.30
address 172.29.240.16
netmask 255.255.252.0
# OpenStack Networking VLAN bridge
#
# The "br-vlan" bridge is no longer necessary for deployments unless Neutron
# agents are deployed in a container. Instead, a direct interface such as
# bond1 can be specified via the "host_bind_override" override when defining
# provider networks.
#
#auto br-vlan
#iface br-vlan inet manual
# bridge_stp off
# bridge_waitport 0
# bridge_fd 0
# bridge_ports bond1
# compute1 Network VLAN bridge
#auto br-vlan
#iface br-vlan inet manual
# bridge_stp off
# bridge_waitport 0
# bridge_fd 0
#
# Storage bridge (optional)
#
# Only the COMPUTE and STORAGE nodes must have an IP address
# on this bridge. When used by infrastructure nodes, the
# IP addresses are assigned to containers which use this
# bridge.
#
auto br-storage
iface br-storage inet manual
bridge_stp off
bridge_waitport 0
bridge_fd 0
bridge_ports bond0.20
# compute1 Storage bridge
#auto br-storage
#iface br-storage inet static
# bridge_stp off
# bridge_waitport 0
# bridge_fd 0
# bridge_ports bond0.20
# address 172.29.244.16
# netmask 255.255.252.0


@@ -0,0 +1,124 @@
# This is a multi-NIC bonded configuration to implement the required bridges
# for OpenStack-Ansible. This illustrates the configuration of the first
# Infrastructure host and the IP addresses assigned should be adapted
# for implementation on the other hosts.
#
# After implementing this configuration, the host will need to be
# rebooted.
# Assuming that eth0/1 and eth2/3 are dual-port NICs, we pair
# eth0 with eth2 for increased resiliency in the case of one interface card
# failing.
auto eth0
iface eth0 inet manual
bond-master bond0
bond-primary eth0
auto eth1
iface eth1 inet manual
auto eth2
iface eth2 inet manual
bond-master bond0
auto eth3
iface eth3 inet manual
# Create a bonded interface. Note that the "bond-slaves" is set to none. This
# is because the bond-master has already been set in the raw interfaces for
# the new bond0.
auto bond0
iface bond0 inet manual
bond-slaves none
bond-mode active-backup
bond-miimon 100
bond-downdelay 200
bond-updelay 200
# Container/Host management VLAN interface
auto bond0.10
iface bond0.10 inet manual
vlan-raw-device bond0
# OpenStack Networking VXLAN (tunnel/overlay) VLAN interface
auto bond0.30
iface bond0.30 inet manual
vlan-raw-device bond0
# Storage network VLAN interface (optional)
auto bond0.20
iface bond0.20 inet manual
vlan-raw-device bond0
# Container/Host management bridge
auto br-mgmt
iface br-mgmt inet static
bridge_stp off
bridge_waitport 0
bridge_fd 0
bridge_ports bond0.10
address 172.29.236.11
netmask 255.255.252.0
gateway 172.29.236.1
dns-nameservers 8.8.8.8 8.8.4.4
# OpenStack Networking VXLAN (tunnel/overlay) bridge
#
# Nodes hosting Neutron agents must have an IP address on this interface,
# including COMPUTE, NETWORK, and collapsed INFRA/NETWORK nodes.
#
auto br-vxlan
iface br-vxlan inet static
bridge_stp off
bridge_waitport 0
bridge_fd 0
bridge_ports bond0.30
address 172.29.240.16
netmask 255.255.252.0
# OpenStack Networking VLAN bridge
#
# The "br-vlan" bridge is no longer necessary for deployments unless Neutron
# agents are deployed in a container. Instead, a direct interface such as
# bond0 can be specified via the "host_bind_override" override when defining
# provider networks.
#
#auto br-vlan
#iface br-vlan inet manual
# bridge_stp off
# bridge_waitport 0
# bridge_fd 0
# bridge_ports bond0
# compute1 Network VLAN bridge
#auto br-vlan
#iface br-vlan inet manual
# bridge_stp off
# bridge_waitport 0
# bridge_fd 0
#
# Storage bridge (optional)
#
# Only the COMPUTE and STORAGE nodes must have an IP address
# on this bridge. When used by infrastructure nodes, the
# IP addresses are assigned to containers which use this
# bridge.
#
auto br-storage
iface br-storage inet manual
bridge_stp off
bridge_waitport 0
bridge_fd 0
bridge_ports bond0.20
# compute1 Storage bridge
#auto br-storage
#iface br-storage inet static
# bridge_stp off
# bridge_waitport 0
# bridge_fd 0
# bridge_ports bond0.20
# address 172.29.244.16
# netmask 255.255.252.0


@@ -0,0 +1,313 @@
---
cidr_networks:
container: 172.29.236.0/22
tunnel: 172.29.240.0/22
storage: 172.29.244.0/22
used_ips:
- "172.29.236.1,172.29.236.50"
- "172.29.240.1,172.29.240.50"
- "172.29.244.1,172.29.244.50"
- "172.29.248.1,172.29.248.50"
global_overrides:
internal_lb_vip_address: 172.29.236.9
#
# The below domain name must resolve to an IP address
# in the CIDR specified in haproxy_keepalived_external_vip_cidr.
# If using different protocols (https/http) for the public/internal
# endpoints the two addresses must be different.
#
external_lb_vip_address: openstack.example.com
management_bridge: "br-mgmt"
provider_networks:
- network:
container_bridge: "br-mgmt"
container_type: "veth"
container_interface: "eth1"
ip_from_q: "container"
type: "raw"
group_binds:
- all_containers
- hosts
is_container_address: true
#
# The below provider network defines details related to overlay traffic,
# including the range of VXLAN VNIs to assign to project/tenant networks
# and other attributes.
#
- network:
container_bridge: "br-vxlan"
container_type: "veth"
container_interface: "eth10"
ip_from_q: "tunnel"
type: "vxlan"
range: "1:1000"
net_name: "vxlan"
group_binds:
- neutron_linuxbridge_agent
#
# The below provider network defines details related to a given provider
# network: physnet1. Details include the name of the veth interface to
# connect to the bridge when agent on_metal is False (container_interface)
# or the physical interface to connect to the bridge when agent on_metal
# is True (host_bind_override), as well as the network type. The provider
# network name (net_name) will be used to build a physical network mapping
# to a network interface using either container_interface or
# host_bind_override (when defined).
#
# The network details will be used to populate the respective network
# configuration file(s) on the members of the listed groups. In this
# example, host_bind_override specifies the bond1 interface and applies
# only to the members of the neutron_linuxbridge_agent inventory group.
#
- network:
container_bridge: "br-vlan"
container_type: "veth"
container_interface: "eth11"
host_bind_override: "bond1"
type: "vlan"
range: "101:200,301:400"
net_name: "physnet1"
group_binds:
- neutron_linuxbridge_agent
#
# The below provider network defines details related to storage traffic.
#
- network:
container_bridge: "br-storage"
container_type: "veth"
container_interface: "eth2"
ip_from_q: "storage"
type: "raw"
group_binds:
- glance_api
- cinder_api
- cinder_volume
- nova_compute
###
### Infrastructure
###
# galera, memcache, rabbitmq, utility
shared-infra_hosts:
infra1:
ip: 172.29.236.11
container_vars:
# Optional | Example setting the container_tech for a target host.
container_tech: lxc
infra2:
ip: 172.29.236.12
container_vars:
# Optional | Example setting the container_tech for a target host.
container_tech: nspawn
infra3:
ip: 172.29.236.13
# repository (apt cache, python packages, etc)
repo-infra_hosts:
infra1:
ip: 172.29.236.11
infra2:
ip: 172.29.236.12
infra3:
ip: 172.29.236.13
# load balancer
# Ideally the load balancer should not use the Infrastructure hosts.
# Dedicated hardware is best for improved performance and security.
haproxy_hosts:
infra1:
ip: 172.29.236.11
infra2:
ip: 172.29.236.12
infra3:
ip: 172.29.236.13
# rsyslog server
log_hosts:
log1:
ip: 172.29.236.14
###
### OpenStack
###
# keystone
identity_hosts:
infra1:
ip: 172.29.236.11
infra2:
ip: 172.29.236.12
infra3:
ip: 172.29.236.13
# cinder api services
storage-infra_hosts:
infra1:
ip: 172.29.236.11
infra2:
ip: 172.29.236.12
infra3:
ip: 172.29.236.13
# glance
# The settings here are repeated for each infra host.
# They could instead be applied as global settings in
# user_variables, but are left here to illustrate that
# each container could have different storage targets.
image_hosts:
infra1:
ip: 172.29.236.11
container_vars:
limit_container_types: glance
glance_nfs_client:
- server: "172.29.244.15"
remote_path: "/images"
local_path: "/var/lib/glance/images"
type: "nfs"
options: "_netdev,auto"
infra2:
ip: 172.29.236.12
container_vars:
limit_container_types: glance
glance_nfs_client:
- server: "172.29.244.15"
remote_path: "/images"
local_path: "/var/lib/glance/images"
type: "nfs"
options: "_netdev,auto"
infra3:
ip: 172.29.236.13
container_vars:
limit_container_types: glance
glance_nfs_client:
- server: "172.29.244.15"
remote_path: "/images"
local_path: "/var/lib/glance/images"
type: "nfs"
options: "_netdev,auto"
# nova api, conductor, etc services
compute-infra_hosts:
infra1:
ip: 172.29.236.11
infra2:
ip: 172.29.236.12
infra3:
ip: 172.29.236.13
# heat
orchestration_hosts:
infra1:
ip: 172.29.236.11
infra2:
ip: 172.29.236.12
infra3:
ip: 172.29.236.13
# horizon
dashboard_hosts:
infra1:
ip: 172.29.236.11
infra2:
ip: 172.29.236.12
infra3:
ip: 172.29.236.13
# neutron server, agents (L3, etc)
network_hosts:
infra1:
ip: 172.29.236.11
infra2:
ip: 172.29.236.12
infra3:
ip: 172.29.236.13
# ceilometer (telemetry data collection)
metering-infra_hosts:
infra1:
ip: 172.29.236.11
infra2:
ip: 172.29.236.12
infra3:
ip: 172.29.236.13
# aodh (telemetry alarm service)
metering-alarm_hosts:
infra1:
ip: 172.29.236.11
infra2:
ip: 172.29.236.12
infra3:
ip: 172.29.236.13
# gnocchi (telemetry metrics storage)
metrics_hosts:
infra1:
ip: 172.29.236.11
infra2:
ip: 172.29.236.12
infra3:
ip: 172.29.236.13
# nova hypervisors
compute_hosts:
compute1:
ip: 172.29.236.16
compute2:
ip: 172.29.236.17
# ceilometer compute agent (telemetry data collection)
metering-compute_hosts:
compute1:
ip: 172.29.236.16
compute2:
ip: 172.29.236.17
# cinder volume hosts (NFS-backed)
# The settings here are repeated for each infra host.
# They could instead be applied as global settings in
# user_variables, but are left here to illustrate that
# each container could have different storage targets.
storage_hosts:
infra1:
ip: 172.29.236.11
container_vars:
cinder_backends:
limit_container_types: cinder_volume
nfs_volume:
volume_backend_name: NFS_VOLUME1
volume_driver: cinder.volume.drivers.nfs.NfsDriver
nfs_mount_options: "rsize=65535,wsize=65535,timeo=1200,actimeo=120"
nfs_shares_config: /etc/cinder/nfs_shares
shares:
- ip: "172.29.244.15"
share: "/vol/cinder"
infra2:
ip: 172.29.236.12
container_vars:
cinder_backends:
limit_container_types: cinder_volume
nfs_volume:
volume_backend_name: NFS_VOLUME1
volume_driver: cinder.volume.drivers.nfs.NfsDriver
nfs_mount_options: "rsize=65535,wsize=65535,timeo=1200,actimeo=120"
nfs_shares_config: /etc/cinder/nfs_shares
shares:
- ip: "172.29.244.15"
share: "/vol/cinder"
infra3:
ip: 172.29.236.13
container_vars:
cinder_backends:
limit_container_types: cinder_volume
nfs_volume:
volume_backend_name: NFS_VOLUME1
volume_driver: cinder.volume.drivers.nfs.NfsDriver
nfs_mount_options: "rsize=65535,wsize=65535,timeo=1200,actimeo=120"
nfs_shares_config: /etc/cinder/nfs_shares
shares:
- ip: "172.29.244.15"
share: "/vol/cinder"


@@ -0,0 +1,351 @@
---
cidr_networks:
container: 172.29.236.0/22
tunnel: 172.29.240.0/22
storage: 172.29.244.0/22
used_ips:
- "172.29.236.1,172.29.236.50"
- "172.29.240.1,172.29.240.50"
- "172.29.244.1,172.29.244.50"
- "172.29.248.1,172.29.248.50"
global_overrides:
internal_lb_vip_address: 172.29.236.9
#
# The below domain name must resolve to an IP address
# in the CIDR specified in haproxy_keepalived_external_vip_cidr.
# If using different protocols (https/http) for the public/internal
# endpoints the two addresses must be different.
#
external_lb_vip_address: openstack.example.com
management_bridge: "br-mgmt"
provider_networks:
- network:
container_bridge: "br-mgmt"
container_type: "veth"
container_interface: "eth1"
ip_from_q: "container"
type: "raw"
group_binds:
- all_containers
- hosts
is_container_address: true
#
# The below provider network defines details related to vxlan traffic,
# including the range of VNIs to assign to project/tenant networks and
# other attributes.
#
# The network details will be used to populate the respective network
# configuration file(s) on the members of the listed groups.
#
- network:
container_bridge: "br-vxlan"
container_type: "veth"
container_interface: "eth10"
ip_from_q: "tunnel"
type: "vxlan"
range: "1:1000"
net_name: "vxlan"
group_binds:
- network_hosts
- compute_hosts
#
# The below provider network(s) define details related to a given provider
# network: physnet1. Details include the name of the veth interface to
# connect to the bridge when agent on_metal is False (container_interface)
# or the physical interface to connect to the bridge when agent on_metal
# is True (host_bind_override), as well as the network type. The provider
# network name (net_name) will be used to build a physical network mapping
# to a network interface; either container_interface or host_bind_override
# (when defined).
#
# The network details will be used to populate the respective network
# configuration file(s) on the members of the listed groups. In this
# example, host_bind_override specifies the ens1f0 interface and applies
# only to the members of network_hosts:
#
- network:
container_bridge: "br-vlan"
container_type: "veth"
container_interface: "eth12"
host_bind_override: "ens1f0"
type: "flat"
net_name: "physnet1"
group_binds:
- network_hosts
- network:
container_bridge: "br-vlan"
container_type: "veth"
container_interface: "eth11"
host_bind_override: "ens1f0"
type: "vlan"
range: "101:200,301:400"
net_name: "physnet1"
group_binds:
- network_hosts
#
# The below provider network(s) also define details related to the
# physnet1 provider network. In this example, however, host_bind_override
# specifies the ens2f0 interface and applies only to the members of
# compute_hosts:
#
- network:
container_bridge: "br-vlan"
container_type: "veth"
container_interface: "eth12"
host_bind_override: "ens2f0"
type: "flat"
net_name: "physnet1"
group_binds:
- compute_hosts
- network:
container_bridge: "br-vlan"
container_type: "veth"
container_interface: "eth11"
host_bind_override: "ens2f0"
type: "vlan"
range: "101:200,301:400"
net_name: "physnet1"
group_binds:
- compute_hosts
#
# The below provider network defines details related to storage traffic.
#
- network:
container_bridge: "br-storage"
container_type: "veth"
container_interface: "eth2"
ip_from_q: "storage"
type: "raw"
group_binds:
- glance_api
- cinder_api
- cinder_volume
- nova_compute
###
### Infrastructure
###
# galera, memcache, rabbitmq, utility
shared-infra_hosts:
infra1:
ip: 172.29.236.11
container_vars:
# Optional | Example setting the container_tech for a target host.
container_tech: lxc
infra2:
ip: 172.29.236.12
container_vars:
# Optional | Example setting the container_tech for a target host.
container_tech: nspawn
infra3:
ip: 172.29.236.13
# repository (apt cache, python packages, etc)
repo-infra_hosts:
infra1:
ip: 172.29.236.11
infra2:
ip: 172.29.236.12
infra3:
ip: 172.29.236.13
# load balancer
# Ideally the load balancer should not use the Infrastructure hosts.
# Dedicated hardware is best for improved performance and security.
haproxy_hosts:
infra1:
ip: 172.29.236.11
infra2:
ip: 172.29.236.12
infra3:
ip: 172.29.236.13
# rsyslog server
log_hosts:
log1:
ip: 172.29.236.14
###
### OpenStack
###
# keystone
identity_hosts:
infra1:
ip: 172.29.236.11
infra2:
ip: 172.29.236.12
infra3:
ip: 172.29.236.13
# cinder api services
storage-infra_hosts:
infra1:
ip: 172.29.236.11
infra2:
ip: 172.29.236.12
infra3:
ip: 172.29.236.13
# glance
# The settings here are repeated for each infra host.
# They could instead be applied as global settings in
# user_variables, but are left here to illustrate that
# each container could have different storage targets.
image_hosts:
infra1:
ip: 172.29.236.11
container_vars:
limit_container_types: glance
glance_nfs_client:
- server: "172.29.244.15"
remote_path: "/images"
local_path: "/var/lib/glance/images"
type: "nfs"
options: "_netdev,auto"
infra2:
ip: 172.29.236.12
container_vars:
limit_container_types: glance
glance_nfs_client:
- server: "172.29.244.15"
remote_path: "/images"
local_path: "/var/lib/glance/images"
type: "nfs"
options: "_netdev,auto"
infra3:
ip: 172.29.236.13
container_vars:
limit_container_types: glance
glance_nfs_client:
- server: "172.29.244.15"
remote_path: "/images"
local_path: "/var/lib/glance/images"
type: "nfs"
options: "_netdev,auto"
# nova api, conductor, etc services
compute-infra_hosts:
infra1:
ip: 172.29.236.11
infra2:
ip: 172.29.236.12
infra3:
ip: 172.29.236.13
# heat
orchestration_hosts:
infra1:
ip: 172.29.236.11
infra2:
ip: 172.29.236.12
infra3:
ip: 172.29.236.13
# horizon
dashboard_hosts:
infra1:
ip: 172.29.236.11
infra2:
ip: 172.29.236.12
infra3:
ip: 172.29.236.13
# neutron server, agents (L3, etc)
network_hosts:
infra1:
ip: 172.29.236.11
infra2:
ip: 172.29.236.12
infra3:
ip: 172.29.236.13
# ceilometer (telemetry data collection)
metering-infra_hosts:
infra1:
ip: 172.29.236.11
infra2:
ip: 172.29.236.12
infra3:
ip: 172.29.236.13
# aodh (telemetry alarm service)
metering-alarm_hosts:
infra1:
ip: 172.29.236.11
infra2:
ip: 172.29.236.12
infra3:
ip: 172.29.236.13
# gnocchi (telemetry metrics storage)
metrics_hosts:
infra1:
ip: 172.29.236.11
infra2:
ip: 172.29.236.12
infra3:
ip: 172.29.236.13
# nova hypervisors
compute_hosts:
compute1:
ip: 172.29.236.16
compute2:
ip: 172.29.236.17
# ceilometer compute agent (telemetry data collection)
metering-compute_hosts:
compute1:
ip: 172.29.236.16
compute2:
ip: 172.29.236.17
# cinder volume hosts (NFS-backed)
# The settings here are repeated for each infra host.
# They could instead be applied as global settings in
# user_variables, but are left here to illustrate that
# each container could have different storage targets.
storage_hosts:
infra1:
ip: 172.29.236.11
container_vars:
cinder_backends:
limit_container_types: cinder_volume
nfs_volume:
volume_backend_name: NFS_VOLUME1
volume_driver: cinder.volume.drivers.nfs.NfsDriver
nfs_mount_options: "rsize=65535,wsize=65535,timeo=1200,actimeo=120"
nfs_shares_config: /etc/cinder/nfs_shares
shares:
- ip: "172.29.244.15"
share: "/vol/cinder"
infra2:
ip: 172.29.236.12
container_vars:
cinder_backends:
limit_container_types: cinder_volume
nfs_volume:
volume_backend_name: NFS_VOLUME1
volume_driver: cinder.volume.drivers.nfs.NfsDriver
nfs_mount_options: "rsize=65535,wsize=65535,timeo=1200,actimeo=120"
nfs_shares_config: /etc/cinder/nfs_shares
shares:
- ip: "172.29.244.15"
share: "/vol/cinder"
infra3:
ip: 172.29.236.13
container_vars:
cinder_backends:
limit_container_types: cinder_volume
nfs_volume:
volume_backend_name: NFS_VOLUME1
volume_driver: cinder.volume.drivers.nfs.NfsDriver
nfs_mount_options: "rsize=65535,wsize=65535,timeo=1200,actimeo=120"
nfs_shares_config: /etc/cinder/nfs_shares
shares:
- ip: "172.29.244.15"
share: "/vol/cinder"


@@ -0,0 +1,313 @@
---
cidr_networks:
container: 172.29.236.0/22
tunnel: 172.29.240.0/22
storage: 172.29.244.0/22
used_ips:
- "172.29.236.1,172.29.236.50"
- "172.29.240.1,172.29.240.50"
- "172.29.244.1,172.29.244.50"
- "172.29.248.1,172.29.248.50"
global_overrides:
internal_lb_vip_address: 172.29.236.9
#
# The below domain name must resolve to an IP address
# in the CIDR specified in haproxy_keepalived_external_vip_cidr.
# If using different protocols (https/http) for the public/internal
# endpoints the two addresses must be different.
#
external_lb_vip_address: openstack.example.com
management_bridge: "br-mgmt"
provider_networks:
- network:
container_bridge: "br-mgmt"
container_type: "veth"
container_interface: "eth1"
ip_from_q: "container"
type: "raw"
group_binds:
- all_containers
- hosts
is_container_address: true
#
# The below provider network defines details related to overlay traffic,
# including the range of VXLAN VNIs to assign to project/tenant networks
# and other attributes.
#
- network:
container_bridge: "br-vxlan"
container_type: "veth"
container_interface: "eth10"
ip_from_q: "tunnel"
type: "vxlan"
range: "1:1000"
net_name: "vxlan"
group_binds:
- neutron_linuxbridge_agent
#
# The below provider network defines details related to a given provider
# network: physnet1. Details include the name of the veth interface to
# connect to the bridge when agent on_metal is False (container_interface)
# or the physical interface to connect to the bridge when agent on_metal
# is True (host_bind_override), as well as the network type. The provider
# network name (net_name) will be used to build a physical network mapping
# to a network interface using either container_interface or
# host_bind_override (when defined).
#
# The network details will be used to populate the respective network
# configuration file(s) on the members of the listed groups. In this
# example, host_bind_override specifies the bond0 interface and applies
# only to the members of the neutron_linuxbridge_agent inventory group.
#
- network:
container_bridge: "br-vlan"
container_type: "veth"
container_interface: "eth11"
host_bind_override: "bond0"
type: "vlan"
range: "101:200,301:400"
net_name: "physnet1"
group_binds:
- neutron_linuxbridge_agent
#
# The below provider network defines details related to storage traffic.
#
- network:
container_bridge: "br-storage"
container_type: "veth"
container_interface: "eth2"
ip_from_q: "storage"
type: "raw"
group_binds:
- glance_api
- cinder_api
- cinder_volume
- nova_compute
###
### Infrastructure
###
# galera, memcache, rabbitmq, utility
shared-infra_hosts:
infra1:
ip: 172.29.236.11
container_vars:
# Optional | Example setting the container_tech for a target host.
container_tech: lxc
infra2:
ip: 172.29.236.12
container_vars:
# Optional | Example setting the container_tech for a target host.
container_tech: nspawn
infra3:
ip: 172.29.236.13
# repository (apt cache, python packages, etc)
repo-infra_hosts:
infra1:
ip: 172.29.236.11
infra2:
ip: 172.29.236.12
infra3:
ip: 172.29.236.13
# load balancer
# Ideally the load balancer should not use the Infrastructure hosts.
# Dedicated hardware is best for improved performance and security.
haproxy_hosts:
infra1:
ip: 172.29.236.11
infra2:
ip: 172.29.236.12
infra3:
ip: 172.29.236.13
# rsyslog server
log_hosts:
log1:
ip: 172.29.236.14
###
### OpenStack
###
# keystone
identity_hosts:
infra1:
ip: 172.29.236.11
infra2:
ip: 172.29.236.12
infra3:
ip: 172.29.236.13
# cinder api services
storage-infra_hosts:
infra1:
ip: 172.29.236.11
infra2:
ip: 172.29.236.12
infra3:
ip: 172.29.236.13
# glance
# The settings here are repeated for each infra host.
# They could instead be applied as global settings in
# user_variables, but are left here to illustrate that
# each container could have different storage targets.
image_hosts:
infra1:
ip: 172.29.236.11
container_vars:
limit_container_types: glance
glance_nfs_client:
- server: "172.29.244.15"
remote_path: "/images"
local_path: "/var/lib/glance/images"
type: "nfs"
options: "_netdev,auto"
infra2:
ip: 172.29.236.12
container_vars:
limit_container_types: glance
glance_nfs_client:
- server: "172.29.244.15"
remote_path: "/images"
local_path: "/var/lib/glance/images"
type: "nfs"
options: "_netdev,auto"
infra3:
ip: 172.29.236.13
container_vars:
limit_container_types: glance
glance_nfs_client:
- server: "172.29.244.15"
remote_path: "/images"
local_path: "/var/lib/glance/images"
type: "nfs"
options: "_netdev,auto"
# nova api, conductor, etc services
compute-infra_hosts:
infra1:
ip: 172.29.236.11
infra2:
ip: 172.29.236.12
infra3:
ip: 172.29.236.13
# heat
orchestration_hosts:
infra1:
ip: 172.29.236.11
infra2:
ip: 172.29.236.12
infra3:
ip: 172.29.236.13
# horizon
dashboard_hosts:
infra1:
ip: 172.29.236.11
infra2:
ip: 172.29.236.12
infra3:
ip: 172.29.236.13
# neutron server, agents (L3, etc)
network_hosts:
infra1:
ip: 172.29.236.11
infra2:
ip: 172.29.236.12
infra3:
ip: 172.29.236.13
# ceilometer (telemetry data collection)
metering-infra_hosts:
infra1:
ip: 172.29.236.11
infra2:
ip: 172.29.236.12
infra3:
ip: 172.29.236.13
# aodh (telemetry alarm service)
metering-alarm_hosts:
infra1:
ip: 172.29.236.11
infra2:
ip: 172.29.236.12
infra3:
ip: 172.29.236.13
# gnocchi (telemetry metrics storage)
metrics_hosts:
infra1:
ip: 172.29.236.11
infra2:
ip: 172.29.236.12
infra3:
ip: 172.29.236.13
# nova hypervisors
compute_hosts:
compute1:
ip: 172.29.236.16
compute2:
ip: 172.29.236.17
# ceilometer compute agent (telemetry data collection)
metering-compute_hosts:
compute1:
ip: 172.29.236.16
compute2:
ip: 172.29.236.17
# cinder volume hosts (NFS-backed)
# The settings here are repeated for each infra host.
# They could instead be applied as global settings in
# user_variables, but are left here to illustrate that
# each container could have different storage targets.
storage_hosts:
infra1:
ip: 172.29.236.11
container_vars:
cinder_backends:
limit_container_types: cinder_volume
nfs_volume:
volume_backend_name: NFS_VOLUME1
volume_driver: cinder.volume.drivers.nfs.NfsDriver
nfs_mount_options: "rsize=65535,wsize=65535,timeo=1200,actimeo=120"
nfs_shares_config: /etc/cinder/nfs_shares
shares:
- ip: "172.29.244.15"
share: "/vol/cinder"
infra2:
ip: 172.29.236.12
container_vars:
cinder_backends:
limit_container_types: cinder_volume
nfs_volume:
volume_backend_name: NFS_VOLUME1
volume_driver: cinder.volume.drivers.nfs.NfsDriver
nfs_mount_options: "rsize=65535,wsize=65535,timeo=1200,actimeo=120"
nfs_shares_config: /etc/cinder/nfs_shares
shares:
- ip: "172.29.244.15"
share: "/vol/cinder"
infra3:
ip: 172.29.236.13
container_vars:
cinder_backends:
limit_container_types: cinder_volume
nfs_volume:
volume_backend_name: NFS_VOLUME1
volume_driver: cinder.volume.drivers.nfs.NfsDriver
nfs_mount_options: "rsize=65535,wsize=65535,timeo=1200,actimeo=120"
nfs_shares_config: /etc/cinder/nfs_shares
shares:
- ip: "172.29.244.15"
share: "/vol/cinder"