[docs] Split Network Architecture page

This splits the information in the Network Architecture page
into two parts. The one part remains in the install guide and
is revised to be appropriate. The other part is moved into an
Appendix.

Change-Id: I66fa73792ee21ddbc88ec71295900493a6eeb3a0
This commit is contained in:
Jesse Pretorius 2016-09-07 12:05:53 +01:00
parent a5a5bf80b6
commit 5cc5277939
2 changed files with 131 additions and 196 deletions


@@ -0,0 +1,89 @@
.. _network-appendix:

================================
Appendix F: Container networking
================================

OpenStack-Ansible deploys LXC machine containers and uses Linux bridging
between the container interfaces and the host interfaces to ensure that
all traffic from containers flows over multiple host interfaces. This
avoids sending traffic through the default LXC bridge, which is a single
host interface (and therefore a potential bottleneck) and which is
subject to interference from iptables.

This appendix describes how the interfaces are connected and how traffic
flows.

For more details about how the OpenStack Networking service (neutron) uses
the interfaces for instance traffic, please see the
`OpenStack Networking Guide`_.

.. _OpenStack Networking Guide: http://docs.openstack.org/networking-guide/
Bonded network interfaces
~~~~~~~~~~~~~~~~~~~~~~~~~

A typical production environment uses multiple physical network interfaces
in a bonded pair for better redundancy and throughput. We recommend avoiding
the use of two ports on the same multi-port network card for the same bonded
interface, because a network card failure affects both physical
network interfaces used by the bond.
Linux bridges
~~~~~~~~~~~~~

The combination of containers and flexible deployment options requires
implementation of advanced Linux networking features, such as bridges and
namespaces.

Bridges provide layer 2 connectivity (similar to switches) among
physical, logical, and virtual network interfaces within a host. After
creating a bridge, the network interfaces are virtually plugged in to
it.

OpenStack-Ansible uses bridges to connect physical and logical network
interfaces on the host to virtual network interfaces within containers.

Namespaces provide logically separate layer 3 environments (similar to
routers) within a host. Namespaces use virtual interfaces to connect
with other namespaces, including the host namespace. These interfaces,
often called ``veth`` pairs, are virtually plugged in between
namespaces similar to patch cables connecting physical devices such as
switches and routers.

Each container has a namespace that connects to the host namespace with
one or more ``veth`` pairs. Unless specified, the system generates
random names for ``veth`` pairs.
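The plumbing described above can be reproduced by hand with the
``iproute2`` tools. The following is a minimal illustrative sketch only;
the names ``br-demo``, ``veth-host``, ``veth-cont``, and ``demo-ns`` are
made up for demonstration, and OpenStack-Ansible generates its own names:

.. code-block:: console

    # Create a bridge and a veth pair on the host
    # ip link add br-demo type bridge
    # ip link add veth-host type veth peer name veth-cont

    # Plug one end of the pair into the bridge
    # ip link set veth-host master br-demo
    # ip link set veth-host up

    # Move the other end into a separate network namespace
    # ip netns add demo-ns
    # ip link set veth-cont netns demo-ns
    # ip -n demo-ns addr add 192.0.2.10/24 dev veth-cont
    # ip -n demo-ns link set veth-cont up

At this point, traffic arriving on ``br-demo`` can reach the namespace
through the ``veth`` pair, which is the same pattern used between host
bridges and container interfaces.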
The following image demonstrates how the container network interfaces are
connected to the host's bridges and to the host's physical network interfaces:

.. image:: figures/networkcomponents.png

Network diagrams
~~~~~~~~~~~~~~~~

The following image shows how all of the interfaces and bridges interconnect
to provide network connectivity to the OpenStack deployment:

.. image:: figures/networkarch-container-external.png

OpenStack-Ansible deploys the Compute service on the physical host rather than
in a container. The following image shows how the host uses bridges for
network connectivity:

.. image:: figures/networkarch-bare-external.png

The following image shows how the neutron agents work with the bridges
``br-vlan`` and ``br-vxlan``. Neutron is configured to use a DHCP agent, an L3
agent, and a Linux Bridge agent within a ``networking-agents`` container. The
image shows how the DHCP agents provide information (IP addresses and DNS
servers) to the instances, and how routing works in the diagram:

.. image:: figures/networking-neutronagents.png

The following image shows how virtual machines connect to the ``br-vlan`` and
``br-vxlan`` bridges and send traffic to the network outside the host:

.. image:: figures/networking-compute.png


@@ -4,238 +4,84 @@
Network architecture
====================

For a production environment, some components are mandatory, such as the
bridges described below. We recommend other components such as a bonded
network interface.

.. important::

   Follow the reference design as closely as possible.

Although Ansible automates most deployment operations, networking on
target hosts requires manual configuration because it varies from one
use case to another.
The following section describes the network configuration that must be
implemented on all target hosts.

A deeper explanation of how the networking works can be found in
:ref:`network-appendix`.
The ``bond0`` interface carries traffic from the containers
running your OpenStack infrastructure. Configure a static IP address on the
``bond0`` interface from your management network.

The ``bond1`` interface carries traffic from your virtual machines.
Do not configure a static IP address on this interface, because neutron uses
this bond to handle VLAN and VXLAN networks for virtual machines.

Additional bridge networks are required for OpenStack-Ansible. These bridges
attach to the two bonded network interfaces.
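As an illustration, a minimal Ubuntu ``/etc/network/interfaces`` fragment
for this layout might look as follows. This is a sketch only: the physical
interface names (``eno1`` through ``eno4``), the VLAN ID, and the addresses
are assumptions for demonstration, not values prescribed by
OpenStack-Ansible:

.. code-block:: text

    # Bonded pair for container/infrastructure traffic
    auto bond0
    iface bond0 inet manual
        bond-slaves eno1 eno2
        bond-mode active-backup
        bond-miimon 100

    # VLAN subinterface of bond0 carrying the management network
    auto bond0.10
    iface bond0.10 inet manual
        vlan-raw-device bond0

    # Management bridge with the host's static management IP
    auto br-mgmt
    iface br-mgmt inet static
        bridge_ports bond0.10
        address 172.29.236.101
        netmask 255.255.252.0

    # Bonded pair for VM traffic -- no IP address; neutron uses it
    auto bond1
    iface bond1 inet manual
        bond-slaves eno3 eno4
        bond-mode active-backup
        bond-miimon 100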
Host network bridges
~~~~~~~~~~~~~~~~~~~~

OpenStack-Ansible uses bridges to connect physical and logical network
interfaces on the host to virtual network interfaces within containers.

Target hosts are configured with the following network bridges:
* LXC internal ``lxcbr0``:

  * This bridge is **required**, but OpenStack-Ansible configures it
    automatically.

  * Provides external (typically internet) connectivity to containers.

  * This bridge does not directly attach to any physical or logical
    interfaces on the host because iptables handles connectivity. It
    attaches to ``eth0`` in each container, but the container network
    interface it attaches to is configurable in
    ``openstack_user_config.yml`` in the ``provider_networks``
    dictionary.

* Container management ``br-mgmt``:

  * This bridge is **required**.

  * Provides management of and communication between the infrastructure
    and OpenStack services.

  * Attaches to a physical or logical interface, typically a ``bond0`` VLAN
    subinterface. Also attaches to ``eth1`` in each container. The container
    network interface it attaches to is configurable in
    ``openstack_user_config.yml``.

* Storage ``br-storage``:

  * This bridge is **optional**, but recommended for production
    environments.

  * Provides segregated access to Block Storage devices between
    OpenStack services and Block Storage devices.

  * Attaches to a physical or logical interface, typically a ``bond0`` VLAN
    subinterface. Also attaches to ``eth2`` in each associated container.
    The container network interface it attaches to is configurable in
    ``openstack_user_config.yml``.

* OpenStack Networking tunnel ``br-vxlan``:

  * This bridge is **required** if the environment is configured to allow
    projects to create virtual networks.

  * Provides the interface for virtual (VXLAN) tunnel networks.

  * Attaches to a physical or logical interface, typically a ``bond1`` VLAN
    subinterface. Also attaches to ``eth10`` in each associated container.
    The container network interface it attaches to is configurable in
    ``openstack_user_config.yml``.

* OpenStack Networking provider ``br-vlan``:

  * This bridge is **required**.

  * Provides infrastructure for VLAN tagged or flat (no VLAN tag) networks.

  * Attaches to a physical or logical interface, typically ``bond1``.
    Attaches to ``eth11`` for vlan type networks in each associated
    container. It is not assigned an IP address because it only handles
    layer 2 connectivity. The container network interface it attaches to is
    configurable in ``openstack_user_config.yml``.

  * This interface supports flat networks with additional
    bridge configuration. More details are available here:
    :ref:`network_configuration`.
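For reference, a ``provider_networks`` entry in
``openstack_user_config.yml`` that plugs a container interface into one of
the bridges above can be sketched as follows. This fragment is illustrative
and abridged; consult the ``openstack_user_config.yml.example`` file shipped
with OpenStack-Ansible for the authoritative set of keys, and treat the
queue names and VXLAN range shown here as assumptions:

.. code-block:: yaml

    global_overrides:
      provider_networks:
        # Plug eth1 in each container into br-mgmt on the host
        - network:
            container_bridge: "br-mgmt"
            container_type: "veth"
            container_interface: "eth1"
            ip_from_q: "container"
            type: "raw"
            group_binds:
              - all_containers
              - hosts
            is_container_address: true
        # Plug eth10 into br-vxlan for project tunnel networks
        - network:
            container_bridge: "br-vxlan"
            container_type: "veth"
            container_interface: "eth10"
            ip_from_q: "tunnel"
            type: "vxlan"
            range: "1:1000"
            net_name: "vxlan"
            group_binds:
              - neutron_linuxbridge_agent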
Network ranges
~~~~~~~~~~~~~~

.. TODO Edit this for production and test environment?

In this guide, the following IP addresses and hostnames are
used when installing OpenStack-Ansible.

+-----------------------+-----------------+
| Network | IP Range |
+=======================+=================+
| Management Network | 172.29.236.0/22 |
+-----------------------+-----------------+
| Tunnel (VXLAN) Network| 172.29.240.0/22 |
+-----------------------+-----------------+
| Storage Network | 172.29.244.0/22 |
+-----------------------+-----------------+
IP assignments
~~~~~~~~~~~~~~

+------------------+----------------+-------------------+----------------+
| Host name | Management IP | Tunnel (VxLAN) IP | Storage IP |
+==================+================+===================+================+
| infra1 | 172.29.236.101 | 172.29.240.101 | 172.29.244.101 |
+------------------+----------------+-------------------+----------------+
| infra2 | 172.29.236.102 | 172.29.240.102 | 172.29.244.102 |
+------------------+----------------+-------------------+----------------+
| infra3 | 172.29.236.103 | 172.29.240.103 | 172.29.244.103 |
+------------------+----------------+-------------------+----------------+
| | | | |
+------------------+----------------+-------------------+----------------+
| net1 | 172.29.236.111 | 172.29.240.111 | |
+------------------+----------------+-------------------+----------------+
| net2 | 172.29.236.112 | 172.29.240.112 | |
+------------------+----------------+-------------------+----------------+
| net3 | 172.29.236.113 | 172.29.240.113 | |
+------------------+----------------+-------------------+----------------+
| | | | |
+------------------+----------------+-------------------+----------------+
| compute1 | 172.29.236.121 | 172.29.240.121 | 172.29.244.121 |
+------------------+----------------+-------------------+----------------+
| compute2 | 172.29.236.122 | 172.29.240.122 | 172.29.244.122 |
+------------------+----------------+-------------------+----------------+
| compute3 | 172.29.236.123 | 172.29.240.123 | 172.29.244.123 |
+------------------+----------------+-------------------+----------------+
| | | | |
+------------------+----------------+-------------------+----------------+
| lvm-storage1 | 172.29.236.131 | | 172.29.244.131 |
+------------------+----------------+-------------------+----------------+
| | | | |
+------------------+----------------+-------------------+----------------+
| nfs-storage1 | 172.29.236.141 | | 172.29.244.141 |
+------------------+----------------+-------------------+----------------+
| | | | |
+------------------+----------------+-------------------+----------------+
| ceph-mon1 | 172.29.236.151 | | 172.29.244.151 |
+------------------+----------------+-------------------+----------------+
| ceph-mon2 | 172.29.236.152 | | 172.29.244.152 |
+------------------+----------------+-------------------+----------------+
| ceph-mon3 | 172.29.236.153 | | 172.29.244.153 |
+------------------+----------------+-------------------+----------------+
| | | | |
+------------------+----------------+-------------------+----------------+
| swift1 | 172.29.236.161 | | 172.29.244.161 |
+------------------+----------------+-------------------+----------------+
| swift2 | 172.29.236.162 | | 172.29.244.162 |
+------------------+----------------+-------------------+----------------+
| swift3 | 172.29.236.163 | | 172.29.244.163 |
+------------------+----------------+-------------------+----------------+
| | | | |
+------------------+----------------+-------------------+----------------+
| log1 | 172.29.236.171 | | |
+------------------+----------------+-------------------+----------------+