Appendix E: Container networking
OpenStack-Ansible deploys LXC machine containers and uses Linux bridging between the container interfaces and the host interfaces to ensure that all traffic from containers flows over multiple host interfaces. This avoids sending traffic through the default LXC bridge, which is a single host interface (and therefore a potential bottleneck) and is subject to interference from iptables rules.
This appendix describes how the interfaces are connected and how traffic flows.
For more details about how the OpenStack Networking service (neutron) uses the interfaces for instance traffic, please see the OpenStack Networking Guide.
Bonded network interfaces
A typical production environment uses multiple physical network interfaces in a bonded pair for better redundancy and throughput. We recommend avoiding the use of two ports on the same multi-port network card for the same bonded interface. This is because a network card failure affects both physical network interfaces used by the bond.
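As an illustrative fragment (assuming Ubuntu's classic ifupdown/ifenslave tooling; the interface names are hypothetical and deliberately drawn from two different physical cards), a bonded pair might be configured like this:

```text
# /etc/network/interfaces fragment -- names are illustrative
auto bond0
iface bond0 inet manual
    bond-slaves enp1s0f0 enp2s0f0   # one port from each physical NIC
    bond-mode active-backup         # failover; 802.3ad (LACP) is also common
    bond-miimon 100                 # check link state every 100 ms
```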
Linux bridges
The combination of containers and flexible deployment options requires implementation of advanced Linux networking features, such as bridges and namespaces.
Bridges provide layer 2 connectivity (similar to switches) among physical, logical, and virtual network interfaces within a host. After a bridge is created, the network interfaces are virtually plugged into it.
OpenStack-Ansible uses bridges to connect physical and logical network interfaces on the host to virtual network interfaces within containers.
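As a minimal sketch of this plumbing (not the exact commands OpenStack-Ansible runs), iproute2 can create a bridge and virtually plug a veth interface into it. The names br-demo, veth-host, and veth-ctn are hypothetical; the commands need CAP_NET_ADMIN, so they are wrapped in a throwaway unprivileged network namespace here:

```shell
# Create a bridge and a veth pair, then plug one veth end into the
# bridge -- the same pattern used between hosts and containers.
# All names here are illustrative.
unshare --user --map-root-user --net sh -ec '
  ip link add br-demo type bridge
  ip link add veth-host type veth peer name veth-ctn
  ip link set veth-host master br-demo   # plug the host end into the bridge
  ip link set br-demo up
  ip link set veth-host up
  bridge link show                       # veth-host now lists "master br-demo"
'
```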
Namespaces provide logically separate layer 3 environments (similar to routers) within a host. Namespaces use virtual interfaces to connect with other namespaces, including the host namespace. These interfaces, often called veth pairs, are virtually plugged in between namespaces, similar to patch cables connecting physical devices such as switches and routers.
Each container has a namespace that connects to the host namespace with one or more veth pairs. Unless specified otherwise, the system generates random names for veth pairs.
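A hedged sketch of that wiring, using ip netns directly (the namespace and interface names are hypothetical, and the commands require root; OpenStack-Ansible and LXC manage this automatically):

```shell
# Connect a new namespace to the host namespace with a veth pair,
# like a patch cable between two devices. Requires CAP_NET_ADMIN.
ip netns add ctn-demo                               # the "container" namespace
ip link add veth-host type veth peer name veth-ctn
ip link set veth-ctn netns ctn-demo                 # move one end inside
ip addr add 10.0.3.1/24 dev veth-host
ip link set veth-host up
ip netns exec ctn-demo ip addr add 10.0.3.2/24 dev veth-ctn
ip netns exec ctn-demo ip link set veth-ctn up
ip netns exec ctn-demo ping -c1 10.0.3.1            # traffic crosses the veth pair
```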
The following image demonstrates how the container network interfaces are connected to the host's bridges and to the host's physical network interfaces:
Network diagrams
The following image shows how all of the interfaces and bridges interconnect to provide network connectivity to the OpenStack deployment:
OpenStack-Ansible deploys the Compute service on the physical host rather than in a container. The following image shows how to use bridges for network connectivity:
The following image shows how the neutron agents work with the br-vlan and br-vxlan bridges. Neutron is configured to use a DHCP agent, L3 agent, and Linux Bridge agent within a networking-agents container. The image shows how the DHCP agents provide information (IP addresses and DNS servers) to the instances, and how routing works in the environment:
The following image shows how virtual machines connect to the br-vlan and br-vxlan bridges and send traffic to the network outside the host: