==============
Network Design
==============

OpenStack provides a rich networking environment, and this chapter
details the requirements and options to consider when designing your
cloud.

.. warning::

   If this is the first time you are deploying a cloud infrastructure
   in your organization, after reading this section, your first
   conversations should be with your networking team. Network usage in
   a running cloud is vastly different from traditional network
   deployments and has the potential to be disruptive at both a
   connectivity and a policy level.

For example, you must plan the number of IP addresses that you need for
both your guest instances and your management infrastructure.
Additionally, you must research and discuss cloud network connectivity
through proxy servers and firewalls.

In this chapter, we'll give some examples of network implementations to
consider and provide information about some of the network layouts that
OpenStack uses. Finally, we have some brief notes on the networking
services that are essential for stable operation.

Management Network
~~~~~~~~~~~~~~~~~~

A :term:`management network` (a separate network for use by your cloud
operators) typically consists of a separate switch and separate NICs
(network interface cards), and is a recommended option. This segregation
prevents system administration and monitoring traffic from being
disrupted by traffic generated by guests.

Consider creating other private networks for communication between
internal components of OpenStack, such as the message queue and
OpenStack Compute. Using a virtual local area network (VLAN) works well
for these scenarios because it provides a method for creating multiple
virtual networks on a physical network.
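
For example, to carve such an internal network out of a VLAN on a Linux
host, you can create a tagged subinterface with ``iproute2``. This is a
minimal sketch; the parent interface ``eth1``, VLAN ID ``100``, and
address shown are assumptions to adapt to your environment:

.. code-block:: console

   # ip link add link eth1 name eth1.100 type vlan id 100
   # ip addr add 172.16.100.10/24 dev eth1.100
   # ip link set dev eth1.100 up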

Public Addressing Options
~~~~~~~~~~~~~~~~~~~~~~~~~

There are two main types of IP addresses for guest virtual machines:
fixed IPs and floating IPs. Fixed IPs are assigned to instances on boot,
whereas floating IP addresses can change their association between
instances by action of the user. Both types of IP addresses can be
either public or private, depending on your use case.

Fixed IP addresses are required, whereas it is possible to run OpenStack
without floating IPs. One of the most common use cases for floating IPs
is to provide public IP addresses to a private cloud, where there are a
limited number of IP addresses available. Another is for a public cloud
user to have a "static" IP address that can be reassigned when an
instance is upgraded or moved.
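
As a brief sketch, a floating IP can be allocated and associated with
an instance using the ``openstack`` client. The network name
``public``, the instance name ``my-instance``, and the address shown
are assumptions for illustration:

.. code-block:: console

   $ openstack floating ip create public
   $ openstack server add floating ip my-instance 203.0.113.10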

Fixed IP addresses can be private for private clouds, or public for
public clouds. When an instance terminates, its fixed IP is lost. It is
worth noting that newer users of cloud computing may find this
ephemerality frustrating.

IP Address Planning
~~~~~~~~~~~~~~~~~~~

An OpenStack installation can potentially have many subnets (ranges of
IP addresses) and different types of services in each. An IP address
plan can assist with a shared understanding of network partition
purposes and scalability. Control services can have public and private
IP addresses, and as noted above, there are a couple of options for an
instance's public addresses.

An IP address plan might be broken down into the following sections:

Subnet router
    Packets leaving the subnet go via this address, which could be a
    dedicated router or a ``nova-network`` service.

Control services public interfaces
    Public access to ``swift-proxy``, ``nova-api``, ``glance-api``, and
    horizon comes to these addresses, which could be on one side of a
    load balancer or pointing at individual machines.

Object Storage cluster internal communications
    Traffic among object/account/container servers and between these and
    the proxy server's internal interface uses this private network.

Compute and storage communications
    If ephemeral or block storage is external to the compute node, this
    network is used.

Out-of-band remote management
    If a dedicated remote access controller chip is included in servers,
    often these are on a separate network.

In-band remote management
    Often, an extra interface (such as 1 GbE) on compute or storage nodes
    is used for system administrators or monitoring tools to access the
    host instead of going through the public interface.

Spare space for future growth
    Adding more public-facing control services or guest instance IPs
    should always be part of your plan.

For example, take a deployment that has both OpenStack Compute and
Object Storage, with private ranges 172.22.42.0/24 and 172.22.87.0/26
available. One way to segregate the space might be as follows:

.. code-block:: none

   172.22.42.0/24:
   172.22.42.1   - 172.22.42.3   - subnet routers
   172.22.42.4   - 172.22.42.20  - spare for networks
   172.22.42.21  - 172.22.42.104 - Compute node remote access controllers (inc spare)
   172.22.42.105 - 172.22.42.188 - Compute node management interfaces (inc spare)
   172.22.42.189 - 172.22.42.208 - Swift proxy remote access controllers (inc spare)
   172.22.42.209 - 172.22.42.228 - Swift proxy management interfaces (inc spare)
   172.22.42.229 - 172.22.42.252 - Swift storage servers remote access controllers (inc spare)
   172.22.42.253 - 172.22.42.254 - spare

   172.22.87.0/26:
   172.22.87.1   - 172.22.87.3   - subnet routers
   172.22.87.4   - 172.22.87.24  - Swift proxy server internal interfaces (inc spare)
   172.22.87.25  - 172.22.87.63  - Swift object server internal interfaces (inc spare)

A similar approach can be taken with public IP addresses, taking note
that large, flat ranges are preferred for use with guest instance IPs.
Take into account that with some OpenStack networking options, a public
IP address from the guest instance public range is assigned to the
``nova-compute`` host.

Network Topology
~~~~~~~~~~~~~~~~

OpenStack Compute with ``nova-network`` provides predefined network
deployment models, each with its own strengths and weaknesses. The
selection of a network manager changes your network topology, so the
choice should be made carefully. You also have a choice between the
tried-and-true legacy ``nova-network`` settings and the neutron project
for OpenStack Networking. Both offer networking for launched instances
with different implementations and requirements.

For OpenStack Networking with the neutron project, typical
configurations are documented with the idea that any setup you can
configure with real hardware you can re-create with a software-defined
equivalent. Each tenant can contain typical network elements such as
routers, and services such as :term:`DHCP <Dynamic Host Configuration
Protocol (DHCP)>`.
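
For example, a tenant network with a router uplinked to an external
network can be sketched with the ``neutron`` client. The names
(``demo-net``, ``demo-router``, ``ext-net``) and the CIDR are
assumptions for illustration:

.. code-block:: console

   $ neutron net-create demo-net
   $ neutron subnet-create --name demo-subnet demo-net 192.168.1.0/24
   $ neutron router-create demo-router
   $ neutron router-interface-add demo-router demo-subnet
   $ neutron router-gateway-set demo-router ext-net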

:ref:`table_networking_deployment` describes the networking deployment
options for the legacy ``nova-network`` models and an equivalent
neutron configuration for each.

.. _table_networking_deployment:

.. list-table:: Networking deployment options
   :widths: 10 30 30 30
   :header-rows: 1

   * - Network deployment model
     - Strengths
     - Weaknesses
     - Neutron equivalent
   * - Flat
     - Extremely simple topology. No DHCP overhead.
     - Requires file injection into the instance to configure network
       interfaces.
     - Configure a single bridge as the integration bridge (br-int) and
       connect it to a physical network interface with the Modular Layer 2
       (ML2) plug-in, which uses Open vSwitch by default.
   * - FlatDHCP
     - Relatively simple to deploy. Standard networking. Works with all guest
       operating systems.
     - Requires its own DHCP broadcast domain.
     - Configure DHCP agents and routing agents. Network Address Translation
       (NAT) performed outside of compute nodes, typically on one or more
       network nodes.
   * - VlanManager
     - Each tenant is isolated to its own VLANs.
     - More complex to set up. Requires its own DHCP broadcast domain.
       Requires many VLANs to be trunked onto a single port. Standard VLAN
       number limitation. Switches must support 802.1q VLAN tagging.
     - Isolated tenant networks implement some form of isolation of layer 2
       traffic between distinct networks. VLAN tagging is the key concept,
       where traffic is “tagged” with an ordinal identifier for the VLAN.
       Isolated network implementations may or may not include additional
       services like DHCP, NAT, and routing.
   * - FlatDHCP Multi-host with high availability (HA)
     - Networking failure is isolated to the VMs running on the affected
       hypervisor. DHCP traffic can be isolated within an individual host.
       Network traffic is distributed to the compute nodes.
     - More complex to set up. Compute nodes typically need IP addresses
       accessible by external networks. Options must be carefully configured
       for live migration to work with networking services.
     - Configure neutron with multiple DHCP and layer-3 agents. Network nodes
       are not able to fail over to each other, so the controller runs
       networking services, such as DHCP. Compute nodes run the ML2 plug-in
       with support for agents such as Open vSwitch or Linux Bridge.
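
As a rough illustration of the legacy models in the table above, a
FlatDHCP deployment is selected in ``nova.conf`` with options along
these lines. The interface and bridge names are assumptions; consult
the configuration reference for your release:

.. code-block:: ini

   [DEFAULT]
   network_manager = nova.network.manager.FlatDHCPManager
   # Interface carrying the flat guest network (assumed name)
   flat_interface = eth1
   # Bridge that instances attach to
   flat_network_bridge = br100
   # Interface with public connectivity for NAT and floating IPs
   public_interface = eth0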

Both ``nova-network`` and neutron services provide similar capabilities,
such as VLAN networking between VMs. You can also provide multiple NICs
on VMs with either service. Further discussion follows.

VLAN Configuration Within OpenStack VMs
---------------------------------------

VLAN configuration can be as simple or as complicated as desired. The
use of VLANs has the benefit of allowing each project its own subnet and
broadcast segregation from other projects. To allow OpenStack to
efficiently use VLANs, you must allocate a VLAN range (one for each
project) and turn each compute node switch port into a trunk port.

For example, if you estimate that your cloud must support a maximum of
100 projects, pick a free VLAN range that your network infrastructure is
currently not using (such as VLAN 200–299). You must configure OpenStack
with this range and also configure your switch ports to allow VLAN
traffic from that range.
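
In ``nova-network`` terms, that range maps onto VlanManager options in
``nova.conf`` along these lines. The trunked interface name is an
assumption:

.. code-block:: ini

   [DEFAULT]
   network_manager = nova.network.manager.VlanManager
   # First VLAN ID to allocate to projects (VLANs 200-299 in this example)
   vlan_start = 200
   # Physical interface that carries the trunked VLANs (assumed name)
   vlan_interface = eth1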

Multi-NIC Provisioning
----------------------

OpenStack Networking with ``neutron`` and OpenStack Compute with
``nova-network`` have the ability to assign multiple NICs to instances. For
``nova-network`` this can be done on a per-request basis, with each
additional NIC using up an entire subnet or VLAN, reducing the total
number of supported projects.
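
With neutron, attaching multiple NICs is a matter of passing several
networks at boot time. A minimal sketch, assuming two existing networks
and suitable image and flavor names:

.. code-block:: console

   $ nova boot --image cirros --flavor m1.tiny \
     --nic net-id=NET1_UUID --nic net-id=NET2_UUID multi-nic-instance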

Multi-Host and Single-Host Networking
-------------------------------------

The ``nova-network`` service has the ability to operate in a multi-host
or single-host mode. Multi-host is when each compute node runs a copy of
``nova-network`` and the instances on that compute node use the compute
node as a gateway to the Internet. The compute nodes also host the
floating IPs and security groups for instances on that node. Single-host
is when a central server (for example, the cloud controller) runs the
``nova-network`` service. All compute nodes forward traffic from the
instances to the cloud controller. The cloud controller then forwards
traffic to the Internet. The cloud controller hosts the floating IPs and
security groups for all instances on all compute nodes in the cloud.
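
Multi-host mode is reflected in ``nova.conf`` on the compute nodes. A
minimal sketch, assuming ``nova-network`` runs on every compute node:

.. code-block:: ini

   [DEFAULT]
   # Create networks as multi-host by default, so each compute node
   # runs nova-network and acts as the gateway for its own instances
   multi_host = True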

There are benefits to both modes. Single-host has the downside of a
single point of failure. If the cloud controller is not available,
instances cannot communicate on the network. This is not true with
multi-host, but multi-host requires that each compute node has a public
IP address to communicate on the Internet. If you are not able to obtain
a significant block of public IP addresses, multi-host might not be an
option.

Services for Networking
~~~~~~~~~~~~~~~~~~~~~~~

OpenStack, like any network application, relies on a number of standard
supporting services, such as NTP and DNS.

NTP
---

Time synchronization is a critical element to ensure continued operation
of OpenStack components. Correct time is necessary to avoid errors in
instance scheduling, replication of objects in the object store, and
even matching log timestamps for debugging.

All servers running OpenStack components should be able to access an
appropriate NTP server. You may decide to set up one locally or use the
public pools available from the `Network Time Protocol
project <http://www.pool.ntp.org/>`_.
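
A minimal ``/etc/ntp.conf`` pointing at the public pool might contain
little more than the following; substitute a local NTP server if you
run one:

.. code-block:: none

   server 0.pool.ntp.org iburst
   server 1.pool.ntp.org iburst
   server 2.pool.ntp.org iburst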

DNS
---

OpenStack does not currently provide DNS services, aside from the
dnsmasq daemon, which resides on ``nova-network`` hosts. You could
consider providing a dynamic DNS service to allow instances to update a
DNS entry with new IP addresses. You can also consider making a generic
forward and reverse DNS mapping for instances' IP addresses, such as
``vm-203-0-113-123.example.com``.
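
If you use BIND, such generic mappings can be generated rather than
maintained by hand. A sketch using the ``$GENERATE`` directive, with
the zone names and address range assumed for illustration:

.. code-block:: none

   ; Forward zone (example.com): vm-203-0-113-N -> 203.0.113.N
   $GENERATE 1-254 vm-203-0-113-$ A 203.0.113.$

   ; Reverse zone (113.0.203.in-addr.arpa)
   $GENERATE 1-254 $ PTR vm-203-0-113-$.example.com.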

Conclusion
~~~~~~~~~~

Armed with your IP address plan and knowledge of the topologies and
services you can use, it's now time to prepare the network for your
installation. Be sure to also check out the `OpenStack Security Guide
<http://docs.openstack.org/sec/>`_ for tips on securing your network.
We wish you a good relationship with your networking team!