diff --git a/doc/networking-guide/source/config-bgp-dynamic-routing.rst b/doc/networking-guide/source/config-bgp-dynamic-routing.rst index 0839b8414a..8c8da66e8a 100644 --- a/doc/networking-guide/source/config-bgp-dynamic-routing.rst +++ b/doc/networking-guide/source/config-bgp-dynamic-routing.rst @@ -64,7 +64,7 @@ The example configuration involves the following components: The example configuration assumes sufficient knowledge about the Networking service, routing, and BGP. For basic deployment of the Networking service, consult one of the - :ref:`deployment-scenarios`. For more information on BGP, see + :ref:`deploy`. For more information on BGP, see `RFC 4271 `_. Controller node diff --git a/doc/networking-guide/source/config-dvr-ha-snat.rst b/doc/networking-guide/source/config-dvr-ha-snat.rst index a0296003f5..d90659ab15 100644 --- a/doc/networking-guide/source/config-dvr-ha-snat.rst +++ b/doc/networking-guide/source/config-dvr-ha-snat.rst @@ -1,20 +1,20 @@ .. _config-dvr-snat-ha-ovs: -============================================ -Distributed Virtual Routing with VRRP (L3HA) -============================================ +===================================== +Distributed Virtual Routing with VRRP +===================================== -:ref:`Distributed Virtual Routing ` supports augmentation -using :ref:`VRRP (L3HA) `. Using this configuration, +:ref:`deploy-ovs-ha-dvr` supports augmentation +using Virtual Router Redundancy Protocol (VRRP). Using this configuration, virtual routers support both the ``--distributed`` and ``--ha`` options. Similar to legacy HA routers, DVR/SNAT HA routers provide a quick fail over of the SNAT service to a backup DVR/SNAT router on an l3-agent running on a different node. -SNAT high availability is implemented in a manner similar to -:ref:`scenario-l3ha-ovs`, where ``keepalived`` uses the Virtual Router -Redundancy Protocol (VRRP) to provide a quick fail over of SNAT services. +SNAT high availability is implemented in a manner similar to the +:ref:`deploy-lb-ha-vrrp` and :ref:`deploy-ovs-ha-vrrp` examples where +``keepalived`` uses VRRP to provide quick failover of SNAT services. During normal operation, the master router periodically transmits *heartbeat* packets over a hidden project network that connects all HA routers for a @@ -54,8 +54,6 @@ Controller node configuration max_l3_agents_per_router = 3 min_l3_agents_per_router = 2 - - When the ``router_distributed = True`` flag is configured, routers created by all users are distributed. Without it, only privileged users can create distributed routers by using :option:`--distributed True`. diff --git a/doc/networking-guide/source/config-macvtap.rst b/doc/networking-guide/source/config-macvtap.rst new file mode 100644 index 0000000000..63cd2473bf --- /dev/null +++ b/doc/networking-guide/source/config-macvtap.rst @@ -0,0 +1,178 @@ +.. _config-macvtap: + +======================== +Macvtap mechanism driver +======================== + +The Macvtap mechanism driver for the ML2 plug-in generally increases +network performance of instances. + +Consider the following attributes of this mechanism driver to determine +practicality in your environment: + +* Supports only instance ports. Ports for DHCP and layer-3 (routing) + services must use another mechanism driver such as Linux bridge or + Open vSwitch (OVS). + +* Supports only untagged (flat) and tagged (VLAN) networks. + +* Lacks support for security groups including basic (sanity) and + anti-spoofing rules. 
+
+* Lacks support for layer-3 high-availability mechanisms such as
+  Virtual Router Redundancy Protocol (VRRP) and Distributed Virtual
+  Routing (DVR).
+
+* Only compute resources can be attached via macvtap. Attaching other
+  resources, such as DHCP and routing services, is not supported. Therefore,
+  run either OVS or Linux bridge in VLAN or flat mode on the controller node.
+
+* Instance migration requires the same values for the
+  ``physical_interface_mappings`` configuration option on each compute node.
+  For more information, see
+  ``_.
+
+Prerequisites
+~~~~~~~~~~~~~
+
+You can add this mechanism driver to an existing environment using either
+the Linux bridge or OVS mechanism drivers with only provider networks or
+provider and self-service networks. You can change the configuration of
+existing compute nodes or add compute nodes with the Macvtap mechanism
+driver. The example configuration assumes addition of compute nodes with
+the Macvtap mechanism driver to the :ref:`deploy-lb-selfservice` or
+:ref:`deploy-ovs-selfservice` deployment examples.
+
+Add one or more compute nodes with the following components:
+
+* Three network interfaces: management, provider, and overlay.
+* OpenStack Networking Macvtap layer-2 agent and any dependencies.
+
+.. note::
+
+   To support integration with the deployment examples, this content
+   configures the Macvtap mechanism driver to use the overlay network
+   for untagged (flat) or tagged (VLAN) networks in addition to overlay
+   networks such as VXLAN. Your physical network infrastructure
+   must support VLAN (802.1q) tagging on the overlay network.
+
+Architecture
+~~~~~~~~~~~~
+
+The Macvtap mechanism driver only applies to compute nodes. Otherwise,
+the environment resembles the prerequisite deployment example.
+
+.. image:: figures/config-macvtap-compute1.png
+   :alt: Macvtap mechanism driver - compute node components
+
+.. image:: figures/config-macvtap-compute2.png
+   :alt: Macvtap mechanism driver - compute node connectivity
+
+Example configuration
+~~~~~~~~~~~~~~~~~~~~~
+
+Use the following example configuration as a template to add support for
+the Macvtap mechanism driver to an existing operational environment.
+
+Controller node
+---------------
+
+#. In the ``ml2_conf.ini`` file:
+
+   * Add ``macvtap`` to mechanism drivers.
+
+     .. code-block:: ini
+
+        [ml2]
+        mechanism_drivers = macvtap
+
+   * Configure network mappings.
+
+     .. code-block:: ini
+
+        [ml2_type_flat]
+        flat_networks = provider,macvtap
+
+        [ml2_type_vlan]
+        network_vlan_ranges = provider,macvtap:VLAN_ID_START:VLAN_ID_END
+
+     .. note::
+
+        Use of the ``macvtap`` label is arbitrary. Only the self-service
+        deployment examples require VLAN ID ranges. Replace ``VLAN_ID_START``
+        and ``VLAN_ID_END`` with appropriate numerical values.
+
+#. Restart the following services:
+
+   * Server
+
+Network nodes
+-------------
+
+No changes.
+
+Compute nodes
+-------------
+
+#. Install the Networking service Macvtap layer-2 agent.
+
+#. In the ``neutron.conf`` file, configure common options:
+
+   .. include:: shared/deploy-config-neutron-common.txt
+
+#. In the ``macvtap_agent.ini`` file, configure the layer-2 agent.
+
+   .. code-block:: ini
+
+      [macvtap]
+      physical_interface_mappings = macvtap:MACVTAP_INTERFACE
+
+      [securitygroup]
+      firewall_driver = noop
+
+   Replace ``MACVTAP_INTERFACE`` with the name of the underlying
+   interface that handles Macvtap mechanism driver interfaces.
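+
+   As a purely hypothetical illustration, a compute node that carries
+   instance traffic on a physical interface named ``eth1`` (an assumed
+   name, not a requirement of this driver) would use a mapping such as:
+
+   .. code-block:: ini
+
+      [macvtap]
+      physical_interface_mappings = macvtap:eth1
+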
+ If using a prerequisite deployment example, replace + ``MACVTAP_INTERFACE`` with the name of the underlying interface + that handles overlay networks. For example, ``eth1``. + +#. Start the following services: + + * Macvtap agent + +Verify service operation +------------------------ + +#. Source the administrative project credentials. +#. Verify presence and operation of the agents: + + .. code-block:: console + + $ neutron agent-list + +--------------------------------------+---------------+----------+-------------------+-------+----------------+---------------------------+ + | id | agent_type | host | availability_zone | alive | admin_state_up | binary | + +--------------------------------------+--------------------+----------+-------------------+-------+----------------+---------------------------+ + | 7af923a4-8be6-11e6-afc3-3762f3c3cf6e | Macvtap agent | compute1 | | :-) | True | neutron-macvtap-agent | + | 80af6934-8be6-11e6-a046-7b842f93bb23 | Macvtap agent | compute2 | | :-) | True | neutron-macvtap-agent | + +--------------------------------------+---------------+----------+-------------------+-------+----------------+---------------------------+ + +Create initial networks +----------------------- + +This mechanism driver simply changes the virtual network interface driver +for instances. Thus, you can reference the ``Create initial networks`` +content for the prerequisite deployment example. + +Verify network operation +------------------------ + +This mechanism driver simply changes the virtual network interface driver +for instances. Thus, you can reference the ``Verify network operation`` +content for the prerequisite deployment example. + +Network traffic flow +~~~~~~~~~~~~~~~~~~~~ + +This mechanism driver simply removes the Linux bridge handling security +groups on the compute nodes. Thus, you can reference the network traffic +flow scenarios for the prerequisite deployment example. diff --git a/doc/networking-guide/source/config.rst b/doc/networking-guide/source/config.rst index a67363f2c8..fe059264fa 100644 --- a/doc/networking-guide/source/config.rst +++ b/doc/networking-guide/source/config.rst @@ -19,8 +19,9 @@ Configuration config-ipam config-ipv6 config-lbaas + config-macvtap config-mtu - config-ovs-dpdk.rst + config-ovs-dpdk config-ovsfwdriver config-qos config-rbac diff --git a/doc/networking-guide/source/deploy-lb-ha-vrrp.rst b/doc/networking-guide/source/deploy-lb-ha-vrrp.rst new file mode 100644 index 0000000000..54de56f5d4 --- /dev/null +++ b/doc/networking-guide/source/deploy-lb-ha-vrrp.rst @@ -0,0 +1,173 @@ +.. _deploy-lb-ha-vrrp: + +========================================== +Linux bridge: High availability using VRRP +========================================== + +.. include:: shared/deploy-ha-vrrp.txt + +.. warning:: + + This high-availability mechanism is not compatible with the layer-2 + population mechanism. You must disable layer-2 population in the + ``linuxbridge_agent.ini`` file and restart the Linux bridge agent + on all existing network and compute nodes prior to deploying the example + configuration. + +Prerequisites +~~~~~~~~~~~~~ + +Add one network node with the following components: + +* Three network interfaces: management, provider, and overlay. +* OpenStack Networking layer-2 agent, layer-3 agent, and any + dependencies. + +.. note:: + + You can keep the DHCP and metadata agents on each compute node or + move them to the network nodes. + +Architecture +~~~~~~~~~~~~ + +.. 
image:: figures/deploy-lb-ha-vrrp-overview.png + :alt: High-availability using Linux bridge with VRRP - overview + +The following figure shows components and connectivity for one self-service +network and one untagged (flat) network. The master router resides on network +node 1. In this particular case, the instance resides on the same compute +node as the DHCP agent for the network. If the DHCP agent resides on another +compute node, the latter only contains a DHCP namespace and Linux bridge +with a port on the overlay physical network interface. + +.. image:: figures/deploy-lb-ha-vrrp-compconn1.png + :alt: High-availability using Linux bridge with VRRP - components and connectivity - one network + +Example configuration +~~~~~~~~~~~~~~~~~~~~~ + +Use the following example configuration as a template to add support for +high-availability using VRRP to an existing operational environment that +supports self-service networks. + +Controller node +--------------- + +#. In the ``neutron.conf`` file: + + * Enable VRRP. + + .. code-block:: ini + + [DEFAULT] + l3_ha = True + +#. Restart the following services: + + * Server + +Network node 1 +-------------- + +No changes. + +Network node 2 +-------------- + +#. Install the Networking service Linux bridge layer-2 agent and layer-3 + agent. + +#. In the ``neutron.conf`` file, configure common options: + + .. include:: shared/deploy-config-neutron-common.txt + +#. In the ``linuxbridge_agent.ini`` file, configure the layer-2 agent. + + .. code-block:: ini + + [linux_bridge] + physical_interface_mappings = provider:PROVIDER_INTERFACE + + [vxlan] + enable_vxlan = True + local_ip = OVERLAY_INTERFACE_IP_ADDRESS + + [securitygroup] + firewall_driver = iptables + + Replace ``PROVIDER_INTERFACE`` with the name of the underlying interface + that handles provider networks. For example, ``eth1``. + + Replace ``OVERLAY_INTERFACE_IP_ADDRESS`` with the IP address of the + interface that handles VXLAN overlays for self-service networks. + +#. In the ``l3_agent.ini`` file, configure the layer-3 agent. + + .. code-block:: ini + + [DEFAULT] + interface_driver = linuxbridge + external_network_bridge = + + .. note:: + + The ``external_network_bridge`` option intentionally contains + no value. + +#. Start the following services: + + * Linux bridge agent + * Layer-3 agent + +Compute nodes +------------- + +No changes. + +Verify service operation +------------------------ + +#. Source the administrative project credentials. +#. Verify presence and operation of the agents. + + .. 
code-block:: console + + $ neutron agent-list + +--------------------------------------+--------------------+----------+-------------------+-------+----------------+---------------------------+ + | id | agent_type | host | availability_zone | alive | admin_state_up | binary | + +--------------------------------------+--------------------+----------+-------------------+-------+----------------+---------------------------+ + | 09de6af6-c5f1-4548-8b09-18801f068c57 | Linux bridge agent | compute2 | | :-) | True | neutron-linuxbridge-agent | + | 188945d1-9e70-4803-a276-df924e0788a4 | Linux bridge agent | compute1 | | :-) | True | neutron-linuxbridge-agent | + | e76c440d-d5f6-4316-a674-d689630b629e | DHCP agent | compute1 | nova | :-) | True | neutron-dhcp-agent | + | e67367de-6657-11e6-86a4-931cd04404bb | DHCP agent | compute2 | nova | :-) | True | neutron-dhcp-agent | + | e8174cae-6657-11e6-89f0-534ac6d0cb5c | Metadata agent | compute1 | | :-) | True | neutron-metadata-agent | + | ece49ec6-6657-11e6-bafb-c7560f19197d | Metadata agent | compute2 | | :-) | True | neutron-metadata-agent | + | 598f6357-4331-4da5-a420-0f5be000bec9 | L3 agent | network1 | nova | :-) | True | neutron-l3-agent | + | f4734e0f-bcd5-4922-a19d-e31d56b0a7ae | Linux bridge agent | network1 | | :-) | True | neutron-linuxbridge-agent | + | 670e5805-340b-4182-9825-fa8319c99f23 | Linux bridge agent | network2 | | :-) | True | neutron-linuxbridge-agent | + | 96224e89-7c15-42e9-89c4-8caac7abdd54 | L3 agent | network2 | nova | :-) | True | neutron-l3-agent | + +--------------------------------------+--------------------+----------+-------------------+-------+----------------+---------------------------+ + +Create initial networks +----------------------- + +.. include:: shared/deploy-ha-vrrp-initialnetworks.txt + +Verify network operation +------------------------ + +.. include:: shared/deploy-ha-vrrp-verifynetworkoperation.txt + +Verify failover operation +------------------------- + +.. include:: shared/deploy-ha-vrrp-verifyfailoveroperation.txt + +Network traffic flow +~~~~~~~~~~~~~~~~~~~~ + +This high-availability mechanism simply augments :ref:`deploy-lb-selfservice` +with failover of layer-3 services to another router if the master router +fails. Thus, you can reference :ref:`Self-service network traffic flow +` for normal operation. diff --git a/doc/networking-guide/source/deploy-lb-provider.rst b/doc/networking-guide/source/deploy-lb-provider.rst new file mode 100644 index 0000000000..b963c5486f --- /dev/null +++ b/doc/networking-guide/source/deploy-lb-provider.rst @@ -0,0 +1,363 @@ +.. _deploy-lb-provider: + +=============================== +Linux bridge: Provider networks +=============================== + +The provider networks architecture example provides layer-2 connectivity +between instances and the physical network infrastructure using VLAN +(802.1q) tagging. It supports one untagged (flat) network and and up to +4095 tagged (VLAN) networks. The actual quantity of VLAN networks depends +on the physical network infrastructure. For more information on provider +networks, see :ref:`intro-os-networking-provider`. + +Prerequisites +~~~~~~~~~~~~~ + +One controller node with the following components: + +* Two network interfaces: management and provider. +* OpenStack Networking server service and ML2 plug-in. + +Two compute nodes with the following components: + +* Two network interfaces: management and provider. +* OpenStack Networking Linux bridge layer-2 agent, DHCP agent, metadata agent, + and any dependencies. + +.. 
note:: + + Larger deployments typically deploy the DHCP and metadata agents on a + subset of compute nodes to increase performance and redundancy. However, + too many agents can overwhelm the message bus. Also, to further simplify + any deployment, you can omit the metadata agent and use a configuration + drive to provide metadata to instances. + +Architecture +~~~~~~~~~~~~ + +.. image:: figures/deploy-lb-provider-overview.png + :alt: Provider networks using Linux bridge - overview + +The following figure shows components and connectivity for one untagged +(flat) network. In this particular case, the instance resides on the +same compute node as the DHCP agent for the network. If the DHCP agent +resides on another compute node, the latter only contains a DHCP namespace +and Linux bridge with a port on the provider physical network interface. + +.. image:: figures/deploy-lb-provider-compconn1.png + :alt: Provider networks using Linux bridge - components and connectivity - one network + +The following figure describes virtual connectivity among components for +two tagged (VLAN) networks. Essentially, each network uses a separate +bridge that contains a port on the VLAN sub-interface on the provider +physical network interface. Similar to the single untagged network case, +the DHCP agent may reside on a different compute node. + +.. image:: figures/deploy-lb-provider-compconn2.png + :alt: Provider networks using Linux bridge - components and connectivity - multiple networks + +.. note:: + + These figures omit the controller node because it does not handle instance + network traffic. + +Example configuration +~~~~~~~~~~~~~~~~~~~~~ + +Use the following example configuration as a template to deploy provider +networks in your environment. + +Controller node +--------------- + +#. Install the Networking service components that provides the + ``neutron-server`` service and ML2 plug-in. + +#. In the ``neutron.conf`` file: + + * Configure common options: + + .. include:: shared/deploy-config-neutron-common.txt + + * Disable service plug-ins because provider networks do not require + any. However, this breaks portions of the dashboard that manage + the Networking service. See the + `Installation Guide `__ for more + information. + + .. code-block:: ini + + [DEFAULT] + service_plugins = + + * Enable two DHCP agents per network so both compute nodes can + provide DHCP service provider networks. + + .. code-block:: ini + + [DEFAULT] + dhcp_agents_per_network = 2 + + * If necessary, :ref:`configure MTU `. + +#. In the ``ml2_conf.ini`` file: + + * Configure drivers and network types: + + .. code-block:: ini + + [ml2] + type_drivers = flat,vlan + tenant_network_types = + mechanism_drivers = linuxbridge + extension_drivers = port_security + + * Configure network mappings: + + .. code-block:: ini + + [ml2_type_flat] + flat_networks = provider + + [ml2_type_vlan] + network_vlan_ranges = provider + + .. note:: + + The ``tenant_network_types`` option contains no value because the + architecture does not support self-service networks. + + .. note:: + + The ``provider`` value in the ``network_vlan_ranges`` option lacks VLAN + ID ranges to support use of arbitrary VLAN IDs. + + * Configure the security group driver: + + .. code-block:: ini + + [securitygroup] + firewall_driver = iptables + +#. Populate the database. + + .. code-block:: console + + # su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \ + --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron + +#. 
Start the following services: + + * Server + +Compute nodes +------------- + +#. Install the Networking service Linux bridge layer-2 agent. + +#. In the ``neutron.conf`` file, configure common options: + + .. include:: shared/deploy-config-neutron-common.txt + +#. In the ``linuxbridge_agent.ini`` file, configure the Linux bridge agent: + + .. code-block:: ini + + [linux_bridge] + physical_interface_mappings = provider:PROVIDER_INTERFACE + + [vxlan] + enable_vxlan = False + + [securitygroup] + firewall_driver = iptables + + Replace ``PROVIDER_INTERFACE`` with the name of the underlying interface + that handles provider networks. For example, ``eth1``. + +#. In the ``dhcp_agent.ini`` file, configure the DHCP agent: + + .. code-block:: ini + + [DEFAULT] + interface_driver = linuxbridge + enable_isolated_metadata = True + +#. In the ``metadata_agent.ini`` file, configure the metadata agent: + + .. code-block:: ini + + [DEFAULT] + nova_metadata_ip = controller + metadata_proxy_shared_secret = METADATA_SECRET + + The value of ``METADATA_SECRET`` must match the value of the same option + in the ``[neutron]`` section of the ``nova.conf`` file. + +#. Start the following services: + + * Linux bridge agent + * DHCP agent + * Metadata agent + +Verify service operation +------------------------ + +#. Source the administrative project credentials. +#. Verify presence and operation of the agents: + + .. code-block:: console + + $ neutron agent-list + +--------------------------------------+--------------------+----------+-------------------+-------+----------------+---------------------------+ + | id | agent_type | host | availability_zone | alive | admin_state_up | binary | + +--------------------------------------+--------------------+----------+-------------------+-------+----------------+---------------------------+ + | 09de6af6-c5f1-4548-8b09-18801f068c57 | Linux bridge agent | compute2 | | :-) | True | neutron-linuxbridge-agent | + | 188945d1-9e70-4803-a276-df924e0788a4 | Linux bridge agent | compute1 | | :-) | True | neutron-linuxbridge-agent | + | e76c440d-d5f6-4316-a674-d689630b629e | DHCP agent | compute1 | nova | :-) | True | neutron-dhcp-agent | + | e67367de-6657-11e6-86a4-931cd04404bb | DHCP agent | compute2 | nova | :-) | True | neutron-dhcp-agent | + | e8174cae-6657-11e6-89f0-534ac6d0cb5c | Metadata agent | compute1 | | :-) | True | neutron-metadata-agent | + | ece49ec6-6657-11e6-bafb-c7560f19197d | Metadata agent | compute2 | | :-) | True | neutron-metadata-agent | + +--------------------------------------+--------------------+----------+-------------------+-------+----------------+---------------------------+ + +Create initial networks +----------------------- + +.. include:: shared/deploy-provider-initialnetworks.txt + +Verify network operation +------------------------ + +.. include:: shared/deploy-provider-verifynetworkoperation.txt + +Network traffic flow +~~~~~~~~~~~~~~~~~~~~ + +.. include:: shared/deploy-provider-networktrafficflow.txt + +North-south scenario: Instance with a fixed IP address +------------------------------------------------------ + +* The instance resides on compute node 1 and uses provider network 1. +* The instance sends a packet to a host on the Internet. + +The following steps involve compute node 1. + +#. The instance interface (1) forwards the packet to the provider + bridge instance port (2) via ``veth`` pair. +#. Security group rules (3) on the provider bridge handle firewalling + and connection tracking for the packet. +#. 
The VLAN sub-interface port (4) on the provider bridge forwards + the packet to the physical network interface (5). +#. The physical network interface (5) adds VLAN tag 101 to the packet and + forwards it to the physical network infrastructure switch (6). + +The following steps involve the physical network infrastructure: + +#. The switch removes VLAN tag 101 from the packet and forwards it to the + router (7). +#. The router routes the packet from the provider network (8) to the + external network (9) and forwards the packet to the switch (10). +#. The switch forwards the packet to the external network (11). +#. The external network (12) receives the packet. + +.. image:: figures/deploy-lb-provider-flowns1.png + :alt: Provider networks using Linux bridge - network traffic flow - north/south + +.. note:: + + Return traffic follows similar steps in reverse. + +East-west scenario 1: Instances on the same network +--------------------------------------------------- + +Instances on the same network communicate directly between compute nodes +containing those instances. + +* Instance 1 resides on compute node 1 and uses provider network 1. +* Instance 2 resides on compute node 2 and uses provider network 1. +* Instance 1 sends a packet to instance 2. + +The following steps involve compute node 1: + +#. The instance 1 interface (1) forwards the packet to the provider + bridge instance port (2) via ``veth`` pair. +#. Security group rules (3) on the provider bridge handle firewalling + and connection tracking for the packet. +#. The VLAN sub-interface port (4) on the provider bridge forwards + the packet to the physical network interface (5). +#. The physical network interface (5) adds VLAN tag 101 to the packet and + forwards it to the physical network infrastructure switch (6). + +The following steps involve the physical network infrastructure: + +#. The switch forwards the packet from compute node 1 to compute node 2 (7). + +The following steps involve compute node 2: + +#. The physical network interface (8) removes VLAN tag 101 from the packet + and forwards it to the VLAN sub-interface port (9) on the provider bridge. +#. Security group rules (10) on the provider bridge handle firewalling + and connection tracking for the packet. +#. The provider bridge instance port (11) forwards the packet to + the instance 2 interface (12) via ``veth`` pair. + +.. image:: figures/deploy-lb-provider-flowew1.png + :alt: Provider networks using Linux bridge - network traffic flow - east/west scenario 1 + +.. note:: + + Return traffic follows similar steps in reverse. + +East-west scenario 2: Instances on different networks +----------------------------------------------------- + +Instances communicate via router on the physical network infrastructure. + +* Instance 1 resides on compute node 1 and uses provider network 1. +* Instance 2 resides on compute node 1 and uses provider network 2. +* Instance 1 sends a packet to instance 2. + +.. note:: + + Both instances reside on the same compute node to illustrate how VLAN + tagging enables multiple logical layer-2 networks to use the same + physical layer-2 network. + +The following steps involve the compute node: + +#. The instance 1 interface (1) forwards the packet to the provider + bridge instance port (2) via ``veth`` pair. +#. Security group rules (3) on the provider bridge handle firewalling + and connection tracking for the packet. +#. The VLAN sub-interface port (4) on the provider bridge forwards + the packet to the physical network interface (5). +#. 
The physical network interface (5) adds VLAN tag 101 to the packet and + forwards it to the physical network infrastructure switch (6). + +The following steps involve the physical network infrastructure: + +#. The switch removes VLAN tag 101 from the packet and forwards it to the + router (7). +#. The router routes the packet from provider network 1 (8) to provider + network 2 (9). +#. The router forwards the packet to the switch (10). +#. The switch adds VLAN tag 102 to the packet and forwards it to compute + node 1 (11). + +The following steps involve the compute node: + +#. The physical network interface (12) removes VLAN tag 102 from the packet + and forwards it to the VLAN sub-interface port (13) on the provider bridge. +#. Security group rules (14) on the provider bridge handle firewalling + and connection tracking for the packet. +#. The provider bridge instance port (15) forwards the packet to + the instance 2 interface (16) via ``veth`` pair. + +.. image:: figures/deploy-lb-provider-flowew2.png + :alt: Provider networks using Linux bridge - network traffic flow - east/west scenario 2 + +.. note:: + + Return traffic follows similar steps in reverse. diff --git a/doc/networking-guide/source/deploy-lb-selfservice.rst b/doc/networking-guide/source/deploy-lb-selfservice.rst new file mode 100644 index 0000000000..baa50e4c21 --- /dev/null +++ b/doc/networking-guide/source/deploy-lb-selfservice.rst @@ -0,0 +1,422 @@ +.. _deploy-lb-selfservice: + +=================================== +Linux bridge: Self-service networks +=================================== + +This architecture example augments :ref:`deploy-lb-provider` to support +a nearly limitless quantity of entirely virtual networks. Although the +Networking service supports VLAN self-service networks, this example +focuses on VXLAN self-service networks. For more information on +self-service networks, see :ref:`intro-os-networking-selfservice`. + +.. note:: + + The Linux bridge agent lacks support for other overlay protocols such + as GRE and Geneve. + +Prerequisites +~~~~~~~~~~~~~ + +Add one network node with the following components: + +* Three network interfaces: management, provider, and overlay. +* OpenStack Networking Linux bridge layer-2 agent, layer-3 agent, and any + dependencies. + +Modify the compute nodes with the following components: + +* Add one network interface: overlay. + +.. note:: + + You can keep the DHCP and metadata agents on each compute node or + move them to the network node. + +Architecture +~~~~~~~~~~~~ + +.. image:: figures/deploy-lb-selfservice-overview.png + :alt: Self-service networks using Linux bridge - overview + +The following figure shows components and connectivity for one self-service +network and one untagged (flat) provider network. In this particular case, the +instance resides on the same compute node as the DHCP agent for the network. +If the DHCP agent resides on another compute node, the latter only contains +a DHCP namespace and Linux bridge with a port on the overlay physical network +interface. + +.. image:: figures/deploy-lb-selfservice-compconn1.png + :alt: Self-service networks using Linux bridge - components and connectivity - one network + +Example configuration +~~~~~~~~~~~~~~~~~~~~~ + +Use the following example configuration as a template to add support for +self-service networks to an existing operational environment that supports +provider networks. + +Controller node +--------------- + +#. 
In the ``neutron.conf`` file: + + * Enable routing and allow overlapping IP address ranges. + + .. code-block:: ini + + [DEFAULT] + service_plugins = router + allow_overlapping_ips = True + +#. In the ``ml2_conf.ini`` file: + + * Add ``vxlan`` to type drivers and project network types. + + .. code-block:: ini + + [ml2] + type_drivers = flat,vlan,vxlan + tenant_network_types = vxlan + + * Enable the layer-2 population mechanism driver. + + .. code-block:: ini + + [ml2] + mechanism_drivers = linuxbridge,l2population + + * Configure the VXLAN network ID (VNI) range. + + .. code-block:: ini + + [ml2_type_vxlan] + vni_ranges = VNI_START:VNI_END + + Replace ``VNI_START`` and ``VNI_END`` with appropriate numerical + values. + +#. Restart the following services: + + * Server + +Network node +------------ + +#. Install the Networking service layer-3 agent. + +#. In the ``neutron.conf`` file, configure common options: + + .. include:: shared/deploy-config-neutron-common.txt + +#. In the ``linuxbridge_agent.ini`` file, configure the layer-2 agent. + + .. code-block:: ini + + [linux_bridge] + physical_interface_mappings = provider:PROVIDER_INTERFACE + + [vxlan] + enable_vxlan = True + l2_population = True + local_ip = OVERLAY_INTERFACE_IP_ADDRESS + + [securitygroup] + firewall_driver = iptables + + Replace ``PROVIDER_INTERFACE`` with the name of the underlying interface + that handles provider networks. For example, ``eth1``. + + Replace ``OVERLAY_INTERFACE_IP_ADDRESS`` with the IP address of the + interface that handles VXLAN overlays for self-service networks. + +#. In the ``l3_agent.ini`` file, configure the layer-3 agent. + + .. code-block:: ini + + [DEFAULT] + interface_driver = linuxbridge + external_network_bridge = + + .. note:: + + The ``external_network_bridge`` option intentionally contains + no value. + +#. Start the following services: + + * Linux bridge agent + * Layer-3 agent + +Compute nodes +------------- + +#. In the ``linuxbridge_agent.ini`` file, enable VXLAN support including + layer-2 population. + + .. code-block:: ini + + [vxlan] + enable_vxlan = True + l2_population = True + local_ip = OVERLAY_INTERFACE_IP_ADDRESS + + Replace ``OVERLAY_INTERFACE_IP_ADDRESS`` with the IP address of the + interface that handles VXLAN overlays for self-service networks. + +#. Restart the following services: + + * Linux bridge agent + +Verify service operation +------------------------ + +#. Source the administrative project credentials. +#. Verify presence and operation of the agents. + + .. 
code-block:: console + + $ neutron agent-list + +--------------------------------------+--------------------+----------+-------------------+-------+----------------+---------------------------+ + | id | agent_type | host | availability_zone | alive | admin_state_up | binary | + +--------------------------------------+--------------------+----------+-------------------+-------+----------------+---------------------------+ + | 09de6af6-c5f1-4548-8b09-18801f068c57 | Linux bridge agent | compute2 | | :-) | True | neutron-linuxbridge-agent | + | 188945d1-9e70-4803-a276-df924e0788a4 | Linux bridge agent | compute1 | | :-) | True | neutron-linuxbridge-agent | + | e76c440d-d5f6-4316-a674-d689630b629e | DHCP agent | compute1 | nova | :-) | True | neutron-dhcp-agent | + | e67367de-6657-11e6-86a4-931cd04404bb | DHCP agent | compute2 | nova | :-) | True | neutron-dhcp-agent | + | e8174cae-6657-11e6-89f0-534ac6d0cb5c | Metadata agent | compute1 | | :-) | True | neutron-metadata-agent | + | ece49ec6-6657-11e6-bafb-c7560f19197d | Metadata agent | compute2 | | :-) | True | neutron-metadata-agent | + | 598f6357-4331-4da5-a420-0f5be000bec9 | L3 agent | network1 | nova | :-) | True | neutron-l3-agent | + | f4734e0f-bcd5-4922-a19d-e31d56b0a7ae | Linux bridge agent | network1 | | :-) | True | neutron-linuxbridge-agent | + +--------------------------------------+--------------------+----------+-------------------+-------+----------------+---------------------------+ + +Create initial networks +----------------------- + +.. include:: shared/deploy-selfservice-initialnetworks.txt + +Verify network operation +------------------------ + +.. include:: shared/deploy-selfservice-verifynetworkoperation.txt + +.. _deploy-lb-selfservice-networktrafficflow: + +Network traffic flow +~~~~~~~~~~~~~~~~~~~~ + +.. include:: shared/deploy-selfservice-networktrafficflow.txt + +North-south scenario 1: Instance with a fixed IP address +-------------------------------------------------------- + +For instances with a fixed IPv4 address, the network node performs SNAT +on north-south traffic passing from self-service to external networks +such as the Internet. For instances with a fixed IPv6 address, the network +node performs conventional routing of traffic between self-service and +external networks. + +* The instance resides on compute node 1 and uses self-service network 1. +* The instance sends a packet to a host on the Internet. + +The following steps involve compute node 1: + +#. The instance interface (1) forwards the packet to the self-service + bridge instance port (2) via ``veth`` pair. +#. Security group rules (3) on the self-service bridge handle + firewalling and connection tracking for the packet. +#. The self-service bridge forwards the packet to the VXLAN interface (4) + which wraps the packet using VNI 101. +#. The underlying physical interface (5) for the VXLAN interface forwards + the packet to the network node via the overlay network (6). + +The following steps involve the network node: + +#. The underlying physical interface (7) for the VXLAN interface forwards + the packet to the VXLAN interface (8) which unwraps the packet. +#. The self-service bridge router port (9) forwards the packet to the + self-service network interface (10) in the router namespace. 
+
+   * For IPv4, the router performs SNAT on the packet which changes the
+     source IP address to the router IP address on the provider network
+     and sends it to the gateway IP address on the provider network via
+     the gateway interface on the provider network (11).
+   * For IPv6, the router sends the packet to the next-hop IP address,
+     typically the gateway IP address on the provider network, via the
+     provider gateway interface (11).
+
+#. The router forwards the packet to the provider bridge router
+   port (12).
+#. The VLAN sub-interface port (13) on the provider bridge forwards
+   the packet to the provider physical network interface (14).
+#. The provider physical network interface (14) adds VLAN tag 101 to the packet
+   and forwards it to the Internet via physical network infrastructure (15).
+
+.. note::
+
+   Return traffic follows similar steps in reverse. However, without a
+   floating IPv4 address, hosts on the provider or external networks cannot
+   originate connections to instances on the self-service network.
+
+.. image:: figures/deploy-lb-selfservice-flowns1.png
+   :alt: Self-service networks using Linux bridge - network traffic flow - north/south scenario 1
+
+North-south scenario 2: Instance with a floating IPv4 address
+--------------------------------------------------------------
+
+For instances with a floating IPv4 address, the network node performs SNAT
+on north-south traffic passing from the instance to external networks
+such as the Internet and DNAT on north-south traffic passing from external
+networks to the instance. Floating IP addresses and NAT do not apply to IPv6.
+Thus, the network node routes IPv6 traffic in this scenario.
+
+* The instance resides on compute node 1 and uses self-service network 1.
+* A host on the Internet sends a packet to the instance.
+
+The following steps involve the network node:
+
+#. The physical network infrastructure (1) forwards the packet to the
+   provider physical network interface (2).
+#. The provider physical network interface removes VLAN tag 101 and forwards
+   the packet to the VLAN sub-interface on the provider bridge.
+#. The provider bridge forwards the packet to the self-service
+   router gateway port on the provider network (5).
+
+   * For IPv4, the router performs DNAT on the packet which changes the
+     destination IP address to the instance IP address on the self-service
+     network and sends it to the gateway IP address on the self-service
+     network via the self-service interface (6).
+   * For IPv6, the router sends the packet to the next-hop IP address,
+     typically the gateway IP address on the self-service network, via
+     the self-service interface (6).
+
+#. The router forwards the packet to the self-service bridge router
+   port (7).
+#. The self-service bridge forwards the packet to the VXLAN interface (8)
+   which wraps the packet using VNI 101.
+#. The underlying physical interface (9) for the VXLAN interface forwards
+   the packet to the compute node via the overlay network (10).
+
+The following steps involve the compute node:
+
+#. The underlying physical interface (11) for the VXLAN interface forwards
+   the packet to the VXLAN interface (12) which unwraps the packet.
+#. Security group rules (13) on the self-service bridge handle firewalling
+   and connection tracking for the packet.
+#. The self-service bridge instance port (14) forwards the packet to
+   the instance interface (15) via ``veth`` pair.
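+
+If you want to spot check this NAT behavior on a running deployment, one
+hypothetical verification (not part of the original example) is to list the
+NAT rules inside the router namespace on the network node. Replace
+``ROUTER_ID`` with the UUID of your router and ``FLOATING_IP`` with the
+floating IPv4 address; you should see DNAT and SNAT rules mapping the
+floating IP to the instance fixed IP.
+
+.. code-block:: console
+
+   # ip netns exec qrouter-ROUTER_ID iptables -t nat -S | grep FLOATING_IP
+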
+.. note::
+
+   Egress instance traffic flows similar to north-south scenario 1, except SNAT
+   changes the source IP address of the packet to the floating IPv4 address
+   rather than the router IP address on the provider network.
+
+.. image:: figures/deploy-lb-selfservice-flowns2.png
+   :alt: Self-service networks using Linux bridge - network traffic flow - north/south scenario 2
+
+East-west scenario 1: Instances on the same network
+---------------------------------------------------
+
+Instances with a fixed IPv4/IPv6 or floating IPv4 address on the same network
+communicate directly between compute nodes containing those instances.
+
+By default, the VXLAN protocol lacks knowledge of target location
+and uses multicast to discover it. After discovery, it stores the
+location in the local forwarding database. In large deployments,
+the discovery process can generate a significant amount of network
+traffic that all nodes must process. To eliminate the latter and generally
+increase efficiency, the Networking service includes the layer-2
+population mechanism driver that automatically populates the
+forwarding database for VXLAN interfaces. The example configuration
+enables this driver. For more information, see :ref:`config-plugin-ml2`.
+
+* Instance 1 resides on compute node 1 and uses self-service network 1.
+* Instance 2 resides on compute node 2 and uses self-service network 1.
+* Instance 1 sends a packet to instance 2.
+
+The following steps involve compute node 1:
+
+#. The instance 1 interface (1) forwards the packet to the
+   self-service bridge instance port (2) via ``veth`` pair.
+#. Security group rules (3) on the self-service bridge handle firewalling
+   and connection tracking for the packet.
+#. The self-service bridge forwards the packet to the VXLAN interface (4)
+   which wraps the packet using VNI 101.
+#. The underlying physical interface (5) for the VXLAN interface forwards
+   the packet to compute node 2 via the overlay network (6).
+
+The following steps involve compute node 2:
+
+#. The underlying physical interface (7) for the VXLAN interface forwards
+   the packet to the VXLAN interface (8) which unwraps the packet.
+#. Security group rules (9) on the self-service bridge handle firewalling
+   and connection tracking for the packet.
+#. The self-service bridge instance port (10) forwards the packet to
+   the instance 2 interface (11) via ``veth`` pair.
+
+.. note::
+
+   Return traffic follows similar steps in reverse.
+
+.. image:: figures/deploy-lb-selfservice-flowew1.png
+   :alt: Self-service networks using Linux bridge - network traffic flow - east/west scenario 1
+
+East-west scenario 2: Instances on different networks
+------------------------------------------------------
+
+Instances using a fixed IPv4/IPv6 address or floating IPv4 address communicate
+via router on the network node. The self-service networks must reside on the
+same router.
+
+* Instance 1 resides on compute node 1 and uses self-service network 1.
+* Instance 2 resides on compute node 1 and uses self-service network 2.
+* Instance 1 sends a packet to instance 2.
+
+.. note::
+
+   Both instances reside on the same compute node to illustrate how VXLAN
+   enables multiple overlays to use the same layer-3 network.
+
+The following steps involve the compute node:
+
+#. The instance 1 interface (1) forwards the packet to the self-service
+   bridge instance port (2) via ``veth`` pair.
+#. Security group rules (3) on the self-service bridge handle
+   firewalling and connection tracking for the packet.
+#.
The self-service bridge forwards the packet to the VXLAN interface (4) + which wraps the packet using VNI 101. +#. The underlying physical interface (5) for the VXLAN interface forwards + the packet to the network node via the overlay network (6). + +The following steps involve the network node: + +#. The underlying physical interface (7) for the VXLAN interface forwards + the packet to the VXLAN interface (8) which unwraps the packet. +#. The self-service bridge router port (9) forwards the packet to the + self-service network 1 interface (10) in the router namespace. +#. The router sends the packet to the next-hop IP address, typically the + gateway IP address on self-service network 2, via the self-service + network 2 interface (11). +#. The router forwards the packet to the self-service network 2 bridge router + port (12). +#. The self-service network 2 bridge forwards the packet to the VXLAN + interface (13) which wraps the packet using VNI 102. +#. The physical network interface (14) for the VXLAN interface sends the + packet to the compute node via the overlay network (15). + +The following steps involve the compute node: + +#. The underlying physical interface (16) for the VXLAN interface sends + the packet to the VXLAN interface (17) which unwraps the packet. +#. Security group rules (18) on the self-service bridge handle firewalling + and connection tracking for the packet. +#. The self-service bridge instance port (19) forwards the packet to + the instance 2 interface (20) via ``veth`` pair. + +.. note:: + + Return traffic follows similar steps in reverse. + +.. image:: figures/deploy-lb-selfservice-flowew2.png + :alt: Self-service networks using Linux bridge - network traffic flow - east/west scenario 2 diff --git a/doc/networking-guide/source/deploy-lb.rst b/doc/networking-guide/source/deploy-lb.rst new file mode 100644 index 0000000000..6cb646f3d1 --- /dev/null +++ b/doc/networking-guide/source/deploy-lb.rst @@ -0,0 +1,17 @@ +.. _deploy-lb: + +============================= +Linux bridge mechanism driver +============================= + +The Linux bridge mechanism driver uses only Linux bridges and ``veth`` pairs +as interconnection devices. A layer-2 agent manages Linux bridges on each +compute node and any other node that provides layer-3 (routing), DHCP, +metadata, or other network services. + +.. toctree:: + :maxdepth: 2 + + deploy-lb-provider + deploy-lb-selfservice + deploy-lb-ha-vrrp diff --git a/doc/networking-guide/source/deploy-ovs-ha-dvr.rst b/doc/networking-guide/source/deploy-ovs-ha-dvr.rst new file mode 100644 index 0000000000..fe4087cbce --- /dev/null +++ b/doc/networking-guide/source/deploy-ovs-ha-dvr.rst @@ -0,0 +1,535 @@ +.. _deploy-ovs-ha-dvr: + +========================================= +Open vSwitch: High availability using DVR +========================================= + +This architecture example augments the self-service deployment example +with the Distributed Virtual Router (DVR) high-availability mechanism that +provides connectivity between self-service and provider networks on compute +nodes rather than network nodes for specific scenarios. For instances with a +floating IPv4 address, routing between self-service and provider networks +resides completely on the compute nodes to eliminate single point of +failure and performance issues with network nodes. Routing also resides +completely on the compute nodes for instances with a fixed or floating IPv4 +address using self-service networks on the same distributed virtual router. 
+However, instances with a fixed IP address still rely on the network node for
+routing and SNAT services between self-service and provider networks.
+
+Consider the following attributes of this high-availability mechanism to
+determine practicality in your environment:
+
+* Only provides connectivity to an instance via the compute node on which
+  the instance resides if the instance resides on a self-service network
+  with a floating IPv4 address. Instances on self-service networks with
+  only an IPv6 address or both IPv4 and IPv6 addresses rely on the network
+  node for IPv6 connectivity.
+
+* The instance of a router on each compute node consumes an IPv4 address
+  on the provider network on which it contains a gateway.
+
+Prerequisites
+~~~~~~~~~~~~~
+
+Modify the compute nodes with the following components:
+
+* Install the OpenStack Networking layer-3 agent.
+
+.. note::
+
+   Consider adding at least one additional network node to provide
+   high-availability for instances with a fixed IP address. See
+   :ref:`config-dvr-snat-ha-ovs` for more information.
+
+Architecture
+~~~~~~~~~~~~
+
+.. image:: figures/deploy-ovs-ha-dvr-overview.png
+   :alt: High-availability using Open vSwitch with DVR - overview
+
+The following figure shows components and connectivity for one self-service
+network and one untagged (flat) network. In this particular case, the
+instance resides on the same compute node as the DHCP agent for the network.
+If the DHCP agent resides on another compute node, the latter only contains
+a DHCP namespace with a port on the OVS integration bridge.
+
+.. image:: figures/deploy-ovs-ha-dvr-compconn1.png
+   :alt: High-availability using Open vSwitch with DVR - components and connectivity - one network
+
+Example configuration
+~~~~~~~~~~~~~~~~~~~~~
+
+Use the following example configuration as a template to add support for
+high-availability using DVR to an existing operational environment that
+supports self-service networks.
+
+Controller node
+---------------
+
+#. In the ``neutron.conf`` file:
+
+   * Enable distributed routing by default for all routers.
+
+     .. code-block:: ini
+
+        [DEFAULT]
+        router_distributed = true
+
+#. Restart the following services:
+
+   * Server
+
+Network node
+------------
+
+#. In the ``openvswitch_agent.ini`` file, enable distributed routing.
+
+   .. code-block:: ini
+
+      [DEFAULT]
+      enable_distributed_routing = true
+
+#. In the ``l3_agent.ini`` file, configure the layer-3 agent to provide
+   SNAT services.
+
+   .. code-block:: ini
+
+      [DEFAULT]
+      agent_mode = dvr_snat
+
+   .. note::
+
+      The ``external_network_bridge`` option intentionally contains
+      no value.
+
+#. Restart the following services:
+
+   * Open vSwitch agent
+   * Layer-3 agent
+
+Compute nodes
+-------------
+
+#. Install the Networking service layer-3 agent.
+
+#. In the ``openvswitch_agent.ini`` file, enable distributed routing.
+
+   .. code-block:: ini
+
+      [DEFAULT]
+      enable_distributed_routing = true
+
+#. Restart the following services:
+
+   * Open vSwitch agent
+
+#. In the ``l3_agent.ini`` file, configure the layer-3 agent.
+
+   .. code-block:: ini
+
+      [DEFAULT]
+      interface_driver = openvswitch
+      external_network_bridge =
+      agent_mode = dvr
+
+   .. note::
+
+      The ``external_network_bridge`` option intentionally contains
+      no value.
+
+Verify service operation
+------------------------
+
+#. Source the administrative project credentials.
+#. Verify presence and operation of the agents.
+
+   ..
code-block:: console + + +--------------------------------------+--------------------+-------------+----------------+-------+----------------+---------------------------+ + | id | agent_type | host | availability_zone | alive | admin_state_up | binary | + +--------------------------------------+--------------------+----------+-------------------+-------+----------------+---------------------------+ + | 05d980f2-a4fc-4815-91e7-a7f7e118c0db | L3 agent | compute1 | nova | :-) | True | neutron-l3-agent | + | 1236bbcb-e0ba-48a9-80fc-81202ca4fa51 | Metadata agent | compute2 | | :-) | True | neutron-metadata-agent | + | 2a2e9a90-51b8-4163-a7d6-3e199ba2374b | L3 agent | compute2 | nova | :-) | True | neutron-l3-agent | + | 457d6898-b373-4bb3-b41f-59345dcfb5c5 | Open vSwitch agent | compute2 | | :-) | True | neutron-openvswitch-agent | + | 513caa68-0391-4e53-a530-082e2c23e819 | Linux bridge agent | compute1 | | :-) | True | neutron-linuxbridge-agent | + | 71f15e84-bc47-4c2a-b9fb-317840b2d753 | DHCP agent | compute2 | nova | :-) | True | neutron-dhcp-agent | + | 8805b962-de95-4e40-bdc2-7a0add7521e8 | L3 agent | network1 | nova | :-) | True | neutron-l3-agent | + | a33cac5a-0266-48f6-9cac-4cef4f8b0358 | Open vSwitch agent | network1 | | :-) | True | neutron-openvswitch-agent | + | a6c69690-e7f7-4e56-9831-1282753e5007 | Metadata agent | compute1 | | :-) | True | neutron-metadata-agent | + | af11f22f-a9f4-404f-9fd8-cd7ad55c0f68 | DHCP agent | compute1 | nova | :-) | True | neutron-dhcp-agent | + | bcfc977b-ec0e-4ba9-be62-9489b4b0e6f1 | Open vSwitch agent | compute1 | | :-) | True | neutron-openvswitch-agent | + +--------------------------------------+--------------------+----------+-------------------+-------+----------------+---------------------------+ + +Create initial networks +----------------------- + +Similar to the self-service deployment example, this configuration supports +multiple VXLAN self-service networks. After enabling high-availability, all +additional routers use distributed routing. The following procedure creates +an additional self-service network and router. The Networking service also +supports adding distributed routing to existing routers. + +#. Source a regular (non-administrative) project credentials. +#. Create a self-service network. + + .. code-block:: console + + $ neutron net-create selfservice2 + Created a new network: + +-------------------------+--------------------------------------+ + | Field | Value | + +-------------------------+--------------------------------------+ + | admin_state_up | True | + | availability_zone_hints | | + | availability_zones | | + | description | | + | id | 7ebc353c-6c8f-461f-8ada-01b9f14beb18 | + | ipv4_address_scope | | + | ipv6_address_scope | | + | mtu | 1450 | + | name | selfservice2 | + | port_security_enabled | True | + | router:external | False | + | shared | False | + | status | ACTIVE | + | subnets | | + | tags | | + | tenant_id | f986edf55ae945e2bef3cb4bfd589928 | + +-------------------------+--------------------------------------+ + +#. Create a IPv4 subnet on the self-service network. + + .. 
code-block:: console + + $ neutron subnet-create --name selfservice2-v4 --ip-version 4 \ + --dns-nameserver 8.8.4.4 selfservice2 192.168.2.0/24 + Created a new subnet: + +-------------------+--------------------------------------------------+ + | Field | Value | + +-------------------+--------------------------------------------------+ + | allocation_pools | {"start": "192.168.2.2", "end": "192.168.2.254"} | + | cidr | 192.168.2.0/24 | + | description | | + | dns_nameservers | 8.8.4.4 | + | enable_dhcp | True | + | gateway_ip | 192.168.2.1 | + | host_routes | | + | id | 12a41804-18bf-4cec-bde8-174cbdbf1573 | + | ip_version | 4 | + | ipv6_address_mode | | + | ipv6_ra_mode | | + | name | selfservice2-v4 | + | network_id | 7ebc353c-6c8f-461f-8ada-01b9f14beb18 | + | subnetpool_id | | + | tenant_id | f986edf55ae945e2bef3cb4bfd589928 | + +-------------------+--------------------------------------------------+ + +#. Create a IPv6 subnet on the self-service network. + + .. code-block:: console + + $ neutron subnet-create --name selfservice2-v6 --ip-version 6 \ + --ipv6-address-mode slaac --ipv6-ra-mode slaac \ + --dns-nameserver 2001:4860:4860::8844 selfservice2 \ + fd00:192:168:2::/64 + Created a new subnet: + +-------------------+-----------------------------------------------------------------------------+ + | Field | Value | + +-------------------+-----------------------------------------------------------------------------+ + | allocation_pools | {"start": "fd00:192:168:2::2", "end": "fd00:192:168:2:ffff:ffff:ffff:ffff"} | + | cidr | fd00:192:168:2::/64 | + | description | | + | dns_nameservers | 2001:4860:4860::8844 | + | enable_dhcp | True | + | gateway_ip | fd00:192:168:2::1 | + | host_routes | | + | id | b0f122fe-0bf9-4f31-975d-a47e58aa88e3 | + | ip_version | 6 | + | ipv6_address_mode | slaac | + | ipv6_ra_mode | slaac | + | name | selfservice2-v6 | + | network_id | 7ebc353c-6c8f-461f-8ada-01b9f14beb18 | + | subnetpool_id | | + | tenant_id | f986edf55ae945e2bef3cb4bfd589928 | + +-------------------+-----------------------------------------------------------------------------+ + +#. Create a router. + + .. code-block:: console + + $ neutron router-create router2 + Created a new router: + +-------------------------+--------------------------------------+ + | Field | Value | + +-------------------------+--------------------------------------+ + | admin_state_up | True | + | availability_zone_hints | | + | availability_zones | | + | description | | + | external_gateway_info | | + | id | b6206312-878e-497c-8ef7-eb384f8add96 | + | name | router2 | + | routes | | + | status | ACTIVE | + | tenant_id | f986edf55ae945e2bef3cb4bfd589928 | + +-------------------------+--------------------------------------+ + +#. Add the IPv4 and IPv6 subnets as interfaces on the router. + + .. code-block:: console + + $ neutron router-interface-add router2 selfservice2-v4 + Added interface da3504ad-ba70-4b11-8562-2e6938690878 to router router2. + + $ neutron router-interface-add router2 selfservice2-v6 + Added interface 442e36eb-fce3-4cb5-b179-4be6ace595f0 to router router2. + +#. Add the provider network as a gateway on the router. + + .. code-block:: console + + $ neutron router-gateway-set router2 provider1 + Set gateway for router router2 + +Verify network operation +------------------------ + +#. Source the administrative project credentials. +#. Verify distributed routing on the router. + + .. 
code-block:: console + + $ neutron router-show router2 + +-------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Field | Value | + +-------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | distributed | True | + +-------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + +#. On each compute node, verify creation of a ``qrouter`` namespace with + the same ID. + + Compute node 1: + + .. code-block:: console + + # ip netns + qrouter-78d2f628-137c-4f26-a257-25fc20f203c1 + + Compute node 2: + + .. code-block:: console + + # ip netns + qrouter-78d2f628-137c-4f26-a257-25fc20f203c1 + +#. On the network node, verify creation of the ``snat`` and ``qrouter`` + namespaces with the same ID. + + .. code-block:: console + + # ip netns + snat-78d2f628-137c-4f26-a257-25fc20f203c1 + qrouter-78d2f628-137c-4f26-a257-25fc20f203c1 + + .. note:: + + The namespace for router 1 from :ref:`deploy-ovs-selfservice` should + also appear on network node 1 because of creation prior to enabling + distributed routing. + +#. Launch an instance with an interface on the addtional self-service network. + For example, a CirrOS image using flavor ID 1. + + .. code-block:: console + + $ openstack server create --flavor 1 --image cirros --nic net-id=NETWORK_ID selfservice-instance2 + + Replace ``NETWORK_ID`` with the ID of the additional self-service + network. + +#. Determine the IPv4 and IPv6 addresses of the instance. + + .. code-block:: console + + $ openstack server list + +--------------------------------------+-----------------------+--------+---------------------------------------------------------------------------+ + | ID | Name | Status | Networks | + +--------------------------------------+-----------------------+--------+---------------------------------------------------------------------------+ + | bde64b00-77ae-41b9-b19a-cd8e378d9f8b | selfservice-instance2 | ACTIVE | selfservice2=fd00:192:168:2:f816:3eff:fe71:e93e, 192.168.2.4 | + +--------------------------------------+-----------------------+--------+---------------------------------------------------------------------------+ + +#. Create a floating IPv4 address on the provider network. + + .. code-block:: console + + $ openstack ip floating create provider1 + +-------------+--------------------------------------+ + | Field | Value | + +-------------+--------------------------------------+ + | fixed_ip | None | + | id | 0174056a-fa56-4403-b1ea-b5151a31191f | + | instance_id | None | + | ip | 203.0.113.17 | + | pool | provider1 | + +-------------+--------------------------------------+ + +#. Associate the floating IPv4 address with the instance. + + .. code-block:: console + + $ openstack ip floating add 203.0.113.17 selfservice-instance2 + + .. note:: + + This command provides no output. + +#. 
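Optionally, confirm where the SNAT service for the router is scheduled by
+   listing the layer-3 agents hosting it. This check is an addition to the
+   original procedure; the exact output columns vary by release, so only the
+   command is shown.
+
+   .. code-block:: console
+
+      $ neutron l3-agent-list-hosting-router router2
+
+#. 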
On the compute node containing the instance, verify creation of the
+   ``fip`` namespace with the same ID as the provider network.
+
+   .. code-block:: console
+
+      # ip netns
+      fip-4bfa3075-b4b2-4f7d-b88e-df1113942d43
+
+Network traffic flow
+~~~~~~~~~~~~~~~~~~~~
+
+.. include:: shared/deploy-selfservice-networktrafficflow.txt
+
+This section only contains flow scenarios that benefit from distributed
+virtual routing or that differ from conventional operation. For other
+flow scenarios, see :ref:`deploy-ovs-selfservice-networktrafficflow`.
+
+North-south scenario 1: Instance with a fixed IP address
+--------------------------------------------------------
+
+Similar to :ref:`deploy-ovs-selfservice-networktrafficflow-ns1`, except
+the router namespace on the network node becomes the SNAT namespace. The
+network node still contains the router namespace, but it serves no purpose
+in this case.
+
+.. image:: figures/deploy-ovs-ha-dvr-flowns1.png
+   :alt: High-availability using Open vSwitch with DVR - network traffic flow - north/south scenario 1
+
+North-south scenario 2: Instance with a floating IPv4 address
+-------------------------------------------------------------
+
+For instances with a floating IPv4 address using a self-service network
+on a distributed router, the compute node containing the instance performs
+SNAT on north-south traffic passing from the instance to external networks
+such as the Internet and DNAT on north-south traffic passing from external
+networks to the instance. Floating IP addresses and NAT do not apply to
+IPv6. Thus, the network node routes IPv6 traffic in this scenario.
+
+* Instance 1 resides on compute node 1 and uses self-service network 1.
+* A host on the Internet sends a packet to the instance.
+
+The following steps involve the compute node:
+
+#. The physical network infrastructure (1) forwards the packet to the
+   provider physical network interface (2).
+#. The provider physical network interface forwards the packet to the
+   OVS provider bridge provider network port (3).
+#. The OVS provider bridge swaps actual VLAN tag 101 with the internal
+   VLAN tag.
+#. The OVS provider bridge ``phy-br-provider`` port (4) forwards the
+   packet to the OVS integration bridge ``int-br-provider`` port (5).
+#. The OVS integration bridge port for the provider network (6) removes
+   the internal VLAN tag and forwards the packet to the provider network
+   interface (7) in the floating IP namespace. This interface responds
+   to any ARP requests for the instance floating IPv4 address.
+#. The floating IP namespace routes the packet (8) to the distributed
+   router namespace (9) using a pair of IP addresses on the DVR internal
+   network. This namespace contains the instance floating IPv4 address.
+#. The router performs DNAT on the packet which changes the destination
+   IP address to the instance IP address on the self-service network via
+   the self-service network interface (10).
+#. The router forwards the packet to the OVS integration bridge port for
+   the self-service network (11).
+#. The OVS integration bridge adds an internal VLAN tag to the packet.
+#. The OVS integration bridge removes the internal VLAN tag from the packet.
+#. The OVS integration bridge security group port (12) forwards the packet
+   to the security group bridge OVS port (13) via ``veth`` pair.
+#. 
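(Optional observation, not part of the original flow description) At this
+   hop you can list the iptables rules that implement the security group on
+   the compute node. The matching pattern below is illustrative; the actual
+   chain names are derived from the port ID and differ in every deployment.
+
+   .. code-block:: console
+
+      # iptables -S | grep neutron
+
+#. 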
Security group rules (14) on the security group bridge handle firewalling + and connection tracking for the packet. +#. The security group bridge instance port (15) forwards the packet to the + instance interface (16) via ``veth`` pair. + +.. image:: figures/deploy-ovs-ha-dvr-flowns2.png + :alt: High-availability using Open vSwitch with DVR - network traffic flow - north/south scenario 2 + +.. note:: + + Egress traffic follows similar steps in reverse, except SNAT changes + the source IPv4 address of the packet to the floating IPv4 address. + +East-west scenario 1: Instances on different networks on the same router +------------------------------------------------------------------------ + +Instances with fixed IPv4/IPv6 address or floating IPv4 address on the +same compute node communicate via router on the compute node. Instances +on different compute nodes communicate via an instance of the router on +each compute node. + +.. note:: + + This scenario places the instances on different compute nodes to + show the most complex situation. + +The following steps involve compute node 1: + +#. The instance interface (1) forwards the packet to the security group + bridge instance port (2) via ``veth`` pair. +#. Security group rules (3) on the security group bridge handle firewalling + and connection tracking for the packet. +#. The security group bridge OVS port (4) forwards the packet to the OVS + integration bridge security group port (5) via ``veth`` pair. +#. The OVS integration bridge adds an internal VLAN tag to the packet. +#. The OVS integration bridge port for self-service network 1 (6) removes the + internal VLAN tag and forwards the packet to the self-service network 1 + interface in the distributed router namespace (6). +#. The distributed router namespace routes the packet to self-service network + 2. +#. The self-service network 2 interface in the distributed router namespace + (8) forwards the packet to the OVS integration bridge port for + self-service network 2 (9). +#. The OVS integration bridge adds an internal VLAN tag to the packet. +#. The OVS integration bridge exchanges the internal VLAN tag for an + internal tunnel ID. +#. The OVS integration bridge ``patch-tun`` port (10) forwards the packet + to the OVS tunnel bridge ``patch-int`` port (11). +#. The OVS tunnel bridge (12) wraps the packet using VNI 101. +#. The underlying physical interface (13) for overlay networks forwards + the packet to compute node 2 via the overlay network (14). + +The following steps involve compute node 2: + +#. The underlying physical interface (15) for overlay networks forwards + the packet to the OVS tunnel bridge (16). +#. The OVS tunnel bridge unwraps the packet and adds an internal tunnel ID + to it. +#. The OVS tunnel bridge exchanges the internal tunnel ID for an internal + VLAN tag. +#. The OVS tunnel bridge ``patch-int`` patch port (17) forwards the packet + to the OVS integration bridge ``patch-tun`` patch port (18). +#. The OVS integration bridge removes the internal VLAN tag from the packet. +#. The OVS integration bridge security group port (19) forwards the packet + to the security group bridge OVS port (20) via ``veth`` pair. +#. Security group rules (21) on the security group bridge handle firewalling + and connection tracking for the packet. +#. The security group bridge instance port (22) forwards the packet to the + instance 2 interface (23) via ``veth`` pair. + +.. 
note:: + + Routing between self-service networks occurs on the compute node containing + the instance sending the packet. In this scenario, routing occurs on + compute node 1 for packets from instance 1 to instance 2 and on compute + node 2 for packets from instance 2 to instance 1. + +.. image:: figures/deploy-ovs-ha-dvr-flowew1.png + :alt: High-availability using Open vSwitch with DVR - network traffic flow - east/west scenario 2 diff --git a/doc/networking-guide/source/deploy-ovs-ha-vrrp.rst b/doc/networking-guide/source/deploy-ovs-ha-vrrp.rst new file mode 100644 index 0000000000..e66cf9f5ee --- /dev/null +++ b/doc/networking-guide/source/deploy-ovs-ha-vrrp.rst @@ -0,0 +1,174 @@ +.. _deploy-ovs-ha-vrrp: + +========================================== +Open vSwitch: High availability using VRRP +========================================== + +.. include:: shared/deploy-ha-vrrp.txt + +Prerequisites +~~~~~~~~~~~~~ + +Add one network node with the following components: + +* Three network interfaces: management, provider, and overlay. +* OpenStack Networking layer-2 agent, layer-3 agent, and any + dependencies. + +.. note:: + + You can keep the DHCP and metadata agents on each compute node or + move them to the network nodes. + +Architecture +~~~~~~~~~~~~ + +.. image:: figures/deploy-ovs-ha-vrrp-overview.png + :alt: High-availability using VRRP with Linux bridge - overview + +The following figure shows components and connectivity for one self-service +network and one untagged (flat) network. The master router resides on network +node 1. In this particular case, the instance resides on the same compute +node as the DHCP agent for the network. If the DHCP agent resides on another +compute node, the latter only contains a DHCP namespace and Linux bridge +with a port on the overlay physical network interface. + +.. image:: figures/deploy-ovs-ha-vrrp-compconn1.png + :alt: High-availability using VRRP with Linux bridge - components and connectivity - one network + +Example configuration +~~~~~~~~~~~~~~~~~~~~~ + +Use the following example configuration as a template to add support for +high-availability using VRRP to an existing operational environment that +supports self-service networks. + +Controller node +--------------- + +#. In the ``neutron.conf`` file: + + * Enable VRRP. + + .. code-block:: ini + + [DEFAULT] + l3_ha = True + +#. Restart the following services: + + * Server + +Network node 1 +-------------- + +No changes. + +Network node 2 +-------------- + +#. Install the Networking service OVS layer-2 agent and layer-3 agent. + +#. Install OVS. + +#. In the ``neutron.conf`` file, configure common options: + + .. include:: shared/deploy-config-neutron-common.txt + +#. Start the following services: + + * OVS + +#. Create the OVS provider bridge ``br-provider``: + + .. code-block:: console + + $ ovs-vsctl add-br br-provider + +#. In the ``openvswitch_agent.ini`` file, configure the layer-2 agent. + + .. code-block:: ini + + [ovs] + bridge_mappings = provider:br-provider + local_ip = OVERLAY_INTERFACE_IP_ADDRESS + + [agent] + tunnel_types = vxlan + l2_population = true + + [securitygroup] + firewall_driver = iptables_hybrid + + Replace ``OVERLAY_INTERFACE_IP_ADDRESS`` with the IP address of the + interface that handles VXLAN overlays for self-service networks. + +#. In the ``l3_agent.ini`` file, configure the layer-3 agent. + + .. code-block:: ini + + [DEFAULT] + interface_driver = openvswitch + external_network_bridge = + + .. 
note:: + + The ``external_network_bridge`` option intentionally contains + no value. + +#. Start the following services: + + * Open vSwitch agent + * Layer-3 agent + +Compute nodes +------------- + +No changes. + +Verify service operation +------------------------ + +#. Source the administrative project credentials. +#. Verify presence and operation of the agents. + + .. code-block:: console + + $ neutron agent-list + +--------------------------------------+--------------------+----------+-------------------+-------+----------------+---------------------------+ + | id | agent_type | host | availability_zone | alive | admin_state_up | binary | + +--------------------------------------+--------------------+----------+-------------------+-------+----------------+---------------------------+ + | 1236bbcb-e0ba-48a9-80fc-81202ca4fa51 | Metadata agent | compute2 | | :-) | True | neutron-metadata-agent | + | 457d6898-b373-4bb3-b41f-59345dcfb5c5 | Open vSwitch agent | compute2 | | :-) | True | neutron-openvswitch-agent | + | 71f15e84-bc47-4c2a-b9fb-317840b2d753 | DHCP agent | compute2 | nova | :-) | True | neutron-dhcp-agent | + | 8805b962-de95-4e40-bdc2-7a0add7521e8 | L3 agent | network1 | nova | :-) | True | neutron-l3-agent | + | a33cac5a-0266-48f6-9cac-4cef4f8b0358 | Open vSwitch agent | network1 | | :-) | True | neutron-openvswitch-agent | + | a6c69690-e7f7-4e56-9831-1282753e5007 | Metadata agent | compute1 | | :-) | True | neutron-metadata-agent | + | af11f22f-a9f4-404f-9fd8-cd7ad55c0f68 | DHCP agent | compute1 | nova | :-) | True | neutron-dhcp-agent | + | bcfc977b-ec0e-4ba9-be62-9489b4b0e6f1 | Open vSwitch agent | compute1 | | :-) | True | neutron-openvswitch-agent | + | 7f00d759-f2c9-494a-9fbf-fd9118104d03 | Open vSwitch agent | network2 | | :-) | True | neutron-openvswitch-agent | + | b28d8818-9e32-4888-930b-29addbdd2ef9 | L3 agent | network2 | nova | :-) | True | neutron-l3-agent | + +--------------------------------------+--------------------+----------+-------------------+-------+----------------+---------------------------+ + +Create initial networks +----------------------- + +.. include:: shared/deploy-ha-vrrp-initialnetworks.txt + +Verify network operation +------------------------ + +.. include:: shared/deploy-ha-vrrp-verifynetworkoperation.txt + +Verify failover operation +------------------------- + +.. include:: shared/deploy-ha-vrrp-verifyfailoveroperation.txt + +Network traffic flow +~~~~~~~~~~~~~~~~~~~~ + +This high-availability mechanism simply augments :ref:`deploy-ovs-selfservice` +with failover of layer-3 services to another router if the master router +fails. Thus, you can reference :ref:`Self-service network traffic flow +` for normal operation. diff --git a/doc/networking-guide/source/deploy-ovs-provider.rst b/doc/networking-guide/source/deploy-ovs-provider.rst new file mode 100644 index 0000000000..17740c072d --- /dev/null +++ b/doc/networking-guide/source/deploy-ovs-provider.rst @@ -0,0 +1,425 @@ +.. _deploy-ovs-provider: + +=============================== +Open vSwitch: Provider networks +=============================== + +This architecture example provides layer-2 connectivity between instances +and the physical network infrastructure using VLAN (802.1q) tagging. It +supports one untagged (flat) network and up to 4095 tagged (VLAN) networks. +The actual quantity of VLAN networks depends on the physical network +infrastructure. For more information on provider networks, see +:ref:`intro-os-networking-provider`. + +.. 
warning:: + + Linux distributions often package older releases of Open vSwitch that can + introduce issues during operation with the Networking service. We recommend + using at least the latest long-term stable (LTS) release of Open vSwitch + for the best experience and support from Open vSwitch. See + ``__ for available releases and the + `installation instructions + `__ for + +Prerequisites +~~~~~~~~~~~~~ + +One controller node with the following components: + +* Two network interfaces: management and provider. +* OpenStack Networking server service and ML2 plug-in. + +Two compute nodes with the following components: + +* Two network interfaces: management and provider. +* OpenStack Networking Open vSwitch (OVS) layer-2 agent, DHCP agent, metadata + agent, and any dependencies including OVS. + +.. note:: + + Larger deployments typically deploy the DHCP and metadata agents on a + subset of compute nodes to increase performance and redundancy. However, + too many agents can overwhelm the message bus. Also, to further simplify + any deployment, you can omit the metadata agent and use a configuration + drive to provide metadata to instances. + +Architecture +~~~~~~~~~~~~ + +.. image:: figures/deploy-ovs-provider-overview.png + :alt: Provider networks using OVS - overview + +The following figure shows components and connectivity for one untagged +(flat) network. In this particular case, the instance resides on the +same compute node as the DHCP agent for the network. If the DHCP agent +resides on another compute node, the latter only contains a DHCP namespace +with a port on the OVS integration bridge. + +.. image:: figures/deploy-ovs-provider-compconn1.png + :alt: Provider networks using OVS - components and connectivity - one network + +The following figure describes virtual connectivity among components for +two tagged (VLAN) networks. Essentially, all networks use a single OVS +integration bridge with different internal VLAN tags. The internal VLAN +tags almost always differ from the network VLAN assignment in the Networking +service. Similar to the untagged network case, the DHCP agent may reside on +a different compute node. + +.. image:: figures/deploy-ovs-provider-compconn2.png + :alt: Provider networks using OVS - components and connectivity - multiple networks + +.. note:: + + These figures omit the controller node because it does not handle instance + network traffic. + +Example configuration +~~~~~~~~~~~~~~~~~~~~~ + +Use the following example configuration as a template to deploy provider +networks in your environment. + +Controller node +--------------- + +#. Install the Networking service components that provide the + ``neutron-server`` service and ML2 plug-in. + +#. In the ``neutron.conf`` file: + + * Configure common options: + + .. include:: shared/deploy-config-neutron-common.txt + + * Disable service plug-ins because provider networks do not require + any. However, this breaks portions of the dashboard that manage + the Networking service. See the + `Installation Guide `__ for more + information. + + .. code-block:: ini + + [DEFAULT] + service_plugins = + + * Enable two DHCP agents per network so both compute nodes can + provide DHCP service provider networks. + + .. code-block:: ini + + [DEFAULT] + dhcp_agents_per_network = 2 + + * If necessary, :ref:`configure MTU `. + +#. In the ``ml2_conf.ini`` file: + + * Configure drivers and network types: + + .. 
code-block:: ini + + [ml2] + type_drivers = flat,vlan + tenant_network_types = + mechanism_drivers = openvswitch + extension_drivers = port_security + + * Configure network mappings: + + .. code-block:: ini + + [ml2_type_flat] + flat_networks = provider + + [ml2_type_vlan] + network_vlan_ranges = provider + + .. note:: + + The ``tenant_network_types`` option contains no value because the + architecture does not support self-service networks. + + .. note:: + + The ``provider`` value in the ``network_vlan_ranges`` option lacks VLAN + ID ranges to support use of arbitrary VLAN IDs. + + * Configure the security group driver: + + .. code-block:: ini + + [securitygroup] + firewall_driver = iptables_hybrid + +#. Populate the database. + + .. code-block:: console + + # su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \ + --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron + +#. Start the following services: + + * Server + +Compute nodes +------------- + +#. Install the Networking service OVS layer-2 agent, DHCP agent, and + metadata agent. + +#. Install OVS. + +#. In the ``neutron.conf`` file, configure common options: + + .. include:: shared/deploy-config-neutron-common.txt + +#. In the ``openvswitch_agent.ini`` file, configure the OVS agent: + + .. code-block:: ini + + [ovs] + bridge_mappings = provider:br-provider + + [securitygroup] + firewall_driver = iptables_hybrid + +#. In the ``dhcp_agent.ini`` file, configure the DHCP agent: + + .. code-block:: ini + + [DEFAULT] + interface_driver = openvswitch + enable_isolated_metadata = True + +#. In the ``metadata_agent.ini`` file, configure the metadata agent: + + .. code-block:: ini + + [DEFAULT] + nova_metadata_ip = controller + metadata_proxy_shared_secret = METADATA_SECRET + + The value of ``METADATA_SECRET`` must match the value of the same option + in the ``[neutron]`` section of the ``nova.conf`` file. + +#. Start the following services: + + * OVS + +#. Create the OVS provider bridge ``br-provider``: + + .. code-block:: console + + $ ovs-vsctl add-br br-provider + +#. Add the provider network interface as a port on the OVS provider + bridge ``br-provider``: + + .. code-block:: console + + $ ovs-vsctl add-port br-provider PROVIDER_INTERFACE + + Replace ``PROVIDER_INTERFACE`` with the name of the underlying interface + that handles provider networks. For example, ``eth1``. + +#. Start the following services: + + * OVS agent + * DHCP agent + * Metadata agent + +Verify service operation +------------------------ + +#. Source the administrative project credentials. +#. Verify presence and operation of the agents: + + .. 
code-block:: console + + $ neutron agent-list + +--------------------------------------+--------------------+----------+-------------------+-------+----------------+---------------------------+ + | id | agent_type | host | availability_zone | alive | admin_state_up | binary | + +--------------------------------------+--------------------+----------+-------------------+-------+----------------+---------------------------+ + | 1236bbcb-e0ba-48a9-80fc-81202ca4fa51 | Metadata agent | compute2 | | :-) | True | neutron-metadata-agent | + | 457d6898-b373-4bb3-b41f-59345dcfb5c5 | Open vSwitch agent | compute2 | | :-) | True | neutron-openvswitch-agent | + | 71f15e84-bc47-4c2a-b9fb-317840b2d753 | DHCP agent | compute2 | nova | :-) | True | neutron-dhcp-agent | + | a6c69690-e7f7-4e56-9831-1282753e5007 | Metadata agent | compute1 | | :-) | True | neutron-metadata-agent | + | af11f22f-a9f4-404f-9fd8-cd7ad55c0f68 | DHCP agent | compute1 | nova | :-) | True | neutron-dhcp-agent | + | bcfc977b-ec0e-4ba9-be62-9489b4b0e6f1 | Open vSwitch agent | compute1 | | :-) | True | neutron-openvswitch-agent | + +--------------------------------------+--------------------+----------+-------------------+-------+----------------+---------------------------+ + +Create initial networks +----------------------- + +.. include:: shared/deploy-provider-initialnetworks.txt + +Verify network operation +------------------------ + +.. include:: shared/deploy-provider-verifynetworkoperation.txt + +Network traffic flow +~~~~~~~~~~~~~~~~~~~~ + +.. include:: shared/deploy-provider-networktrafficflow.txt + +North-south +----------- + +* The instance resides on compute node 1 and uses provider network 1. +* The instance sends a packet to a host on the Internet. + +The following steps involve compute node 1. + +#. The instance interface (1) forwards the packet to the security group + bridge instance port (2) via ``veth`` pair. +#. Security group rules (3) on the security group bridge handle firewalling + and connection tracking for the packet. +#. The security group bridge OVS port (4) forwards the packet to the OVS + integration bridge security group port (5) via ``veth`` pair. +#. The OVS integration bridge adds an internal VLAN tag to the packet. +#. The OVS integration bridge ``int-br-provider`` patch port (6) forwards + the packet to the OVS provider bridge ``phy-br-provider`` patch port (7). +#. The OVS provider bridge swaps the internal VLAN tag with actual VLAN tag + 101. +#. The OVS provider bridge provider network port (8) forwards the packet to + the physical network interface (9). +#. The physical network interface forwards the packet to the physical + network infrastructure switch (10). + +The following steps involve the physical network infrastructure: + +#. The switch removes VLAN tag 101 from the packet and forwards it to the + router (11). +#. The router routes the packet from the provider network (12) to the + external network (13) and forwards the packet to the switch (14). +#. The switch forwards the packet to the external network (15). +#. The external network (16) receives the packet. + +.. image:: figures/deploy-ovs-provider-flowns1.png + :alt: Provider networks using Open vSwitch - network traffic flow - north/south + +.. note:: + + Return traffic follows similar steps in reverse. + +East-west scenario 1: Instances on the same network +--------------------------------------------------- + +Instances on the same network communicate directly between compute nodes +containing those instances. 
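+
+One way to observe this direct layer-2 path (an optional check that is not
+part of the original example) is to inspect the MAC learning table of the
+provider bridge on either compute node. After traffic flows between the
+instances, the table should contain the MAC address of the remote instance.
+The bridge name follows the example configuration above.
+
+.. code-block:: console
+
+   # ovs-appctl fdb/show br-provider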
+ +* Instance 1 resides on compute node 1 and uses provider network 1. +* Instance 2 resides on compute node 2 and uses provider network 1. +* Instance 1 sends a packet to instance 2. + +The following steps involve compute node 1: + +#. The instance 1 interface (1) forwards the packet to the security group + bridge instance port (2) via ``veth`` pair. +#. Security group rules (3) on the security group bridge handle firewalling + and connection tracking for the packet. +#. The security group bridge OVS port (4) forwards the packet to the OVS + integration bridge security group port (5) via ``veth`` pair. +#. The OVS integration bridge adds an internal VLAN tag to the packet. +#. The OVS integration bridge ``int-br-provider`` patch port (6) forwards + the packet to the OVS provider bridge ``phy-br-provider`` patch port (7). +#. The OVS provider bridge swaps the internal VLAN tag with actual VLAN tag + 101. +#. The OVS provider bridge provider network port (8) forwards the packet to + the physical network interface (9). +#. The physical network interface forwards the packet to the physical + network infrastructure switch (10). + +The following steps involve the physical network infrastructure: + +#. The switch forwards the packet from compute node 1 to compute node 2 (11). + +The following steps involve compute node 2: + +#. The physical network interface (12) forwards the packet to the OVS + provider bridge provider network port (13). +#. The OVS provider bridge ``phy-br-provider`` patch port (14) forwards the + packet to the OVS integration bridge ``int-br-provider`` patch port (15). +#. The OVS integration bridge swaps the actual VLAN tag 101 with the internal + VLAN tag. +#. The OVS integration bridge security group port (16) forwards the packet + to the security group bridge OVS port (17). +#. Security group rules (18) on the security group bridge handle firewalling + and connection tracking for the packet. +#. The security group bridge instance port (19) forwards the packet to the + instance 2 interface (20) via ``veth`` pair. + +.. image:: figures/deploy-ovs-provider-flowew1.png + :alt: Provider networks using Open vSwitch - network traffic flow - east/west scenario 1 + +.. note:: + + Return traffic follows similar steps in reverse. + +East-west scenario 2: Instances on different networks +----------------------------------------------------- + +Instances communicate via router on the physical network infrastructure. + +* Instance 1 resides on compute node 1 and uses provider network 1. +* Instance 2 resides on compute node 1 and uses provider network 2. +* Instance 1 sends a packet to instance 2. + +.. note:: + + Both instances reside on the same compute node to illustrate how VLAN + tagging enables multiple logical layer-2 networks to use the same + physical layer-2 network. + +The following steps involve the compute node: + +#. The instance 1 interface (1) forwards the packet to the security group + bridge instance port (2) via ``veth`` pair. +#. Security group rules (3) on the security group bridge handle firewalling + and connection tracking for the packet. +#. The security group bridge OVS port (4) forwards the packet to the OVS + integration bridge security group port (5) via ``veth`` pair. +#. The OVS integration bridge adds an internal VLAN tag to the packet. +#. The OVS integration bridge ``int-br-provider`` patch port (6) forwards + the packet to the OVS provider bridge ``phy-br-provider`` patch port (7). +#. 
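(Optional observation, not part of the original flow description) The
+   ``int-br-provider`` and ``phy-br-provider`` patch ports referenced in the
+   previous step can be listed on the compute node to confirm how the
+   integration and provider bridges connect; the output layout varies with
+   the Open vSwitch release.
+
+   .. code-block:: console
+
+      # ovs-vsctl show
+
+#. 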
The OVS provider bridge swaps the internal VLAN tag with actual VLAN tag
+   101.
+#. The OVS provider bridge provider network port (8) forwards the packet to
+   the physical network interface (9).
+#. The physical network interface forwards the packet to the physical
+   network infrastructure switch (10).
+
+The following steps involve the physical network infrastructure:
+
+#. The switch removes VLAN tag 101 from the packet and forwards it to the
+   router (11).
+#. The router routes the packet from provider network 1 (12) to provider
+   network 2 (13).
+#. The router forwards the packet to the switch (14).
+#. The switch adds VLAN tag 102 to the packet and forwards it to compute
+   node 1 (15).
+
+The following steps involve the compute node:
+
+#. The physical network interface (16) forwards the packet to the OVS
+   provider bridge provider network port (17).
+#. The OVS provider bridge ``phy-br-provider`` patch port (18) forwards the
+   packet to the OVS integration bridge ``int-br-provider`` patch port (19).
+#. The OVS integration bridge swaps the actual VLAN tag 102 with the internal
+   VLAN tag.
+#. The OVS integration bridge security group port (20) removes the internal
+   VLAN tag and forwards the packet to the security group bridge OVS port
+   (21).
+#. Security group rules (22) on the security group bridge handle firewalling
+   and connection tracking for the packet.
+#. The security group bridge instance port (23) forwards the packet to the
+   instance 2 interface (24) via ``veth`` pair.
+
+.. image:: figures/deploy-ovs-provider-flowew2.png
+   :alt: Provider networks using Open vSwitch - network traffic flow - east/west scenario 2
+
+.. note::
+
+   Return traffic follows similar steps in reverse.
diff --git a/doc/networking-guide/source/deploy-ovs-selfservice.rst b/doc/networking-guide/source/deploy-ovs-selfservice.rst
new file mode 100644
index 0000000000..2d08b25972
--- /dev/null
+++ b/doc/networking-guide/source/deploy-ovs-selfservice.rst
@@ -0,0 +1,507 @@
+.. _deploy-ovs-selfservice:
+
+===================================
+Open vSwitch: Self-service networks
+===================================
+
+This architecture example augments :ref:`deploy-ovs-provider` to support
+a nearly limitless quantity of entirely virtual networks. Although the
+Networking service supports VLAN self-service networks, this example
+focuses on VXLAN self-service networks. For more information on
+self-service networks, see :ref:`intro-os-networking-selfservice`.
+
+Prerequisites
+~~~~~~~~~~~~~
+
+Add one network node with the following components:
+
+* Three network interfaces: management, provider, and overlay.
+* OpenStack Networking Open vSwitch (OVS) layer-2 agent, layer-3 agent, and
+  any dependencies including OVS.
+
+Modify the compute nodes with the following components:
+
+* Add one network interface: overlay.
+
+.. note::
+
+   You can keep the DHCP and metadata agents on each compute node or
+   move them to the network node.
+
+Architecture
+~~~~~~~~~~~~
+
+.. image:: figures/deploy-ovs-selfservice-overview.png
+   :alt: Self-service networks using OVS - overview
+
+The following figure shows components and connectivity for one self-service
+network and one untagged (flat) provider network. In this particular case, the
+instance resides on the same compute node as the DHCP agent for the network.
+If the DHCP agent resides on another compute node, the latter only contains
+a DHCP namespace with a port on the OVS integration bridge.
+
+.. image:: figures/deploy-ovs-selfservice-compconn1.png
+   :alt: Self-service networks using OVS - components and connectivity - one network
+
+Example configuration
+~~~~~~~~~~~~~~~~~~~~~
+
+Use the following example configuration as a template to add support for
+self-service networks to an existing operational environment that supports
+provider networks.
+
+Controller node
+---------------
+
+#. In the ``neutron.conf`` file:
+
+   * Enable routing and allow overlapping IP address ranges.
+
+     .. code-block:: ini
+
+        [DEFAULT]
+        service_plugins = router
+        allow_overlapping_ips = True
+
+#. In the ``ml2_conf.ini`` file:
+
+   * Add ``vxlan`` to type drivers and project network types.
+
+     .. code-block:: ini
+
+        [ml2]
+        type_drivers = flat,vlan,vxlan
+        tenant_network_types = vxlan
+
+   * Enable the layer-2 population mechanism driver.
+
+     .. code-block:: ini
+
+        [ml2]
+        mechanism_drivers = openvswitch,l2population
+
+   * Configure the VXLAN network ID (VNI) range.
+
+     .. code-block:: ini
+
+        [ml2_type_vxlan]
+        vni_ranges = VNI_START:VNI_END
+
+     Replace ``VNI_START`` and ``VNI_END`` with appropriate numerical
+     values.
+
+#. Restart the following services:
+
+   * Server
+
+Network node
+------------
+
+#. Install the Networking service OVS layer-2 agent and layer-3 agent.
+
+#. Install OVS.
+
+#. In the ``neutron.conf`` file, configure common options:
+
+   .. include:: shared/deploy-config-neutron-common.txt
+
+#. Start the following services:
+
+   * OVS
+
+#. Create the OVS provider bridge ``br-provider``:
+
+   .. code-block:: console
+
+      $ ovs-vsctl add-br br-provider
+
+#. In the ``openvswitch_agent.ini`` file, configure the layer-2 agent.
+
+   .. code-block:: ini
+
+      [ovs]
+      bridge_mappings = provider:br-provider
+      local_ip = OVERLAY_INTERFACE_IP_ADDRESS
+
+      [agent]
+      tunnel_types = vxlan
+      l2_population = true
+
+      [securitygroup]
+      firewall_driver = iptables_hybrid
+
+   Replace ``OVERLAY_INTERFACE_IP_ADDRESS`` with the IP address of the
+   interface that handles VXLAN overlays for self-service networks.
+
+#. In the ``l3_agent.ini`` file, configure the layer-3 agent.
+
+   .. code-block:: ini
+
+      [DEFAULT]
+      interface_driver = openvswitch
+      external_network_bridge =
+
+   .. note::
+
+      The ``external_network_bridge`` option intentionally contains
+      no value.
+
+#. Start the following services:
+
+   * Open vSwitch agent
+   * Layer-3 agent
+
+Compute nodes
+-------------
+
+#. In the ``openvswitch_agent.ini`` file, enable VXLAN support including
+   layer-2 population.
+
+   .. code-block:: ini
+
+      [ovs]
+      local_ip = OVERLAY_INTERFACE_IP_ADDRESS
+
+      [agent]
+      tunnel_types = vxlan
+      l2_population = true
+
+   Replace ``OVERLAY_INTERFACE_IP_ADDRESS`` with the IP address of the
+   interface that handles VXLAN overlays for self-service networks.
+
+#. Restart the following services:
+
+   * Open vSwitch agent
+
+Verify service operation
+------------------------
+
+#. Source the administrative project credentials.
+#. Verify presence and operation of the agents.
+
+   .. 
code-block:: console + + $ neutron agent-list + +--------------------------------------+--------------------+----------+-------------------+-------+----------------+---------------------------+ + | id | agent_type | host | availability_zone | alive | admin_state_up | binary | + +--------------------------------------+--------------------+----------+-------------------+-------+----------------+---------------------------+ + | 1236bbcb-e0ba-48a9-80fc-81202ca4fa51 | Metadata agent | compute2 | | :-) | True | neutron-metadata-agent | + | 457d6898-b373-4bb3-b41f-59345dcfb5c5 | Open vSwitch agent | compute2 | | :-) | True | neutron-openvswitch-agent | + | 71f15e84-bc47-4c2a-b9fb-317840b2d753 | DHCP agent | compute2 | nova | :-) | True | neutron-dhcp-agent | + | 8805b962-de95-4e40-bdc2-7a0add7521e8 | L3 agent | network1 | nova | :-) | True | neutron-l3-agent | + | a33cac5a-0266-48f6-9cac-4cef4f8b0358 | Open vSwitch agent | network1 | | :-) | True | neutron-openvswitch-agent | + | a6c69690-e7f7-4e56-9831-1282753e5007 | Metadata agent | compute1 | | :-) | True | neutron-metadata-agent | + | af11f22f-a9f4-404f-9fd8-cd7ad55c0f68 | DHCP agent | compute1 | nova | :-) | True | neutron-dhcp-agent | + | bcfc977b-ec0e-4ba9-be62-9489b4b0e6f1 | Open vSwitch agent | compute1 | | :-) | True | neutron-openvswitch-agent | + +--------------------------------------+--------------------+----------+-------------------+-------+----------------+---------------------------+ + +Create initial networks +----------------------- + +.. include:: shared/deploy-selfservice-initialnetworks.txt + +Verify network operation +------------------------ + +.. include:: shared/deploy-selfservice-verifynetworkoperation.txt + +.. _deploy-ovs-selfservice-networktrafficflow: + +Network traffic flow +~~~~~~~~~~~~~~~~~~~~ + +.. include:: shared/deploy-selfservice-networktrafficflow.txt + +.. _deploy-ovs-selfservice-networktrafficflow-ns1: + +North-south scenario 1: Instance with a fixed IP address +-------------------------------------------------------- + +For instances with a fixed IPv4 address, the network node performs SNAT +on north-south traffic passing from self-service to external networks +such as the Internet. For instances with a fixed IPv6 address, the network +node performs conventional routing of traffic between self-service and +external networks. + +* The instance resides on compute node 1 and uses self-service network 1. +* The instance sends a packet to a host on the Internet. + +The following steps involve compute node 1: + +#. The instance interface (1) forwards the packet to the security group + bridge instance port (2) via ``veth`` pair. +#. Security group rules (3) on the security group bridge handle firewalling + and connection tracking for the packet. +#. The security group bridge OVS port (4) forwards the packet to the OVS + integration bridge security group port (5) via ``veth`` pair. +#. The OVS integration bridge adds an internal VLAN tag to the packet. +#. The OVS integration bridge exchanges the internal VLAN tag for an internal + tunnel ID. +#. The OVS integration bridge patch port (6) forwards the packet to the + OVS tunnel bridge patch port (7). +#. The OVS tunnel bridge (8) wraps the packet using VNI 101. +#. The underlying physical interface (9) for overlay networks forwards + the packet to the network node via the overlay network (10). + +The following steps involve the network node: + +#. The underlying physical interface (11) for overlay networks forwards + the packet to the OVS tunnel bridge (12). +#. 
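(Optional observation, not part of the original flow description) You can
+   watch the VXLAN-encapsulated packets arrive on the network node by
+   capturing traffic on its overlay interface. Replace ``OVERLAY_INTERFACE``
+   with the interface name used for the overlay network; VXLAN normally uses
+   UDP port 4789.
+
+   .. code-block:: console
+
+      # tcpdump -n -i OVERLAY_INTERFACE udp port 4789
+
+#. 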
The OVS tunnel bridge unwraps the packet and adds an internal tunnel ID + to it. +#. The OVS tunnel bridge exchanges the internal tunnel ID for an internal + VLAN tag. +#. The OVS tunnel bridge patch port (13) forwards the packet to the OVS + integration bridge patch port (14). +#. The OVS integration bridge port for the self-service network (15) + removes the internal VLAN tag and forwards the packet to the self-service + network interface (16) in the router namespace. + + * For IPv4, the router performs SNAT on the packet which changes the + source IP address to the router IP address on the provider network + and sends it to the gateway IP address on the provider network via + the gateway interface on the provider network (17). + * For IPv6, the router sends the packet to the next-hop IP address, + typically the gateway IP address on the provider network, via the + provider gateway interface (17). + +#. The router forwards the packet to the OVS integration bridge port for + the provider network (18). +#. The OVS integration bridge adds the internal VLAN tag to the packet. +#. The OVS integration bridge ``int-br-provider`` patch port (19) forwards + the packet to the OVS provider bridge ``phy-br-provider`` patch port (20). +#. The OVS provider bridge swaps the internal VLAN tag with actual VLAN tag + 101. +#. The OVS provider bridge provider network port (21) forwards the packet to + the physical network interface (22). +#. The physical network interface forwards the packet to the Internet via + physical network infrastructure (23). + +.. note:: + + Return traffic follows similar steps in reverse. However, without a + floating IPv4 address, hosts on the provider or external networks cannot + originate connections to instances on the self-service network. + +.. image:: figures/deploy-ovs-selfservice-flowns1.png + :alt: Self-service networks using Open vSwitch - network traffic flow - north/south scenario 1 + +North-south scenario 2: Instance with a floating IPv4 address +------------------------------------------------------------- + +For instances with a floating IPv4 address, the network node performs SNAT +on north-south traffic passing from the instance to external networks +such as the Internet and DNAT on north-south traffic passing from external +networks to the instance. Floating IP addresses and NAT do not apply to IPv6. +Thus, the network node routes IPv6 traffic in this scenario. + +* The instance resides on compute node 1 and uses self-service network 1. +* A host on the Internet sends a packet to the instance. + +The following steps involve the network node: + +#. The physical network infrastructure (1) forwards the packet to the + provider physical network interface (2). +#. The provider physical network interface forwards the packet to the + OVS provider bridge provider network port (3). +#. The OVS provider bridge swaps actual VLAN tag 101 with the internal + VLAN tag. +#. The OVS provider bridge ``phy-br-provider`` port (4) forwards the + packet to the OVS integration bridge ``int-br-provider`` port (5). +#. The OVS integration bridge port for the provider network (6) removes + the internal VLAN tag and forwards the packet to the provider network + interface (6) in the router namespace. + + * For IPv4, the router performs DNAT on the packet which changes the + destination IP address to the instance IP address on the self-service + network and sends it to the gateway IP address on the self-service + network via the self-service interface (7). 
+   * For IPv6, the router sends the packet to the next-hop IP address,
+     typically the gateway IP address on the self-service network, via
+     the self-service interface (8).
+
+#. The router forwards the packet to the OVS integration bridge port for
+   the self-service network (9).
+#. The OVS integration bridge adds an internal VLAN tag to the packet.
+#. The OVS integration bridge exchanges the internal VLAN tag for an internal
+   tunnel ID.
+#. The OVS integration bridge ``patch-tun`` patch port (10) forwards the
+   packet to the OVS tunnel bridge ``patch-int`` patch port (11).
+#. The OVS tunnel bridge (12) wraps the packet using VNI 101.
+#. The underlying physical interface (13) for overlay networks forwards
+   the packet to the compute node via the overlay network (14).
+
+The following steps involve the compute node:
+
+#. The underlying physical interface (15) for overlay networks forwards
+   the packet to the OVS tunnel bridge (16).
+#. The OVS tunnel bridge unwraps the packet and adds an internal tunnel ID
+   to it.
+#. The OVS tunnel bridge exchanges the internal tunnel ID for an internal
+   VLAN tag.
+#. The OVS tunnel bridge ``patch-int`` patch port (17) forwards the packet
+   to the OVS integration bridge ``patch-tun`` patch port (18).
+#. The OVS integration bridge removes the internal VLAN tag from the packet.
+#. The OVS integration bridge security group port (19) forwards the packet
+   to the security group bridge OVS port (20) via ``veth`` pair.
+#. Security group rules (21) on the security group bridge handle firewalling
+   and connection tracking for the packet.
+#. The security group bridge instance port (22) forwards the packet to the
+   instance interface (23) via ``veth`` pair.
+
+.. image:: figures/deploy-ovs-selfservice-flowns2.png
+   :alt: Self-service networks using Open vSwitch - network traffic flow - north/south scenario 2
+
+.. note::
+
+   Egress instance traffic flows similarly to north-south scenario 1, except
+   SNAT changes the source IP address of the packet to the floating IPv4
+   address rather than the router IP address on the provider network.
+
+East-west scenario 1: Instances on the same network
+---------------------------------------------------
+
+Instances with a fixed IPv4/IPv6 address or floating IPv4 address on the
+same network communicate directly between compute nodes containing those
+instances.
+
+By default, the VXLAN protocol lacks knowledge of target location
+and uses multicast to discover it. After discovery, it stores the
+location in the local forwarding database. In large deployments,
+the discovery process can generate a significant amount of network
+traffic that all nodes must process. To eliminate the latter and generally
+increase efficiency, the Networking service includes the layer-2
+population mechanism driver that automatically populates the
+forwarding database for VXLAN interfaces. The example configuration
+enables this driver. For more information, see :ref:`config-plugin-ml2`.
+An optional command for inspecting the entries that this driver programs
+appears within the walkthrough below.
+
+* Instance 1 resides on compute node 1 and uses self-service network 1.
+* Instance 2 resides on compute node 2 and uses self-service network 1.
+* Instance 1 sends a packet to instance 2.
+
+The following steps involve compute node 1:
+
+#. The instance 1 interface (1) forwards the packet to the security group
+   bridge instance port (2) via ``veth`` pair.
+#. Security group rules (3) on the security group bridge handle firewalling
+   and connection tracking for the packet.
+#. 
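(Optional observation, related to the layer-2 population driver described
+   above rather than to this particular hop) You can inspect the flow table
+   of the tunnel bridge on a compute node to see the forwarding entries that
+   the driver pre-populates for known remote instances. ``br-tun`` is the
+   default tunnel bridge name used by the OVS agent; the flow contents vary
+   by deployment.
+
+   .. code-block:: console
+
+      # ovs-ofctl dump-flows br-tun
+
+#. 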
The security group bridge OVS port (4) forwards the packet to the OVS + integration bridge security group port (5) via ``veth`` pair. +#. The OVS integration bridge adds an internal VLAN tag to the packet. +#. The OVS integration bridge exchanges the internal VLAN tag for an internal + tunnel ID. +#. The OVS integration bridge patch port (6) forwards the packet to the + OVS tunnel bridge patch port (7). +#. The OVS tunnel bridge (8) wraps the packet using VNI 101. +#. The underlying physical interface (9) for overlay networks forwards + the packet to compute node 2 via the overlay network (10). + +The following steps involve compute node 2: + +#. The underlying physical interface (11) for overlay networks forwards + the packet to the OVS tunnel bridge (12). +#. The OVS tunnel bridge unwraps the packet and adds an internal tunnel ID + to it. +#. The OVS tunnel bridge exchanges the internal tunnel ID for an internal + VLAN tag. +#. The OVS tunnel bridge ``patch-int`` patch port (13) forwards the packet + to the OVS integration bridge ``patch-tun`` patch port (14). +#. The OVS integration bridge removes the internal VLAN tag from the packet. +#. The OVS integration bridge security group port (15) forwards the packet + to the security group bridge OVS port (16) via ``veth`` pair. +#. Security group rules (17) on the security group bridge handle firewalling + and connection tracking for the packet. +#. The security group bridge instance port (18) forwards the packet to the + instance 2 interface (19) via ``veth`` pair. + +.. image:: figures/deploy-ovs-selfservice-flowew1.png + :alt: Self-service networks using Open vSwitch - network traffic flow - east/west scenario 1 + +.. note:: + + Return traffic follows similar steps in reverse. + +East-west scenario 2: Instances on different networks +----------------------------------------------------- + +Instances using a fixed IPv4/IPv6 address or floating IPv4 address communicate +via router on the network node. The self-service networks must reside on the +same router. + +* Instance 1 resides on compute node 1 and uses self-service network 1. +* Instance 2 resides on compute node 1 and uses self-service network 2. +* Instance 1 sends a packet to instance 2. + +.. note:: + + Both instances reside on the same compute node to illustrate how VXLAN + enables multiple overlays to use the same layer-3 network. + +The following steps involve the compute node: + +#. The instance interface (1) forwards the packet to the security group + bridge instance port (2) via ``veth`` pair. +#. Security group rules (3) on the security group bridge handle firewalling + and connection tracking for the packet. +#. The security group bridge OVS port (4) forwards the packet to the OVS + integration bridge security group port (5) via ``veth`` pair. +#. The OVS integration bridge adds an internal VLAN tag to the packet. +#. The OVS integration bridge exchanges the internal VLAN tag for an internal + tunnel ID. +#. The OVS integration bridge ``patch-tun`` patch port (6) forwards the + packet to the OVS tunnel bridge ``patch-int`` patch port (7). +#. The OVS tunnel bridge (8) wraps the packet using VNI 101. +#. The underlying physical interface (9) for overlay networks forwards + the packet to the network node via the overlay network (10). + +The following steps involve the network node: + +#. The underlying physical interface (11) for overlay networks forwards + the packet to the OVS tunnel bridge (12). +#. 
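(Optional observation, not part of the original flow description) The
+   routing between the two self-service networks described in the following
+   steps happens inside the ``qrouter`` namespace on the network node. You
+   can list its interfaces, one per attached subnet, with the command below;
+   replace ``ROUTER_ID`` with the UUID of the router.
+
+   .. code-block:: console
+
+      # ip netns exec qrouter-ROUTER_ID ip addr show
+
+#. 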
The OVS tunnel bridge unwraps the packet and adds an internal tunnel ID + to it. +#. The OVS tunnel bridge exchanges the internal tunnel ID for an internal + VLAN tag. +#. The OVS tunnel bridge ``patch-int`` patch port (13) forwards the packet to + the OVS integration bridge ``patch-tun`` patch port (14). +#. The OVS integration bridge port for self-service network 1 (15) + removes the internal VLAN tag and forwards the packet to the self-service + network 1 interface (16) in the router namespace. +#. The router sends the packet to the next-hop IP address, typically the + gateway IP address on self-service network 2, via the self-service + network 2 interface (17). +#. The router forwards the packet to the OVS integration bridge port for + self-service network 2 (18). +#. The OVS integration bridge adds the internal VLAN tag to the packet. +#. The OVS integration bridge exchanges the internal VLAN tag for an internal + tunnel ID. +#. The OVS integration bridge ``patch-tun`` patch port (19) forwards the + packet to the OVS tunnel bridge ``patch-int`` patch port (20). +#. The OVS tunnel bridge (21) wraps the packet using VNI 102. +#. The underlying physical interface (22) for overlay networks forwards + the packet to the compute node via the overlay network (23). + +The following steps involve the compute node: + +#. The underlying physical interface (24) for overlay networks forwards + the packet to the OVS tunnel bridge (25). +#. The OVS tunnel bridge unwraps the packet and adds an internal tunnel + ID to it. +#. The OVS tunnel bridge exchanges the internal tunnel ID for an internal + VLAN tag. +#. The OVS tunnel bridge ``patch-int`` patch port (26) forwards the packet + to the OVS integration bridge ``patch-tun`` patch port (27). +#. The OVS integration bridge removes the internal VLAN tag from the packet. +#. The OVS integration bridge security group port (28) forwards the packet + to the security group bridge OVS port (29) via ``veth`` pair. +#. Security group rules (30) on the security group bridge handle firewalling + and connection tracking for the packet. +#. The security group bridge instance port (31) forwards the packet to the + instance interface (32) via ``veth`` pair. + +.. note:: + + Return traffic follows similar steps in reverse. + +.. image:: figures/deploy-ovs-selfservice-flowew2.png + :alt: Self-service networks using Open vSwitch - network traffic flow - east/west scenario 2 diff --git a/doc/networking-guide/source/deploy-ovs.rst b/doc/networking-guide/source/deploy-ovs.rst new file mode 100644 index 0000000000..00677e606c --- /dev/null +++ b/doc/networking-guide/source/deploy-ovs.rst @@ -0,0 +1,21 @@ +.. _deploy-ovs: + +============================= +Open vSwitch mechanism driver +============================= + +The Open vSwitch (OVS) mechanism driver uses a combination of OVS and Linux +bridges as interconnection devices. However, optionally enabling the OVS +native implementation of security groups removes the dependency on Linux +bridges. + +We recommend using Open vSwitch version 2.4 or higher. Optional features +may require a higher minimum version. + +.. toctree:: + :maxdepth: 2 + + deploy-ovs-provider + deploy-ovs-selfservice + deploy-ovs-ha-vrrp + deploy-ovs-ha-dvr diff --git a/doc/networking-guide/source/deploy.rst b/doc/networking-guide/source/deploy.rst index 4961acb5c9..7a17865090 100644 --- a/doc/networking-guide/source/deploy.rst +++ b/doc/networking-guide/source/deploy.rst @@ -1,17 +1,137 @@ -.. _deployment-scenarios: +.. 
_deploy: -==================== -Deployment scenarios -==================== +=================== +Deployment examples +=================== + +The following deployment examples provide building blocks of increasing +architectural complexity using the Networking service reference architecture +which implements the Modular Layer 2 (ML2) plug-in and either the Open +vSwitch (OVS) or Linux bridge mechanism drivers. Both mechanism drivers support +the same basic features such as provider networks, self-service networks, +and routers. However, more complex features often require a particular +mechanism driver. Thus, you should consider the requirements (or goals) of +your cloud before choosing a mechanism driver. + +After choosing a :ref:`mechanism driver `, the +deployment examples generally include the following building blocks: + +#. Provider (public/external) networks using IPv4 and IPv6 + +#. Self-service (project/private/internal) networks including routers using + IPv4 and IPv6 + +#. High-availability features + +#. Other features such as BGP dynamic routing + +Prerequisites +~~~~~~~~~~~~~ + +Prerequisites, typically hardware requirements, generally increase with each +building block. Each building block depends on proper deployment and operation +of prior building blocks. For example, the first building block (provider +networks) only requires one controller and two compute nodes, the second +building block (self-service networks) adds a network node, and the +high-availability building blocks typically add a second network node for a +total of five nodes. Each building block could also require additional +infrastructure or changes to existing infrastructure such as networks. + +For basic configuration of prerequisites, see the +`Installation Guide `_ for your OpenStack release. + +Nodes +----- + +The deployment examples refer one or more of the following nodes: + +* Controller: Contains control plane components of OpenStack services + and their dependencies. + + * Two network interfaces: management and provider. + * Operational SQL server with databases necessary for each OpenStack + service. + * Operational message queue service. + * Operational OpenStack Identity (keystone) service. + * Operational OpenStack Image Service (glance). + * Operational management components of the OpenStack Compute (nova) service + with appropriate configuration to use the Networking service. + * OpenStack Networking (neutron) server service and ML2 plug-in. + +* Network: Contains the OpenStack Networking service layer-3 (routing) + component. High availability options may include additional components. + + * Three network interfaces: management, overlay, and provider. + * Openstack Networking layer-2 (switching) agent, layer-3 agent, and any + dependencies. + +* Compute: Contains the hypervisor component of the OpenStack Compute service + and the OpenStack Networking layer-2, DHCP, and metadata components. + High-availability options may include additional components. + + * Two network interfaces: management and provider. + * Operational hypervisor components of the OpenStack Compute (nova) service + with appropriate configuration to use the Networking service. + * OpenStack Networking layer-2 agent, DHCP agent, metadata agent, and any + dependencies. + +Each building block defines the quantity and types of nodes including the +components on each node. + +.. note:: + + You can virtualize these nodes for demonstration, training, or + proof-of-concept purposes. 
+Networks and network interfaces
+-------------------------------
+
+The deployment examples refer to one or more of the following networks
+and network interfaces:
+
+* Management: Handles API requests from clients and control plane traffic for
+  OpenStack services including their dependencies.
+* Overlay: Handles self-service networks using an overlay protocol such as
+  VXLAN or GRE.
+* Provider: Connects virtual and physical networks at layer-2. Typically
+  uses physical network infrastructure for switching/routing traffic to
+  external networks such as the Internet.
+
+.. note::
+
+   For best performance, 10+ Gbps physical network infrastructure should
+   support jumbo frames.
+
+For illustration purposes, the configuration examples typically reference
+the following IP address ranges:
+
+* Management network: 10.0.0.0/24
+* Overlay (tunnel) network: 10.0.1.0/24
+* Provider network 1:
+
+  * IPv4: 203.0.113.0/24
+  * IPv6: fd00:203:0:113::/64
+
+* Provider network 2:
+
+  * IPv4: 192.0.2.0/24
+  * IPv6: fd00:192:0:2::/64
+
+* Self-service networks:
+
+  * IPv4: 192.168.0.0/16 in /24 segments
+  * IPv6: fd00:192:168::/48 in /64 segments
+
+You may change these ranges to work with your particular network
+infrastructure.
+
+.. _deploy-mechanism-drivers:
+
+Mechanism drivers
+~~~~~~~~~~~~~~~~~

 .. toctree::
-   :maxdepth: 2
+   :maxdepth: 1

-   scenario-classic-ovs
-   scenario-classic-lb
-   scenario-classic-mt
-   scenario-dvr-ovs
-   scenario-l3ha-ovs
-   scenario-l3ha-lb
-   scenario-provider-ovs
-   scenario-provider-lb
+   deploy-lb
+   deploy-ovs
diff --git a/doc/networking-guide/source/figures/scenario-classic-mt-compute1.png b/doc/networking-guide/source/figures/config-macvtap-compute1.png
similarity index 100%
rename from doc/networking-guide/source/figures/scenario-classic-mt-compute1.png
rename to doc/networking-guide/source/figures/config-macvtap-compute1.png
diff --git a/doc/networking-guide/source/figures/scenario-classic-mt-compute2.png b/doc/networking-guide/source/figures/config-macvtap-compute2.png
similarity index 100%
rename from doc/networking-guide/source/figures/scenario-classic-mt-compute2.png
rename to doc/networking-guide/source/figures/config-macvtap-compute2.png
diff --git a/doc/networking-guide/source/figures/deploy-lb-ha-vrrp-compconn1.graffle b/doc/networking-guide/source/figures/deploy-lb-ha-vrrp-compconn1.graffle
new file mode 100644
index 0000000000..78f379cb0f
Binary files /dev/null and b/doc/networking-guide/source/figures/deploy-lb-ha-vrrp-compconn1.graffle differ
diff --git a/doc/networking-guide/source/figures/deploy-lb-ha-vrrp-compconn1.png b/doc/networking-guide/source/figures/deploy-lb-ha-vrrp-compconn1.png
new file mode 100644
index 0000000000..d2e9a84565
Binary files /dev/null and b/doc/networking-guide/source/figures/deploy-lb-ha-vrrp-compconn1.png differ
diff --git a/doc/networking-guide/source/figures/deploy-lb-ha-vrrp-compconn1.svg b/doc/networking-guide/source/figures/deploy-lb-ha-vrrp-compconn1.svg
new file mode 100644
index 0000000000..ddfa554672
--- /dev/null
+++ b/doc/networking-guide/source/figures/deploy-lb-ha-vrrp-compconn1.svg
@@ -0,0 +1,3 @@
+
+
+ Produced by OmniGraffle 6.6.1 2016-09-20 23:43:13 +0000Canvas 1Layer 1Network NodeCompute NodeLinux Bridge - High-availability with VRRPComponents and ConnectivitySelf-service networkVNI 101InstanceLinux BridgebrqDHCP NamespaceqdhcpMetadataProcessvethvethtapeth0iptablesPorttaptapPorttapPortVXLAN 101Interface 3Overlay network10.0.1.0/24Linux 
BridgebrqLinux BridgebrqMaster Router NamespaceqrouterPorttapPortVXLAN 101PorttapPortInterface 2PorttapPorttapInterface 3Interface 2VLAN 1InternetProvider networkVLAN 1 (untagged)vethvethVNI 101VNI 101Provider networkAggregateNetwork NodeLinux BridgebrqLinux BridgebrqBackup Router NamespaceqrouterPorttapPortVXLAN 101PorttapPortInterface 2PorttapPorttapInterface 3Interface 2VLAN 1vethvethVNI 101Physical Network Infrastructure diff --git a/doc/networking-guide/source/figures/deploy-lb-ha-vrrp-overview.graffle b/doc/networking-guide/source/figures/deploy-lb-ha-vrrp-overview.graffle new file mode 100644 index 0000000000..3e7896f02c Binary files /dev/null and b/doc/networking-guide/source/figures/deploy-lb-ha-vrrp-overview.graffle differ diff --git a/doc/networking-guide/source/figures/deploy-lb-ha-vrrp-overview.png b/doc/networking-guide/source/figures/deploy-lb-ha-vrrp-overview.png new file mode 100644 index 0000000000..ab60fb3f78 Binary files /dev/null and b/doc/networking-guide/source/figures/deploy-lb-ha-vrrp-overview.png differ diff --git a/doc/networking-guide/source/figures/deploy-lb-ha-vrrp-overview.svg b/doc/networking-guide/source/figures/deploy-lb-ha-vrrp-overview.svg new file mode 100644 index 0000000000..3c98f805fd --- /dev/null +++ b/doc/networking-guide/source/figures/deploy-lb-ha-vrrp-overview.svg @@ -0,0 +1,3 @@ + + + Produced by OmniGraffle 6.6.1 2016-10-04 14:46:19 +0000Canvas 1Layer 1 Compute NodesLinux Bridge - High-availability with VRRPOverviewInternetProvider network Controller NodeSQLDatabaseMessageBusNetworkingManagementML2 Plug-inAPIManagement network10.0.0.0/24Interface 1Metadata AgentDHCP AgentLinux Bridge AgentInterface 1InstanceInterface 2MetadataProcessDHCP NamespaceBridgeBridgeFirewall Network NodesLinux Bridge AgentLayer-3 AgentInterface 3BridgeBridgeRouterNamespaceInterface 1Interface 2Interface 3 Physical Network InfrastructureOverlay network10.0.1.0/24Provider networkAggregate diff --git a/doc/networking-guide/source/figures/deploy-lb-provider-compconn1.graffle b/doc/networking-guide/source/figures/deploy-lb-provider-compconn1.graffle new file mode 100644 index 0000000000..e5abe24509 Binary files /dev/null and b/doc/networking-guide/source/figures/deploy-lb-provider-compconn1.graffle differ diff --git a/doc/networking-guide/source/figures/deploy-lb-provider-compconn1.png b/doc/networking-guide/source/figures/deploy-lb-provider-compconn1.png new file mode 100644 index 0000000000..919b8eec8d Binary files /dev/null and b/doc/networking-guide/source/figures/deploy-lb-provider-compconn1.png differ diff --git a/doc/networking-guide/source/figures/deploy-lb-provider-compconn1.svg b/doc/networking-guide/source/figures/deploy-lb-provider-compconn1.svg new file mode 100644 index 0000000000..ac2b83e4fd --- /dev/null +++ b/doc/networking-guide/source/figures/deploy-lb-provider-compconn1.svg @@ -0,0 +1,3 @@ + + + Produced by OmniGraffle 6.6.1 2016-09-20 23:42:18 +0000Canvas 1Layer 1Compute NodeLinux Bridge - Provider NetworksComponents and ConnectivityProvider network 1VLAN 1 (untagged)InstanceLinux BridgebrqDHCP NamespaceqdhcpMetadataProcessvethvethtapeth0iptablesPorttaptapPorttapPortInterface 2Interface 2VLAN 1Provider networkAggregatePhysical Network InfrastructureInternet diff --git a/doc/networking-guide/source/figures/deploy-lb-provider-compconn2.graffle b/doc/networking-guide/source/figures/deploy-lb-provider-compconn2.graffle new file mode 100644 index 0000000000..ff024f0042 Binary files /dev/null and 
b/doc/networking-guide/source/figures/deploy-lb-provider-compconn2.graffle differ diff --git a/doc/networking-guide/source/figures/deploy-lb-provider-compconn2.png b/doc/networking-guide/source/figures/deploy-lb-provider-compconn2.png new file mode 100644 index 0000000000..1038a20289 Binary files /dev/null and b/doc/networking-guide/source/figures/deploy-lb-provider-compconn2.png differ diff --git a/doc/networking-guide/source/figures/deploy-lb-provider-compconn2.svg b/doc/networking-guide/source/figures/deploy-lb-provider-compconn2.svg new file mode 100644 index 0000000000..5c982d2a1a --- /dev/null +++ b/doc/networking-guide/source/figures/deploy-lb-provider-compconn2.svg @@ -0,0 +1,3 @@ + + + Produced by OmniGraffle 6.6.1 2016-09-20 23:42:33 +0000Canvas 1Layer 1Compute NodeLinux Bridge - Provider NetworksComponents and ConnectivityProvider network 1VLAN 101Instance 1Linux Bridge 1brqDHCP Namespace 1qdhcpMetadataProcessvethvethtapeth0iptablesPorttaptapPorttapPortSub-Interface 2.101Instance 2Linux Bridge 2brqDHCP Namespace 2qdhcpMetadataProcessvethvethtapeth0iptablesPorttaptapPorttapPortSub-Interface 2.102Interface 2VLAN 101VLAN 102Provider network 2VLAN 102Provider networkAggregatePhysical Network InfrastructureInternet diff --git a/doc/networking-guide/source/figures/deploy-lb-provider-flowew1.graffle b/doc/networking-guide/source/figures/deploy-lb-provider-flowew1.graffle new file mode 100644 index 0000000000..b44116bb5d Binary files /dev/null and b/doc/networking-guide/source/figures/deploy-lb-provider-flowew1.graffle differ diff --git a/doc/networking-guide/source/figures/deploy-lb-provider-flowew1.png b/doc/networking-guide/source/figures/deploy-lb-provider-flowew1.png new file mode 100644 index 0000000000..3eb9d184ac Binary files /dev/null and b/doc/networking-guide/source/figures/deploy-lb-provider-flowew1.png differ diff --git a/doc/networking-guide/source/figures/deploy-lb-provider-flowew1.svg b/doc/networking-guide/source/figures/deploy-lb-provider-flowew1.svg new file mode 100644 index 0000000000..f6a7569e1e --- /dev/null +++ b/doc/networking-guide/source/figures/deploy-lb-provider-flowew1.svg @@ -0,0 +1,3 @@ + + + Produced by OmniGraffle 6.6.1 2016-10-06 18:02:59 +0000Canvas 1Layer 1Linux Bridge - Provider NetworksNetwork Traffic Flow - East/West Scenario 1Provider network 1VLAN 101, 203.0.113.0/24Compute Node 1Instance 1Linux Bridgebrqveth(1)(3)(2)(4)VLAN 101Compute Node 2Instance 2Linux Bridgebrqveth(12)(10)(11)(9)VLAN 101Physical Network InfrastructureSwitch(8)(7)(6)(5)Provider networkAggregate diff --git a/doc/networking-guide/source/figures/deploy-lb-provider-flowew2.graffle b/doc/networking-guide/source/figures/deploy-lb-provider-flowew2.graffle new file mode 100644 index 0000000000..ab875018a1 Binary files /dev/null and b/doc/networking-guide/source/figures/deploy-lb-provider-flowew2.graffle differ diff --git a/doc/networking-guide/source/figures/deploy-lb-provider-flowew2.png b/doc/networking-guide/source/figures/deploy-lb-provider-flowew2.png new file mode 100644 index 0000000000..c7d31e5e74 Binary files /dev/null and b/doc/networking-guide/source/figures/deploy-lb-provider-flowew2.png differ diff --git a/doc/networking-guide/source/figures/deploy-lb-provider-flowew2.svg b/doc/networking-guide/source/figures/deploy-lb-provider-flowew2.svg new file mode 100644 index 0000000000..0ec8796130 --- /dev/null +++ b/doc/networking-guide/source/figures/deploy-lb-provider-flowew2.svg @@ -0,0 +1,3 @@ + + + Produced by OmniGraffle 6.6.1 2016-10-06 18:02:35 +0000Canvas 1Layer 
1Compute Node 1Linux Bridge - Provider NetworksNetwork Traffic Flow - East/West Scenario 2Provider network 1VLAN 101, 203.0.113.0/24Provider network 2VLAN 102, 192.0.2.0/24Physical Network InfrastructureSwitchRouterInstance 1Linux Bridgebrqveth(1)(3)(2)(4)VLAN 101VLAN 102Instance 2Linux Bridgebrqveth(16)(14)(15)(6)(11)(7)(10)(8)(9)(5)(12)(13)VLAN 101VLAN 102Provider networkAggregate diff --git a/doc/networking-guide/source/figures/deploy-lb-provider-flowns1.graffle b/doc/networking-guide/source/figures/deploy-lb-provider-flowns1.graffle new file mode 100644 index 0000000000..c6c9eaacfa Binary files /dev/null and b/doc/networking-guide/source/figures/deploy-lb-provider-flowns1.graffle differ diff --git a/doc/networking-guide/source/figures/deploy-lb-provider-flowns1.png b/doc/networking-guide/source/figures/deploy-lb-provider-flowns1.png new file mode 100644 index 0000000000..82e03cb0ba Binary files /dev/null and b/doc/networking-guide/source/figures/deploy-lb-provider-flowns1.png differ diff --git a/doc/networking-guide/source/figures/deploy-lb-provider-flowns1.svg b/doc/networking-guide/source/figures/deploy-lb-provider-flowns1.svg new file mode 100644 index 0000000000..c7e1aab672 --- /dev/null +++ b/doc/networking-guide/source/figures/deploy-lb-provider-flowns1.svg @@ -0,0 +1,3 @@ + + + Produced by OmniGraffle 6.6.1 2016-10-06 18:01:59 +0000Canvas 1Layer 1Physical Network InfrastructureLinux Bridge - Provider NetworksNetwork Traffic Flow - North/South ScenarioProvider network 1VLAN 101, 203.0.113.0/24Compute NodeSwitchRouter(12)InstanceLinux Bridgebrq(1)(3)(2)(4)(5)VLAN 101(8)(9)(11)(6)(7)(10)Provider networkAggregate diff --git a/doc/networking-guide/source/figures/deploy-lb-provider-overview.graffle b/doc/networking-guide/source/figures/deploy-lb-provider-overview.graffle new file mode 100644 index 0000000000..d1e8e39dab Binary files /dev/null and b/doc/networking-guide/source/figures/deploy-lb-provider-overview.graffle differ diff --git a/doc/networking-guide/source/figures/deploy-lb-provider-overview.png b/doc/networking-guide/source/figures/deploy-lb-provider-overview.png new file mode 100644 index 0000000000..8d1a0cc106 Binary files /dev/null and b/doc/networking-guide/source/figures/deploy-lb-provider-overview.png differ diff --git a/doc/networking-guide/source/figures/deploy-lb-provider-overview.svg b/doc/networking-guide/source/figures/deploy-lb-provider-overview.svg new file mode 100644 index 0000000000..faa93c0762 --- /dev/null +++ b/doc/networking-guide/source/figures/deploy-lb-provider-overview.svg @@ -0,0 +1,3 @@ + + + Produced by OmniGraffle 6.6.1 2016-09-21 18:40:05 +0000Canvas 1Layer 1 Compute NodesLinux Bridge - Provider NetworksOverviewInternetProvider network Controller NodeSQLDatabaseMessageBusNetworkingManagementML2 Plug-inAPIManagement network10.0.0.0/24Interface 1Linux Bridge AgentInterface 1InstanceInterface 2BridgeBridgeFirewall Physical Network InfrastructureProvider networkAggregateMetadata AgentDHCP AgentMetadataProcessDHCP Namespace diff --git a/doc/networking-guide/source/figures/deploy-lb-selfservice-compconn1.graffle b/doc/networking-guide/source/figures/deploy-lb-selfservice-compconn1.graffle new file mode 100644 index 0000000000..afd7f1393f Binary files /dev/null and b/doc/networking-guide/source/figures/deploy-lb-selfservice-compconn1.graffle differ diff --git a/doc/networking-guide/source/figures/deploy-lb-selfservice-compconn1.png b/doc/networking-guide/source/figures/deploy-lb-selfservice-compconn1.png new file mode 100644 index 
0000000000..db0e9b3b89 Binary files /dev/null and b/doc/networking-guide/source/figures/deploy-lb-selfservice-compconn1.png differ diff --git a/doc/networking-guide/source/figures/deploy-lb-selfservice-compconn1.svg b/doc/networking-guide/source/figures/deploy-lb-selfservice-compconn1.svg new file mode 100644 index 0000000000..cc32b58fc2 --- /dev/null +++ b/doc/networking-guide/source/figures/deploy-lb-selfservice-compconn1.svg @@ -0,0 +1,3 @@ + + + Produced by OmniGraffle 6.6.1 2016-09-20 23:43:48 +0000Canvas 1Layer 1Network NodeCompute NodeLinux Bridge - Self-service NetworksComponents and ConnectivitySelf-service networkVNI 101InstanceLinux BridgebrqDHCP NamespaceqdhcpMetadataProcessvethvethtapeth0iptablesPorttaptapPorttapPortVXLAN 101Interface 3Overlay network10.0.1.0/24Linux BridgebrqLinux BridgebrqRouter NamespaceqrouterPorttapPortVXLAN 101PorttapPortInterface 2PorttapPorttapInterface 3Interface 2VLAN 1Physical Network InfrastructureInternetProvider networkVLAN 1 (untagged)vethvethVNI 101VNI 101Provider networkAggregate diff --git a/doc/networking-guide/source/figures/deploy-lb-selfservice-flowew1.graffle b/doc/networking-guide/source/figures/deploy-lb-selfservice-flowew1.graffle new file mode 100644 index 0000000000..c153ae8d73 Binary files /dev/null and b/doc/networking-guide/source/figures/deploy-lb-selfservice-flowew1.graffle differ diff --git a/doc/networking-guide/source/figures/deploy-lb-selfservice-flowew1.png b/doc/networking-guide/source/figures/deploy-lb-selfservice-flowew1.png new file mode 100644 index 0000000000..a5d9445a7e Binary files /dev/null and b/doc/networking-guide/source/figures/deploy-lb-selfservice-flowew1.png differ diff --git a/doc/networking-guide/source/figures/deploy-lb-selfservice-flowew1.svg b/doc/networking-guide/source/figures/deploy-lb-selfservice-flowew1.svg new file mode 100644 index 0000000000..d47cfe974c --- /dev/null +++ b/doc/networking-guide/source/figures/deploy-lb-selfservice-flowew1.svg @@ -0,0 +1,3 @@ + + + Produced by OmniGraffle 6.6.1 2016-10-06 18:01:04 +0000Canvas 1Layer 1Linux Bridge - Self-service NetworksNetwork Traffic Flow - East/West Scenario 1Compute Node 1InstanceLinux Bridgebrq(1)(3)(2)(4)(5)VNI 101Self-service network 1VNI 101, 192.168.1.0/24Overlay network10.0.1.0/24Compute Node 2InstanceLinux Bridgebrq(11)(9)(10)(8)(7)VNI 101(6) diff --git a/doc/networking-guide/source/figures/deploy-lb-selfservice-flowew2.graffle b/doc/networking-guide/source/figures/deploy-lb-selfservice-flowew2.graffle new file mode 100644 index 0000000000..290912bc38 Binary files /dev/null and b/doc/networking-guide/source/figures/deploy-lb-selfservice-flowew2.graffle differ diff --git a/doc/networking-guide/source/figures/deploy-lb-selfservice-flowew2.png b/doc/networking-guide/source/figures/deploy-lb-selfservice-flowew2.png new file mode 100644 index 0000000000..30980a333d Binary files /dev/null and b/doc/networking-guide/source/figures/deploy-lb-selfservice-flowew2.png differ diff --git a/doc/networking-guide/source/figures/deploy-lb-selfservice-flowew2.svg b/doc/networking-guide/source/figures/deploy-lb-selfservice-flowew2.svg new file mode 100644 index 0000000000..85d4863554 --- /dev/null +++ b/doc/networking-guide/source/figures/deploy-lb-selfservice-flowew2.svg @@ -0,0 +1,3 @@ + + + Produced by OmniGraffle 6.6.1 2016-10-06 18:00:33 +0000Canvas 1Layer 1Linux Bridge - Self-service NetworksNetwork Traffic Flow - East/West Scenario 2Compute NodeInstanceLinux Bridgebrq(1)(3)(2)(4)Network NodeLinux BridgebrqLinux BridgebrqRouter 
Namespaceqrouter(9)(8)(10)(6)(15)VNI 101VNI 102VNI 101VNI 102Self-service network 1VNI 101, 192.168.1.0/24Overlay network10.0.1.0/24InstanceLinux Bridgebrq(20)(18)(19)(17)(5)(16)(7)(14)(12)(13)(11)Self-service network 2VNI 102, 192.168.2.0/24 diff --git a/doc/networking-guide/source/figures/deploy-lb-selfservice-flowns1.graffle b/doc/networking-guide/source/figures/deploy-lb-selfservice-flowns1.graffle new file mode 100644 index 0000000000..c6eac60a73 Binary files /dev/null and b/doc/networking-guide/source/figures/deploy-lb-selfservice-flowns1.graffle differ diff --git a/doc/networking-guide/source/figures/deploy-lb-selfservice-flowns1.png b/doc/networking-guide/source/figures/deploy-lb-selfservice-flowns1.png new file mode 100644 index 0000000000..806e0a412d Binary files /dev/null and b/doc/networking-guide/source/figures/deploy-lb-selfservice-flowns1.png differ diff --git a/doc/networking-guide/source/figures/deploy-lb-selfservice-flowns1.svg b/doc/networking-guide/source/figures/deploy-lb-selfservice-flowns1.svg new file mode 100644 index 0000000000..8b437c68cf --- /dev/null +++ b/doc/networking-guide/source/figures/deploy-lb-selfservice-flowns1.svg @@ -0,0 +1,3 @@ + + + Produced by OmniGraffle 6.6.1 2016-10-06 17:59:10 +0000Canvas 1Layer 1Linux Bridge - Self-service NetworksNetwork Traffic Flow - North/South Scenario 1Compute NodeInstanceLinux Bridgebrq(1)(3)(2)(4)Network NodeLinux BridgebrqLinux BridgebrqRouter Namespaceqrouter(5)(9)(8)(12)(13)(10)(11)(7)(6)(14)(15)VLAN 101VNI 101VNI 101Self-service networkVNI 101, 192.168.1.0/24Overlay network10.0.1.0/24Provider networkVLAN 101, 203.0.113.0/24Provider networkAggregate diff --git a/doc/networking-guide/source/figures/deploy-lb-selfservice-flowns2.graffle b/doc/networking-guide/source/figures/deploy-lb-selfservice-flowns2.graffle new file mode 100644 index 0000000000..c9071181ac Binary files /dev/null and b/doc/networking-guide/source/figures/deploy-lb-selfservice-flowns2.graffle differ diff --git a/doc/networking-guide/source/figures/deploy-lb-selfservice-flowns2.png b/doc/networking-guide/source/figures/deploy-lb-selfservice-flowns2.png new file mode 100644 index 0000000000..3d3d64b796 Binary files /dev/null and b/doc/networking-guide/source/figures/deploy-lb-selfservice-flowns2.png differ diff --git a/doc/networking-guide/source/figures/deploy-lb-selfservice-flowns2.svg b/doc/networking-guide/source/figures/deploy-lb-selfservice-flowns2.svg new file mode 100644 index 0000000000..cd64cd86b8 --- /dev/null +++ b/doc/networking-guide/source/figures/deploy-lb-selfservice-flowns2.svg @@ -0,0 +1,3 @@ + + + Produced by OmniGraffle 6.6.1 2016-10-06 18:00:08 +0000Canvas 1Layer 1Linux Bridge - Self-service NetworksNetwork Traffic Flow - North/South Scenario 2Compute NodeInstanceLinux Bridgebrq(15)(13)(14)(12)Network NodeLinux BridgebrqLinux BridgebrqRouter Namespaceqrouter(11)(7)(8)(4)(3)(6)(5)(9)(10)(2)(1)VLAN 101VNI 101VNI 101Self-service networkVNI 101, 192.168.1.0/24Overlay network10.0.1.0/24Provider networkVLAN 101, 203.0.113.0/24Provider networkAggregate diff --git a/doc/networking-guide/source/figures/deploy-lb-selfservice-overview.graffle b/doc/networking-guide/source/figures/deploy-lb-selfservice-overview.graffle new file mode 100644 index 0000000000..9859ee24a0 Binary files /dev/null and b/doc/networking-guide/source/figures/deploy-lb-selfservice-overview.graffle differ diff --git a/doc/networking-guide/source/figures/deploy-lb-selfservice-overview.png b/doc/networking-guide/source/figures/deploy-lb-selfservice-overview.png new file 
mode 100644 index 0000000000..bce458eec7 Binary files /dev/null and b/doc/networking-guide/source/figures/deploy-lb-selfservice-overview.png differ diff --git a/doc/networking-guide/source/figures/deploy-lb-selfservice-overview.svg b/doc/networking-guide/source/figures/deploy-lb-selfservice-overview.svg new file mode 100644 index 0000000000..81e524e5fb --- /dev/null +++ b/doc/networking-guide/source/figures/deploy-lb-selfservice-overview.svg @@ -0,0 +1,3 @@ + + + Produced by OmniGraffle 6.6.1 2016-09-21 18:41:59 +0000Canvas 1Layer 1 Compute NodesLinux Bridge - Self-service NetworksOverviewInternetProvider network Controller NodeSQLDatabaseMessageBusNetworkingManagementML2 Plug-inAPIManagement network10.0.0.0/24Interface 1Linux Bridge AgentInterface 1InstanceInterface 2BridgeBridgeFirewall Network NodeLinux Bridge AgentLayer-3 AgentInterface 3BridgeBridgeRouterNamespaceInterface 1Interface 2Interface 3 Physical Network InfrastructureOverlay network10.0.1.0/24Provider networkAggregateSelf-service networkMetadata AgentDHCP AgentMetadataProcessDHCP Namespace diff --git a/doc/networking-guide/source/figures/deploy-ovs-ha-dvr-compconn1.graffle b/doc/networking-guide/source/figures/deploy-ovs-ha-dvr-compconn1.graffle new file mode 100644 index 0000000000..94fd9983ea Binary files /dev/null and b/doc/networking-guide/source/figures/deploy-ovs-ha-dvr-compconn1.graffle differ diff --git a/doc/networking-guide/source/figures/deploy-ovs-ha-dvr-compconn1.png b/doc/networking-guide/source/figures/deploy-ovs-ha-dvr-compconn1.png new file mode 100644 index 0000000000..d1bb34c4b3 Binary files /dev/null and b/doc/networking-guide/source/figures/deploy-ovs-ha-dvr-compconn1.png differ diff --git a/doc/networking-guide/source/figures/deploy-ovs-ha-dvr-compconn1.svg b/doc/networking-guide/source/figures/deploy-ovs-ha-dvr-compconn1.svg new file mode 100644 index 0000000000..d2ec7345e3 --- /dev/null +++ b/doc/networking-guide/source/figures/deploy-ovs-ha-dvr-compconn1.svg @@ -0,0 +1,3 @@ + + + Produced by OmniGraffle 6.6.1 2016-10-04 23:06:33 +0000Canvas 1Layer 1Network NodeCompute NodeOpen vSwitch - High-availability with DVRComponents and ConnectivityProvider network 1VLAN 1 (untagged)InstanceLinux BridgeqbrDHCP NamespaceqdhcpMetadataProcessvethtapeth0iptablesPorttapVNI 101Provider networkAggregateInternet OVS Tunnel Bridgebr-tunOVS Integration Bridgebr-inttapInterface 3PortqvoPortPatchtunPatchintPortInterface 3PortqvbvethRouter NamespaceqrouterVLAN 1 OVS Provider Bridgebr-providerOVS Integration Bridgebr-intInterface 2Patchint-br-providerPatchphy-br-providerPortInterface 2OVS Tunnel Bridgebr-tunPatchintPatchtunInterface 3PortInterface 3Self-service networkVNI 101Overlay network10.0.1.0/24VNI 101PortPortqrInternalTunnel IDInternalTunnel IDInternalVLAN OVS Provider Bridgebr-providerDist Router NamespaceqrouterSNAT NamespacesnatFIP NamespacefipPortInterface 2Patchphy-br-providerPortrfpPortqrPortfprPortfgPortfgPortqrPatchint-br-providerPhysical Network InfrastructureInterface 2VLAN 1InternalVLANPortPortPortsgPortqgDVR internal network diff --git a/doc/networking-guide/source/figures/deploy-ovs-ha-dvr-flowew1.graffle b/doc/networking-guide/source/figures/deploy-ovs-ha-dvr-flowew1.graffle new file mode 100644 index 0000000000..1bd2dbba01 Binary files /dev/null and b/doc/networking-guide/source/figures/deploy-ovs-ha-dvr-flowew1.graffle differ diff --git a/doc/networking-guide/source/figures/deploy-ovs-ha-dvr-flowew1.png b/doc/networking-guide/source/figures/deploy-ovs-ha-dvr-flowew1.png new file mode 100644 index 
0000000000..8f63e417a8 Binary files /dev/null and b/doc/networking-guide/source/figures/deploy-ovs-ha-dvr-flowew1.png differ diff --git a/doc/networking-guide/source/figures/deploy-ovs-ha-dvr-flowew1.svg b/doc/networking-guide/source/figures/deploy-ovs-ha-dvr-flowew1.svg new file mode 100644 index 0000000000..9f849e636a --- /dev/null +++ b/doc/networking-guide/source/figures/deploy-ovs-ha-dvr-flowew1.svg @@ -0,0 +1,3 @@ + + + Produced by OmniGraffle 6.6.1 2016-10-06 17:54:05 +0000Canvas 1Layer 1Compute Node 2Compute Node 1Open vSwitch - High-availability with DVRNetwork Traffic Flow - East/West Scenario 1Instance 1Linux Bridgeqbr(1)(3)(2)OVS Integration Bridgebr-intSelf-service network 1VNI 101, 192.168.1.0/24Overlay network10.0.1.0/24Distributed Router Namespaceqrouter(7)(8)(6)(9)Self-service network 2VNI 102, 192.168.2.0/24VNI 101VNI 102OVS Tunnel Bridgebr-tun(11)(10)(13)(12)Instance 2Linux Bridgeqbr(23)(21)(22)OVS Integration Bridgebr-intDistributed Router NamespaceqrouterOVS Tunnel Bridgebr-tun(17)(18)(16)(14)(15)VNI 101VNI 102(20)(19)(4)(5) diff --git a/doc/networking-guide/source/figures/deploy-ovs-ha-dvr-flowns1.graffle b/doc/networking-guide/source/figures/deploy-ovs-ha-dvr-flowns1.graffle new file mode 100644 index 0000000000..e007d507f9 Binary files /dev/null and b/doc/networking-guide/source/figures/deploy-ovs-ha-dvr-flowns1.graffle differ diff --git a/doc/networking-guide/source/figures/deploy-ovs-ha-dvr-flowns1.png b/doc/networking-guide/source/figures/deploy-ovs-ha-dvr-flowns1.png new file mode 100644 index 0000000000..bf7af55c86 Binary files /dev/null and b/doc/networking-guide/source/figures/deploy-ovs-ha-dvr-flowns1.png differ diff --git a/doc/networking-guide/source/figures/deploy-ovs-ha-dvr-flowns1.svg b/doc/networking-guide/source/figures/deploy-ovs-ha-dvr-flowns1.svg new file mode 100644 index 0000000000..9270e15dab --- /dev/null +++ b/doc/networking-guide/source/figures/deploy-ovs-ha-dvr-flowns1.svg @@ -0,0 +1,3 @@ + + + Produced by OmniGraffle 6.6.1 2016-10-06 17:55:27 +0000Canvas 1Layer 1Network NodeOpen vSwitch - High-availability with DVRNetwork Traffic Flow - North/South Scenario 1Provider network 1VLAN 101, 203.0.113.0/24Compute NodeInstanceLinux Bridgeqbr(1)(3)(2)VNI 101Provider networkAggregate OVS Tunnel Bridgebr-tunOVS Integration Bridgebr-int(4)(5)(8)OVS Integration Bridgebr-intOVS Tunnel Bridgebr-tun OVS Provider Bridgebr-providerSNAT Namespacesnat(9)(20)(21)(18)(11)(13)(12)(16)(17)(15)(14)(19)(22)(10)(6)(7)(23)Self-service networkVNI 101, 192.168.1.0/24Overlay network10.0.1.0/24VNI 101VLAN 101 diff --git a/doc/networking-guide/source/figures/deploy-ovs-ha-dvr-flowns2.graffle b/doc/networking-guide/source/figures/deploy-ovs-ha-dvr-flowns2.graffle new file mode 100644 index 0000000000..27fd411700 Binary files /dev/null and b/doc/networking-guide/source/figures/deploy-ovs-ha-dvr-flowns2.graffle differ diff --git a/doc/networking-guide/source/figures/deploy-ovs-ha-dvr-flowns2.png b/doc/networking-guide/source/figures/deploy-ovs-ha-dvr-flowns2.png new file mode 100644 index 0000000000..c1c67085e4 Binary files /dev/null and b/doc/networking-guide/source/figures/deploy-ovs-ha-dvr-flowns2.png differ diff --git a/doc/networking-guide/source/figures/deploy-ovs-ha-dvr-flowns2.svg b/doc/networking-guide/source/figures/deploy-ovs-ha-dvr-flowns2.svg new file mode 100644 index 0000000000..e72bc61010 --- /dev/null +++ b/doc/networking-guide/source/figures/deploy-ovs-ha-dvr-flowns2.svg @@ -0,0 +1,3 @@ + + + Produced by OmniGraffle 6.6.1 2016-10-06 17:55:59 +0000Canvas 
1Layer 1Open vSwitch - High-availability with DVRNetwork Traffic Flow - North/South Scenario 2Provider network 1VLAN 101, 203.0.113.0/24Compute NodeInstanceLinux Bridgeqbr(16)(14)(15)Provider networkAggregateOVS Integration Bridgebr-int(13)(12)Self-service networkVNI 101, 192.168.1.0/24DVR internal networkDistributed Router NamespaceqrouterFloating IPNamespacefip OVS Provider Bridgebr-provider(5)(9)(10)(8)(7)(4)(3)(2)(1)(6)(11)VLAN 101 diff --git a/doc/networking-guide/source/figures/deploy-ovs-ha-dvr-overview.graffle b/doc/networking-guide/source/figures/deploy-ovs-ha-dvr-overview.graffle new file mode 100644 index 0000000000..69ed88d35e Binary files /dev/null and b/doc/networking-guide/source/figures/deploy-ovs-ha-dvr-overview.graffle differ diff --git a/doc/networking-guide/source/figures/deploy-ovs-ha-dvr-overview.png b/doc/networking-guide/source/figures/deploy-ovs-ha-dvr-overview.png new file mode 100644 index 0000000000..1b12b3e0fc Binary files /dev/null and b/doc/networking-guide/source/figures/deploy-ovs-ha-dvr-overview.png differ diff --git a/doc/networking-guide/source/figures/deploy-ovs-ha-dvr-overview.svg b/doc/networking-guide/source/figures/deploy-ovs-ha-dvr-overview.svg new file mode 100644 index 0000000000..09560e94a0 --- /dev/null +++ b/doc/networking-guide/source/figures/deploy-ovs-ha-dvr-overview.svg @@ -0,0 +1,3 @@ + + + Produced by OmniGraffle 6.6.1 2016-10-04 15:57:53 +0000Canvas 1Layer 1 Network Nodes Compute NodesOpen vSwitch - High-availability with DVROverviewInternetProvider network Controller NodeSQLDatabaseMessageBusNetworkingManagementML2 Plug-inAPIManagement network10.0.0.0/24Interface 1Open vSwitch AgentInterface 1Provider networkAggregateInstanceInterface 2FirewallOpen vSwitch AgentOverlay network10.0.1.0/24Self-service networkInterface 1Interface 2IntegrationBridgeProviderBridgeInterface 3TunnelBridgeTunnelBridgeInterface 3 Physical Network InfrastructureDHCP AgentMetadata AgentMetadataProcessDHCP NamespaceIntegrationBridgeLayer-3 AgentRouterNamespaceSNATNamespaceProviderBridgeLayer-3 AgentDist RouterNamespaceFloating IPNamespace diff --git a/doc/networking-guide/source/figures/deploy-ovs-ha-vrrp-compconn1.graffle b/doc/networking-guide/source/figures/deploy-ovs-ha-vrrp-compconn1.graffle new file mode 100644 index 0000000000..5675be93b4 Binary files /dev/null and b/doc/networking-guide/source/figures/deploy-ovs-ha-vrrp-compconn1.graffle differ diff --git a/doc/networking-guide/source/figures/deploy-ovs-ha-vrrp-compconn1.png b/doc/networking-guide/source/figures/deploy-ovs-ha-vrrp-compconn1.png new file mode 100644 index 0000000000..cafc1ba663 Binary files /dev/null and b/doc/networking-guide/source/figures/deploy-ovs-ha-vrrp-compconn1.png differ diff --git a/doc/networking-guide/source/figures/deploy-ovs-ha-vrrp-compconn1.svg b/doc/networking-guide/source/figures/deploy-ovs-ha-vrrp-compconn1.svg new file mode 100644 index 0000000000..1685a0f340 --- /dev/null +++ b/doc/networking-guide/source/figures/deploy-ovs-ha-vrrp-compconn1.svg @@ -0,0 +1,3 @@ + + + Produced by OmniGraffle 6.6.1 2016-09-29 22:17:07 +0000Canvas 1Layer 1Network NodeCompute NodeOpen vSwitch - High-availability with VRRPComponents and ConnectivityProvider network 1VLAN 1 (untagged)InstanceLinux BridgeqbrDHCP NamespaceqdhcpMetadataProcessvethtapeth0iptablesPorttapVNI 101Provider networkAggregateInternet OVS Tunnel Bridgebr-tunOVS Integration Bridgebr-inttapInterface 3PortqvoPortPatchtunPatchintPortInterface 3PortqvbvethMasterRouter NamespaceqrouterVLAN 1 OVS Provider Bridgebr-providerOVS 
Integration Bridgebr-intInterface 2Patchint-br-providerPatchphy-br-providerPortInterface 2OVS Tunnel Bridgebr-tunPatchintPatchtunInterface 3PortInterface 3Self-service networkVNI 101Overlay network10.0.1.0/24VNI 101PortPortPortPortInternalTunnel IDInternalTunnel IDInternalVLANNetwork NodeBackupRouter NamespaceqrouterVLAN 1 OVS Provider Bridgebr-providerOVS Integration Bridgebr-intInterface 2Patchint-br-providerPatchphy-br-providerPortInterface 2OVS Tunnel Bridgebr-tunPatchintPatchtunInterface 3PortInterface 3VNI 101PortPortPortPortInternalTunnel IDInternalVLANPhysical Network Infrastructure diff --git a/doc/networking-guide/source/figures/deploy-ovs-ha-vrrp-overview.graffle b/doc/networking-guide/source/figures/deploy-ovs-ha-vrrp-overview.graffle new file mode 100644 index 0000000000..b5b8fbdab8 Binary files /dev/null and b/doc/networking-guide/source/figures/deploy-ovs-ha-vrrp-overview.graffle differ diff --git a/doc/networking-guide/source/figures/deploy-ovs-ha-vrrp-overview.png b/doc/networking-guide/source/figures/deploy-ovs-ha-vrrp-overview.png new file mode 100644 index 0000000000..9f1e69b93f Binary files /dev/null and b/doc/networking-guide/source/figures/deploy-ovs-ha-vrrp-overview.png differ diff --git a/doc/networking-guide/source/figures/deploy-ovs-ha-vrrp-overview.svg b/doc/networking-guide/source/figures/deploy-ovs-ha-vrrp-overview.svg new file mode 100644 index 0000000000..0833cb2670 --- /dev/null +++ b/doc/networking-guide/source/figures/deploy-ovs-ha-vrrp-overview.svg @@ -0,0 +1,3 @@ + + + Produced by OmniGraffle 6.6.1 2016-10-04 15:21:58 +0000Canvas 1Layer 1 Network Nodes Compute NodesOpen vSwitch - High-availability with VRRPOverviewInternetProvider network Controller NodeSQLDatabaseMessageBusNetworkingManagementML2 Plug-inAPIManagement network10.0.0.0/24Interface 1Open vSwitch AgentInterface 1Provider networkAggregateInstanceInterface 2FirewallProviderBridgeOpen vSwitch AgentLayer-3 AgentOverlay network10.0.1.0/24Self-service networkInterface 1Interface 2IntegrationBridgeProviderBridgeRouterNamespaceInterface 3TunnelBridgeIntegrationBridgeTunnelBridgeInterface 3 Physical Network InfrastructureDHCP AgentMetadata AgentDHCP NamespaceMetadataProcess diff --git a/doc/networking-guide/source/figures/deploy-ovs-provider-compconn1.graffle b/doc/networking-guide/source/figures/deploy-ovs-provider-compconn1.graffle new file mode 100644 index 0000000000..b41509c4d1 Binary files /dev/null and b/doc/networking-guide/source/figures/deploy-ovs-provider-compconn1.graffle differ diff --git a/doc/networking-guide/source/figures/deploy-ovs-provider-compconn1.png b/doc/networking-guide/source/figures/deploy-ovs-provider-compconn1.png new file mode 100644 index 0000000000..dce7d3899a Binary files /dev/null and b/doc/networking-guide/source/figures/deploy-ovs-provider-compconn1.png differ diff --git a/doc/networking-guide/source/figures/deploy-ovs-provider-compconn1.svg b/doc/networking-guide/source/figures/deploy-ovs-provider-compconn1.svg new file mode 100644 index 0000000000..4ec9028d9a --- /dev/null +++ b/doc/networking-guide/source/figures/deploy-ovs-provider-compconn1.svg @@ -0,0 +1,3 @@ + + + Produced by OmniGraffle 6.6.1 2016-10-04 15:54:45 +0000Canvas 1Layer 1Compute NodeOpen vSwitch - Provider NetworksComponents and ConnectivityProvider network 1VLAN 1 (untagged)InstanceLinux BridgeqbrDHCP NamespaceqdhcpMetadataProcessvethtapeth0iptablesPorttapVLAN 1Provider networkAggregatePhysical Network InfrastructureInternet OVS Provider Bridgebr-providerOVS Integration Bridgebr-inttapInterface 
2PortqvoPortPatchint-br-providerPatchphy-br-providerPortInterface 2PortqvbvethInternal VLANVLAN 1 diff --git a/doc/networking-guide/source/figures/deploy-ovs-provider-compconn2.graffle b/doc/networking-guide/source/figures/deploy-ovs-provider-compconn2.graffle new file mode 100644 index 0000000000..f01b00ad98 Binary files /dev/null and b/doc/networking-guide/source/figures/deploy-ovs-provider-compconn2.graffle differ diff --git a/doc/networking-guide/source/figures/deploy-ovs-provider-compconn2.png b/doc/networking-guide/source/figures/deploy-ovs-provider-compconn2.png new file mode 100644 index 0000000000..ce72ca1a2a Binary files /dev/null and b/doc/networking-guide/source/figures/deploy-ovs-provider-compconn2.png differ diff --git a/doc/networking-guide/source/figures/deploy-ovs-provider-compconn2.svg b/doc/networking-guide/source/figures/deploy-ovs-provider-compconn2.svg new file mode 100644 index 0000000000..481da72132 --- /dev/null +++ b/doc/networking-guide/source/figures/deploy-ovs-provider-compconn2.svg @@ -0,0 +1,3 @@ + + + Produced by OmniGraffle 6.6.1 2016-09-27 19:22:48 +0000Canvas 1Layer 1Compute NodeOpen vSwitch - Provider NetworksComponents and ConnectivityProvider network 1VLAN 101Instance 1DHCP Namespace 1qdhcpMetadataProcessvethInstance 2DHCP Namespace 2qdhcpMetadataProcessvethVLAN 101VLAN 102Provider network 2VLAN 102Provider networkAggregatePhysical Network InfrastructureInternetLinux BridgeqbriptablesLinux BridgeqbriptablesOVS Provider Bridgebr-provider OVS Integration Bridgebr-intPortqvbPortqvbPortqvoPatchint-br-providerPatchphy-br-providerPortqvoInterface 2PortInterface 2tapeth0taptapeth0tapPorttapPorttapPortPortvethvethInternal VLANsVLAN 101VLAN 102 diff --git a/doc/networking-guide/source/figures/deploy-ovs-provider-flowew1.graffle b/doc/networking-guide/source/figures/deploy-ovs-provider-flowew1.graffle new file mode 100644 index 0000000000..37ed2786cd Binary files /dev/null and b/doc/networking-guide/source/figures/deploy-ovs-provider-flowew1.graffle differ diff --git a/doc/networking-guide/source/figures/deploy-ovs-provider-flowew1.png b/doc/networking-guide/source/figures/deploy-ovs-provider-flowew1.png new file mode 100644 index 0000000000..d6bd50a4b8 Binary files /dev/null and b/doc/networking-guide/source/figures/deploy-ovs-provider-flowew1.png differ diff --git a/doc/networking-guide/source/figures/deploy-ovs-provider-flowew1.svg b/doc/networking-guide/source/figures/deploy-ovs-provider-flowew1.svg new file mode 100644 index 0000000000..7b0b24efff --- /dev/null +++ b/doc/networking-guide/source/figures/deploy-ovs-provider-flowew1.svg @@ -0,0 +1,3 @@ + + + Produced by OmniGraffle 6.6.1 2016-10-06 18:12:30 +0000Canvas 1Layer 1Compute Node 2Open vSwitch - Provider NetworksNetwork Traffic Flow - East/West Scenario 1Provider network 1VLAN 101, 203.0.113.0/24VLAN 101Physical Network InfrastructureSwitchProvider networkAggregateCompute Node 1Instance 1Linux Bridgeqbr(1)(3)(2)VLAN 101 OVS Provider Bridgebr-providerOVS Integration Bridgebr-int(4)(5)(6)(7)Instance 2Linux Bridgebrq(20)(18)(19) OVS Provider Bridgebr-providerOVS Integration Bridgebr-int(17)(16)(15)(14)(11)(8)(13)(10)(9)(12) diff --git a/doc/networking-guide/source/figures/deploy-ovs-provider-flowew2.graffle b/doc/networking-guide/source/figures/deploy-ovs-provider-flowew2.graffle new file mode 100644 index 0000000000..d6e37f334b Binary files /dev/null and b/doc/networking-guide/source/figures/deploy-ovs-provider-flowew2.graffle differ diff --git 
a/doc/networking-guide/source/figures/deploy-ovs-provider-flowew2.png b/doc/networking-guide/source/figures/deploy-ovs-provider-flowew2.png new file mode 100644 index 0000000000..cc2723d9da Binary files /dev/null and b/doc/networking-guide/source/figures/deploy-ovs-provider-flowew2.png differ diff --git a/doc/networking-guide/source/figures/deploy-ovs-provider-flowew2.svg b/doc/networking-guide/source/figures/deploy-ovs-provider-flowew2.svg new file mode 100644 index 0000000000..a67769522a --- /dev/null +++ b/doc/networking-guide/source/figures/deploy-ovs-provider-flowew2.svg @@ -0,0 +1,3 @@ + + + Produced by OmniGraffle 6.6.1 2016-10-06 18:12:04 +0000Canvas 1Layer 1Open vSwitch - Provider NetworksNetwork Traffic Flow - East/West Scenario 2Provider network 1VLAN 101, 203.0.113.0/24Provider network 2VLAN 102, 192.0.2.0/24Physical Network InfrastructureSwitchRouterVLAN 101VLAN 102(11)(14)(12)(13)Provider networkAggregateCompute NodeInstance 1Linux Bridgeqbr(1)(3)(2) OVS Provider Bridgebr-providerOVS Integration Bridgebr-int(4)(5)(6)(19)(7)(18)Instance 2Linux Bridgeqbr(24)(22)(21)(23)(20)(10)(15)(8)(17)(9)(16) diff --git a/doc/networking-guide/source/figures/deploy-ovs-provider-flowns1.graffle b/doc/networking-guide/source/figures/deploy-ovs-provider-flowns1.graffle new file mode 100644 index 0000000000..7eb3edd254 Binary files /dev/null and b/doc/networking-guide/source/figures/deploy-ovs-provider-flowns1.graffle differ diff --git a/doc/networking-guide/source/figures/deploy-ovs-provider-flowns1.png b/doc/networking-guide/source/figures/deploy-ovs-provider-flowns1.png new file mode 100644 index 0000000000..ae00b628e1 Binary files /dev/null and b/doc/networking-guide/source/figures/deploy-ovs-provider-flowns1.png differ diff --git a/doc/networking-guide/source/figures/deploy-ovs-provider-flowns1.svg b/doc/networking-guide/source/figures/deploy-ovs-provider-flowns1.svg new file mode 100644 index 0000000000..171fc0f6e8 --- /dev/null +++ b/doc/networking-guide/source/figures/deploy-ovs-provider-flowns1.svg @@ -0,0 +1,3 @@ + + + Produced by OmniGraffle 6.6.1 2016-10-06 18:11:40 +0000Canvas 1Layer 1Physical Network InfrastructureOpen vSwitch - Provider NetworksNetwork Traffic Flow - North/South ScenarioProvider network 1VLAN 101, 203.0.113.0/24Compute NodeSwitchRouter(16)InstanceLinux Bridgeqbr(1)(3)(2)VLAN 101(12)(13)(15)(11)(14)Provider networkAggregate OVS Provider Bridgebr-providerOVS Integration Bridgebr-int(4)(10)(5)(6)(7)(8)(9) diff --git a/doc/networking-guide/source/figures/deploy-ovs-provider-overview.graffle b/doc/networking-guide/source/figures/deploy-ovs-provider-overview.graffle new file mode 100644 index 0000000000..9d5abfa6aa Binary files /dev/null and b/doc/networking-guide/source/figures/deploy-ovs-provider-overview.graffle differ diff --git a/doc/networking-guide/source/figures/deploy-ovs-provider-overview.png b/doc/networking-guide/source/figures/deploy-ovs-provider-overview.png new file mode 100644 index 0000000000..678828d24a Binary files /dev/null and b/doc/networking-guide/source/figures/deploy-ovs-provider-overview.png differ diff --git a/doc/networking-guide/source/figures/deploy-ovs-provider-overview.svg b/doc/networking-guide/source/figures/deploy-ovs-provider-overview.svg new file mode 100644 index 0000000000..f09c846996 --- /dev/null +++ b/doc/networking-guide/source/figures/deploy-ovs-provider-overview.svg @@ -0,0 +1,3 @@ + + + Produced by OmniGraffle 6.6.1 2016-09-21 16:20:48 +0000Canvas 1Layer 1 Compute NodesOpen vSwitch - Provider NetworksOverviewInternetProvider 
network Controller NodeSQLDatabaseMessageBusNetworkingManagementML2 Plug-inAPIManagement network10.0.0.0/24Interface 1Metadata AgentDHCP AgentOpen vSwitch AgentInterface 1MetadataProcessDHCP Namespace Physical Network InfrastructureProvider networkAggregateInstanceInterface 2IntegrationBridgeFirewallProviderBridge diff --git a/doc/networking-guide/source/figures/deploy-ovs-selfservice-compconn1.graffle b/doc/networking-guide/source/figures/deploy-ovs-selfservice-compconn1.graffle new file mode 100644 index 0000000000..21cae57e03 Binary files /dev/null and b/doc/networking-guide/source/figures/deploy-ovs-selfservice-compconn1.graffle differ diff --git a/doc/networking-guide/source/figures/deploy-ovs-selfservice-compconn1.png b/doc/networking-guide/source/figures/deploy-ovs-selfservice-compconn1.png new file mode 100644 index 0000000000..24af2c4e12 Binary files /dev/null and b/doc/networking-guide/source/figures/deploy-ovs-selfservice-compconn1.png differ diff --git a/doc/networking-guide/source/figures/deploy-ovs-selfservice-compconn1.svg b/doc/networking-guide/source/figures/deploy-ovs-selfservice-compconn1.svg new file mode 100644 index 0000000000..1e7f11528f --- /dev/null +++ b/doc/networking-guide/source/figures/deploy-ovs-selfservice-compconn1.svg @@ -0,0 +1,3 @@ + + + Produced by OmniGraffle 6.6.1 2016-09-27 23:26:12 +0000Canvas 1Layer 1Network NodeCompute NodeOpen vSwitch - Self-service NetworksComponents and ConnectivityProvider network 1VLAN 1 (untagged)InstanceLinux BridgeqbrDHCP NamespaceqdhcpMetadataProcessvethtapeth0iptablesPorttapVNI 101Provider networkAggregateInternet OVS Tunnel Bridgebr-tunOVS Integration Bridgebr-inttapInterface 3PortqvoPortPatchtunPatchintPortInterface 3PortqvbvethRouter NamespaceqrouterVLAN 1 OVS Provider Bridgebr-providerOVS Integration Bridgebr-intInterface 2Patchint-br-providerPatchphy-br-providerPortInterface 2OVS Tunnel Bridgebr-tunPatchintPatchtunPhysical Network InfrastructureInterface 3PortInterface 3Self-service networkVNI 101Overlay network10.0.1.0/24VNI 101PortPortPortPortInternalTunnel IDInternalTunnel IDInternalVLAN diff --git a/doc/networking-guide/source/figures/deploy-ovs-selfservice-flowew1.graffle b/doc/networking-guide/source/figures/deploy-ovs-selfservice-flowew1.graffle new file mode 100644 index 0000000000..791db17a14 Binary files /dev/null and b/doc/networking-guide/source/figures/deploy-ovs-selfservice-flowew1.graffle differ diff --git a/doc/networking-guide/source/figures/deploy-ovs-selfservice-flowew1.png b/doc/networking-guide/source/figures/deploy-ovs-selfservice-flowew1.png new file mode 100644 index 0000000000..5523459063 Binary files /dev/null and b/doc/networking-guide/source/figures/deploy-ovs-selfservice-flowew1.png differ diff --git a/doc/networking-guide/source/figures/deploy-ovs-selfservice-flowew1.svg b/doc/networking-guide/source/figures/deploy-ovs-selfservice-flowew1.svg new file mode 100644 index 0000000000..c47b148e73 --- /dev/null +++ b/doc/networking-guide/source/figures/deploy-ovs-selfservice-flowew1.svg @@ -0,0 +1,3 @@ + + + Produced by OmniGraffle 6.6.1 2016-10-06 18:10:26 +0000Canvas 1Layer 1Open vSwitch - Self-service NetworksNetwork Traffic Flow - East/West Scenario 1Compute Node 1Instance 1Linux Bridgeqbr(1)(3)(2) OVS Tunnel Bridgebr-tunOVS Integration Bridgebr-int(4)(5)(8)(9)(6)(7)Self-service network 1VNI 101, 192.168.1.0/24Overlay network10.0.1.0/24VNI 101Compute Node 2Instance 2Linux Bridgeqbr(19)(17)(18) OVS Tunnel Bridgebr-tunOVS Integration Bridgebr-int(16)(15)(12)(14)(13)(10)(11)VNI 
101Self-service network 2VNI 102, 192.168.2.0/24 diff --git a/doc/networking-guide/source/figures/deploy-ovs-selfservice-flowew2.graffle b/doc/networking-guide/source/figures/deploy-ovs-selfservice-flowew2.graffle new file mode 100644 index 0000000000..ec61be50f4 Binary files /dev/null and b/doc/networking-guide/source/figures/deploy-ovs-selfservice-flowew2.graffle differ diff --git a/doc/networking-guide/source/figures/deploy-ovs-selfservice-flowew2.png b/doc/networking-guide/source/figures/deploy-ovs-selfservice-flowew2.png new file mode 100644 index 0000000000..c29a7c9d05 Binary files /dev/null and b/doc/networking-guide/source/figures/deploy-ovs-selfservice-flowew2.png differ diff --git a/doc/networking-guide/source/figures/deploy-ovs-selfservice-flowew2.svg b/doc/networking-guide/source/figures/deploy-ovs-selfservice-flowew2.svg new file mode 100644 index 0000000000..451cec0e1e --- /dev/null +++ b/doc/networking-guide/source/figures/deploy-ovs-selfservice-flowew2.svg @@ -0,0 +1,3 @@ + + + Produced by OmniGraffle 6.6.1 2016-10-06 18:10:04 +0000Canvas 1Layer 1Open vSwitch - Self-service NetworksNetwork Traffic Flow - East/West Scenario 2Compute NodeInstance 1Linux Bridgeqbr(1)(3)(2) OVS Tunnel Bridgebr-tunOVS Integration Bridgebr-int(4)(8)(25)(9)(24)(6)(27)(7)(26)Self-service network 1VNI 101, 192.168.1.0/24Overlay network10.0.1.0/24VNI 101VNI 102(10)(23)Network NodeVNI 101VNI 102OVS Integration Bridgebr-intOVS Tunnel Bridgebr-tunRouter Namespaceqrouter(18)(11)(22)(13)(20)(12)(21)(16)(17)(15)(14)(19)Instance 2Linux Bridgeqbr(32)(30)(31)(5)(28)(29)Self-service network 2VNI 102, 192.168.2.0/24 diff --git a/doc/networking-guide/source/figures/deploy-ovs-selfservice-flowns1.graffle b/doc/networking-guide/source/figures/deploy-ovs-selfservice-flowns1.graffle new file mode 100644 index 0000000000..005c9be0ea Binary files /dev/null and b/doc/networking-guide/source/figures/deploy-ovs-selfservice-flowns1.graffle differ diff --git a/doc/networking-guide/source/figures/deploy-ovs-selfservice-flowns1.png b/doc/networking-guide/source/figures/deploy-ovs-selfservice-flowns1.png new file mode 100644 index 0000000000..f1d83ce183 Binary files /dev/null and b/doc/networking-guide/source/figures/deploy-ovs-selfservice-flowns1.png differ diff --git a/doc/networking-guide/source/figures/deploy-ovs-selfservice-flowns1.svg b/doc/networking-guide/source/figures/deploy-ovs-selfservice-flowns1.svg new file mode 100644 index 0000000000..c48b5ee898 --- /dev/null +++ b/doc/networking-guide/source/figures/deploy-ovs-selfservice-flowns1.svg @@ -0,0 +1,3 @@ + + + Produced by OmniGraffle 6.6.1 2016-10-06 18:09:40 +0000Canvas 1Layer 1Network NodeOpen vSwitch - Self-service NetworksNetwork Traffic Flow - North/South Scenario 1Provider network 1VLAN 101, 203.0.113.0/24Compute NodeInstanceLinux Bridgeqbr(1)(3)(2)VNI 101Provider networkAggregate OVS Tunnel Bridgebr-tunOVS Integration Bridgebr-int(4)(5)(8)OVS Integration Bridgebr-intOVS Tunnel Bridgebr-tun OVS Provider Bridgebr-providerRouter Namespaceqrouter(9)(20)(21)(18)(11)(13)(12)(16)(17)(15)(14)(19)(22)(10)(6)(7)(23)Self-service networkVNI 101, 192.168.1.0/24Overlay network10.0.1.0/24VNI 101VLAN 101 diff --git a/doc/networking-guide/source/figures/deploy-ovs-selfservice-flowns2.graffle b/doc/networking-guide/source/figures/deploy-ovs-selfservice-flowns2.graffle new file mode 100644 index 0000000000..6eced6be25 Binary files /dev/null and b/doc/networking-guide/source/figures/deploy-ovs-selfservice-flowns2.graffle differ diff --git 
a/doc/networking-guide/source/figures/deploy-ovs-selfservice-flowns2.png b/doc/networking-guide/source/figures/deploy-ovs-selfservice-flowns2.png new file mode 100644 index 0000000000..a4b4b2506f Binary files /dev/null and b/doc/networking-guide/source/figures/deploy-ovs-selfservice-flowns2.png differ diff --git a/doc/networking-guide/source/figures/deploy-ovs-selfservice-flowns2.svg b/doc/networking-guide/source/figures/deploy-ovs-selfservice-flowns2.svg new file mode 100644 index 0000000000..a87277a429 --- /dev/null +++ b/doc/networking-guide/source/figures/deploy-ovs-selfservice-flowns2.svg @@ -0,0 +1,3 @@ + + + Produced by OmniGraffle 6.6.1 2016-10-06 18:08:42 +0000Canvas 1Layer 1Network NodeOpen vSwitch - Self-service NetworksNetwork Traffic Flow - North/South Scenario 2Provider network 1VLAN 101, 203.0.113.0/24Compute NodeInstanceLinux Bridgeqbr(23)(21)(22)VNI 101Provider networkAggregate OVS Tunnel Bridgebr-tunOVS Integration Bridgebr-int(20)(19)(16)OVS Integration Bridgebr-intOVS Tunnel Bridgebr-tun OVS Provider Bridgebr-providerRouter Namespaceqrouter(15)(4)(3)(6)(13)(11)(12)(8)(7)(9)(10)(5)(2)(14)(18)(17)(1)Self-service networkVNI 101, 192.168.1.0/24Overlay network10.0.1.0/24VNI 101VLAN 101 diff --git a/doc/networking-guide/source/figures/deploy-ovs-selfservice-overview.graffle b/doc/networking-guide/source/figures/deploy-ovs-selfservice-overview.graffle new file mode 100644 index 0000000000..93ad8d94ee Binary files /dev/null and b/doc/networking-guide/source/figures/deploy-ovs-selfservice-overview.graffle differ diff --git a/doc/networking-guide/source/figures/deploy-ovs-selfservice-overview.png b/doc/networking-guide/source/figures/deploy-ovs-selfservice-overview.png new file mode 100644 index 0000000000..623793e2db Binary files /dev/null and b/doc/networking-guide/source/figures/deploy-ovs-selfservice-overview.png differ diff --git a/doc/networking-guide/source/figures/deploy-ovs-selfservice-overview.svg b/doc/networking-guide/source/figures/deploy-ovs-selfservice-overview.svg new file mode 100644 index 0000000000..febd75e441 --- /dev/null +++ b/doc/networking-guide/source/figures/deploy-ovs-selfservice-overview.svg @@ -0,0 +1,3 @@ + + + Produced by OmniGraffle 6.6.1 2016-10-06 18:08:15 +0000Canvas 1Layer 1 Network Node Compute NodesOpen vSwitch - Self-service NetworksOverviewInternetProvider network Controller NodeSQLDatabaseMessageBusNetworkingManagementML2 Plug-inAPIManagement network10.0.0.0/24Interface 1Open vSwitch AgentInterface 1Provider networkAggregateInstanceInterface 2FirewallProviderBridgeOpen vSwitch AgentLayer-3 AgentOverlay network10.0.1.0/24Self-service networkInterface 1Interface 2IntegrationBridgeProviderBridgeRouterNamespaceInterface 3TunnelBridgeIntegrationBridgeTunnelBridgeInterface 3 Physical Network InfrastructureDHCP AgentMetadata AgentDHCP NamespaceMetadataProcess diff --git a/doc/networking-guide/source/figures/scenario-classic-general.graffle b/doc/networking-guide/source/figures/scenario-classic-general.graffle deleted file mode 100644 index 58cbd79d6c..0000000000 Binary files a/doc/networking-guide/source/figures/scenario-classic-general.graffle and /dev/null differ diff --git a/doc/networking-guide/source/figures/scenario-classic-general.png b/doc/networking-guide/source/figures/scenario-classic-general.png deleted file mode 100644 index be41b13dbe..0000000000 Binary files a/doc/networking-guide/source/figures/scenario-classic-general.png and /dev/null differ diff --git a/doc/networking-guide/source/figures/scenario-classic-general.svg 
b/doc/networking-guide/source/figures/scenario-classic-general.svg
deleted file mode 100644
index 88c7550df0..0000000000
--- a/doc/networking-guide/source/figures/scenario-classic-general.svg
+++ /dev/null
@@ -1,3 +0,0 @@
-[SVG markup not recoverable; figure title: "General Architecture" (classic scenario overview)]

[The remaining hunks delete the legacy scenario figures under
doc/networking-guide/source/figures/: architecture overviews, hardware
requirements, network layouts, service layouts, and east/west and
north/south traffic-flow diagrams for the classic, DVR, L3HA, and provider
scenarios. Each figure is removed as an OmniGraffle source (.graffle), a PNG
render (.png), and an SVG export (.svg); the SVG text content is figure
residue and is not reproduced. Deleted figure basenames:

scenario-classic-hw, scenario-classic-lb-compute1,
scenario-classic-lb-compute2, scenario-classic-lb-flowew1,
scenario-classic-lb-flowew2, scenario-classic-lb-flowns1,
scenario-classic-lb-flowns2, scenario-classic-lb-network1,
scenario-classic-lb-network2, scenario-classic-lb-services,
scenario-classic-networks, scenario-classic-ovs-compute1,
scenario-classic-ovs-compute2, scenario-classic-ovs-flowew1,
scenario-classic-ovs-flowew2, scenario-classic-ovs-flowns1,
scenario-classic-ovs-flowns2, scenario-classic-ovs-network1,
scenario-classic-ovs-network2, scenario-classic-ovs-services,
scenario-dvr-compute1, scenario-dvr-compute2, scenario-dvr-flowew1,
scenario-dvr-flowns1, scenario-dvr-flowns2, scenario-dvr-general,
scenario-dvr-hw, scenario-dvr-network1, scenario-dvr-network2,
scenario-dvr-networks, scenario-dvr-services, scenario-l3ha-general,
scenario-l3ha-hw, scenario-l3ha-lb-compute1, scenario-l3ha-lb-compute2,
scenario-l3ha-lb-network1, scenario-l3ha-lb-network2,
scenario-l3ha-lb-services, scenario-l3ha-networks,
scenario-l3ha-ovs-compute1, scenario-l3ha-ovs-compute2,
scenario-l3ha-ovs-network1, scenario-l3ha-ovs-network2,
scenario-l3ha-ovs-services, scenario-provider-general, scenario-provider-hw,
scenario-provider-lb-compute1, scenario-provider-lb-compute2,
scenario-provider-lb-controller1, scenario-provider-lb-controller2,
scenario-provider-lb-flowew1, scenario-provider-lb-flowew2.]

diff --git a/doc/networking-guide/source/figures/scenario-provider-lb-flowew2.svg b/doc/networking-guide/source/figures/scenario-provider-lb-flowew2.svg
deleted file mode 100644
index 66c9ec8ee6..0000000000
--- a/doc/networking-guide/source/figures/scenario-provider-lb-flowew2.svg +++ /dev/null @@ -1,3 +0,0 @@ - - - Produced by OmniGraffle 6.0.5 2015-07-08 22:23ZCanvas 1Layer 1Compute Node 2Physical Network InfrastructureNetwork Traffic Flow - East/WestInstances on the same networkProvider network 1192.0.2.0/24Generic networkCompute Node 1Instance 1Linux Bridgeqbr(1)PorttapSwitch(3)Instance 2Linux Bridgeqbr(5)PorttapVLAN SubInterfaceVLANsInterface 2(unnumbered)PortPortVLANsInterface 2(unnumbered)(2)VLAN SubInterface(4) diff --git a/doc/networking-guide/source/figures/scenario-provider-lb-flowns1.graffle b/doc/networking-guide/source/figures/scenario-provider-lb-flowns1.graffle deleted file mode 100644 index 2b6f481b38..0000000000 Binary files a/doc/networking-guide/source/figures/scenario-provider-lb-flowns1.graffle and /dev/null differ diff --git a/doc/networking-guide/source/figures/scenario-provider-lb-flowns1.png b/doc/networking-guide/source/figures/scenario-provider-lb-flowns1.png deleted file mode 100644 index 783ebdcf11..0000000000 Binary files a/doc/networking-guide/source/figures/scenario-provider-lb-flowns1.png and /dev/null differ diff --git a/doc/networking-guide/source/figures/scenario-provider-lb-flowns1.svg b/doc/networking-guide/source/figures/scenario-provider-lb-flowns1.svg deleted file mode 100644 index 43d0591411..0000000000 --- a/doc/networking-guide/source/figures/scenario-provider-lb-flowns1.svg +++ /dev/null @@ -1,3 +0,0 @@ - - - Produced by OmniGraffle 6.0.5 2015-07-08 22:25ZCanvas 1Layer 1Physical Network InfrastructureController NodeNetwork Traffic Flow - North/SouthExternal network203.0.113.0/24Provider network192.0.2.0/24DHCP NamespaceqdhcpGeneric networkLinux BridgeqbrCompute Node 1InstanceLinux Bridgeqbr(1)Porttap(2)Interface 2(unnumbered)tapPorttapVLAN SubInterfaceSwitch(3)Interface 2(unnumbered)PortPortRouter(4)PortPortPortInternetVLANsVLANsPortPortVLAN SubInterface diff --git a/doc/networking-guide/source/figures/scenario-provider-lb-services.graffle b/doc/networking-guide/source/figures/scenario-provider-lb-services.graffle deleted file mode 100644 index 50e703f0a6..0000000000 Binary files a/doc/networking-guide/source/figures/scenario-provider-lb-services.graffle and /dev/null differ diff --git a/doc/networking-guide/source/figures/scenario-provider-lb-services.png b/doc/networking-guide/source/figures/scenario-provider-lb-services.png deleted file mode 100644 index 6ee067f135..0000000000 Binary files a/doc/networking-guide/source/figures/scenario-provider-lb-services.png and /dev/null differ diff --git a/doc/networking-guide/source/figures/scenario-provider-lb-services.svg b/doc/networking-guide/source/figures/scenario-provider-lb-services.svg deleted file mode 100644 index fb5e4f79b1..0000000000 --- a/doc/networking-guide/source/figures/scenario-provider-lb-services.svg +++ /dev/null @@ -1,3 +0,0 @@ - - - Produced by OmniGraffle 6.0.5 2015-04-23 15:33ZCanvas 1Layer 1 Controller NodeService LayoutNetworkingManagementLinux Network UtilitiesNetworkingLinux Bridge AgentNetworkingDHCP Agent Compute NodesLinux Network UtilitiesHypervisorComputeNetworkingML2 Plug-inNetworkingML2 Plug-inNetworkingLinux Bridge Agent diff --git a/doc/networking-guide/source/figures/scenario-provider-networks.graffle b/doc/networking-guide/source/figures/scenario-provider-networks.graffle deleted file mode 100644 index bbfb256691..0000000000 Binary files a/doc/networking-guide/source/figures/scenario-provider-networks.graffle and /dev/null differ diff --git 
a/doc/networking-guide/source/figures/scenario-provider-networks.png b/doc/networking-guide/source/figures/scenario-provider-networks.png deleted file mode 100644 index 3a04d243b9..0000000000 Binary files a/doc/networking-guide/source/figures/scenario-provider-networks.png and /dev/null differ diff --git a/doc/networking-guide/source/figures/scenario-provider-networks.svg b/doc/networking-guide/source/figures/scenario-provider-networks.svg deleted file mode 100644 index dc3d62b1d4..0000000000 --- a/doc/networking-guide/source/figures/scenario-provider-networks.svg +++ /dev/null @@ -1,3 +0,0 @@ - - - Produced by OmniGraffle 6.0.5 2015-07-13 16:35ZCanvas 1Layer 1 Controller Node Compute Node 1Interface 110.0.0.11/24Interface 110.0.0.31/24Network LayoutInterface 2(unnumbered)Management network10.0.0.0/24Generic network(One or more VLANs) Compute Node 2Interface 110.0.0.32/24Interface 2(unnumbered)Physical NetworkInfrastructureInternetInterface 2(unnumbered)External network diff --git a/doc/networking-guide/source/figures/scenario-provider-ovs-compute1.graffle b/doc/networking-guide/source/figures/scenario-provider-ovs-compute1.graffle deleted file mode 100644 index 68241f0429..0000000000 Binary files a/doc/networking-guide/source/figures/scenario-provider-ovs-compute1.graffle and /dev/null differ diff --git a/doc/networking-guide/source/figures/scenario-provider-ovs-compute1.png b/doc/networking-guide/source/figures/scenario-provider-ovs-compute1.png deleted file mode 100644 index 0301cc1a15..0000000000 Binary files a/doc/networking-guide/source/figures/scenario-provider-ovs-compute1.png and /dev/null differ diff --git a/doc/networking-guide/source/figures/scenario-provider-ovs-compute1.svg b/doc/networking-guide/source/figures/scenario-provider-ovs-compute1.svg deleted file mode 100644 index 2e5f7ead7e..0000000000 --- a/doc/networking-guide/source/figures/scenario-provider-ovs-compute1.svg +++ /dev/null @@ -1,3 +0,0 @@ - - - Produced by OmniGraffle 6.0.5 2015-07-13 16:37ZCanvas 1Layer 1 Compute NodeCompute Node ComponentsOverviewGeneric networkOpen vSwitchOpen vSwitch AgentIntegration Bridgebr-intProvider Bridgebr-exLinuxBridgeqbrSecurityGroupsInterface 2(unnumbered)VLANs diff --git a/doc/networking-guide/source/figures/scenario-provider-ovs-compute2.graffle b/doc/networking-guide/source/figures/scenario-provider-ovs-compute2.graffle deleted file mode 100644 index 17d5732d44..0000000000 Binary files a/doc/networking-guide/source/figures/scenario-provider-ovs-compute2.graffle and /dev/null differ diff --git a/doc/networking-guide/source/figures/scenario-provider-ovs-compute2.png b/doc/networking-guide/source/figures/scenario-provider-ovs-compute2.png deleted file mode 100644 index a5e3d6b94a..0000000000 Binary files a/doc/networking-guide/source/figures/scenario-provider-ovs-compute2.png and /dev/null differ diff --git a/doc/networking-guide/source/figures/scenario-provider-ovs-compute2.svg b/doc/networking-guide/source/figures/scenario-provider-ovs-compute2.svg deleted file mode 100644 index 1d62066899..0000000000 --- a/doc/networking-guide/source/figures/scenario-provider-ovs-compute2.svg +++ /dev/null @@ -1,3 +0,0 @@ - - - Produced by OmniGraffle 6.0.5 2015-07-13 16:38ZCanvas 1Layer 1Compute Node ComponentsConnectivityGeneric networkIntegration Bridgebr-intProvider Bridgebr-exPatchint-br-exVLANsInstance 1Linux BridgeqbrPatchphy-br-exPortint-2Interface 2(unnumbered)Portqvoeth0iptablesPortqvbPorttapProvider network 1Instance 2Linux BridgeqbrPortqvoeth0iptablesPortqvbPorttapProvider network 2 
diff --git a/doc/networking-guide/source/figures/scenario-provider-ovs-controller1.graffle b/doc/networking-guide/source/figures/scenario-provider-ovs-controller1.graffle deleted file mode 100644 index 7b796a783e..0000000000 Binary files a/doc/networking-guide/source/figures/scenario-provider-ovs-controller1.graffle and /dev/null differ diff --git a/doc/networking-guide/source/figures/scenario-provider-ovs-controller1.png b/doc/networking-guide/source/figures/scenario-provider-ovs-controller1.png deleted file mode 100644 index 84506ca8c0..0000000000 Binary files a/doc/networking-guide/source/figures/scenario-provider-ovs-controller1.png and /dev/null differ diff --git a/doc/networking-guide/source/figures/scenario-provider-ovs-controller1.svg b/doc/networking-guide/source/figures/scenario-provider-ovs-controller1.svg deleted file mode 100644 index eb0d6e7e1b..0000000000 --- a/doc/networking-guide/source/figures/scenario-provider-ovs-controller1.svg +++ /dev/null @@ -1,3 +0,0 @@ - - - Produced by OmniGraffle 6.0.5 2015-07-13 16:39ZCanvas 1Layer 1 Controller NodeController Node ComponentsOverviewOpen vSwitchOpen vSwitch AgentIntegration Bridgebr-intDHCPAgentDHCP NamespaceqdhcpGeneric networkVLANsProviderBridgebr-exInterface 2(unnumbered) diff --git a/doc/networking-guide/source/figures/scenario-provider-ovs-controller2.graffle b/doc/networking-guide/source/figures/scenario-provider-ovs-controller2.graffle deleted file mode 100644 index 46cfab0ec7..0000000000 Binary files a/doc/networking-guide/source/figures/scenario-provider-ovs-controller2.graffle and /dev/null differ diff --git a/doc/networking-guide/source/figures/scenario-provider-ovs-controller2.png b/doc/networking-guide/source/figures/scenario-provider-ovs-controller2.png deleted file mode 100644 index 86adf30a85..0000000000 Binary files a/doc/networking-guide/source/figures/scenario-provider-ovs-controller2.png and /dev/null differ diff --git a/doc/networking-guide/source/figures/scenario-provider-ovs-controller2.svg b/doc/networking-guide/source/figures/scenario-provider-ovs-controller2.svg deleted file mode 100644 index 6150e4d2f5..0000000000 --- a/doc/networking-guide/source/figures/scenario-provider-ovs-controller2.svg +++ /dev/null @@ -1,3 +0,0 @@ - - - Produced by OmniGraffle 6.0.5 2015-07-13 16:41ZCanvas 1Layer 1Controller Node ComponentsConnectivityDHCPNamespace 1qdhcpIntegration Bridgebr-intProvider Bridgebr-exPatchint-br-exPatchphy-br-exGeneric networkVLANsProvider network 1Portint-2Interface 2(unnumbered)DHCPNamespace 2qdhcpProvider network 2PorttaptaptapPorttap diff --git a/doc/networking-guide/source/figures/scenario-provider-ovs-flowew1.graffle b/doc/networking-guide/source/figures/scenario-provider-ovs-flowew1.graffle deleted file mode 100644 index b4df1299af..0000000000 Binary files a/doc/networking-guide/source/figures/scenario-provider-ovs-flowew1.graffle and /dev/null differ diff --git a/doc/networking-guide/source/figures/scenario-provider-ovs-flowew1.png b/doc/networking-guide/source/figures/scenario-provider-ovs-flowew1.png deleted file mode 100644 index f3d6d2bc41..0000000000 Binary files a/doc/networking-guide/source/figures/scenario-provider-ovs-flowew1.png and /dev/null differ diff --git a/doc/networking-guide/source/figures/scenario-provider-ovs-flowew1.svg b/doc/networking-guide/source/figures/scenario-provider-ovs-flowew1.svg deleted file mode 100644 index d22356f82c..0000000000 --- a/doc/networking-guide/source/figures/scenario-provider-ovs-flowew1.svg +++ /dev/null @@ -1,3 +0,0 @@ - - - Produced by 
OmniGraffle 6.0.5 2015-07-13 16:42ZCanvas 1Layer 1Compute Node 2Physical Network InfrastructureNetwork Traffic Flow - East/WestInstances on different networksProvider network 1192.0.2.0/24Generic networkCompute Node 1Instance 1Linux Bridgeqbr(1)Porttap(2)OVS Integration Bridgebr-intOVS Provider Bridgebr-exPortqvbPortqvoPatchint-br-exPatchphy-br-exPortint-2Switch(3)PortPortRouter(4)PortPortInstance 2Linux Bridgeqbr(6)Porttap(5)OVS Integration Bridgebr-intOVS Provider Bridgebr-exPortqvbPortqvoPatchint-br-exPatchphy-br-exPortint-2VLANsInterface 2(unnumbered)PortPortVLANsInterface 2(unnumbered)Provider network 2198.51.100.0/24 diff --git a/doc/networking-guide/source/figures/scenario-provider-ovs-flowew2.graffle b/doc/networking-guide/source/figures/scenario-provider-ovs-flowew2.graffle deleted file mode 100644 index 591205727b..0000000000 Binary files a/doc/networking-guide/source/figures/scenario-provider-ovs-flowew2.graffle and /dev/null differ diff --git a/doc/networking-guide/source/figures/scenario-provider-ovs-flowew2.png b/doc/networking-guide/source/figures/scenario-provider-ovs-flowew2.png deleted file mode 100644 index 44ad442016..0000000000 Binary files a/doc/networking-guide/source/figures/scenario-provider-ovs-flowew2.png and /dev/null differ diff --git a/doc/networking-guide/source/figures/scenario-provider-ovs-flowew2.svg b/doc/networking-guide/source/figures/scenario-provider-ovs-flowew2.svg deleted file mode 100644 index b7908e95b4..0000000000 --- a/doc/networking-guide/source/figures/scenario-provider-ovs-flowew2.svg +++ /dev/null @@ -1,3 +0,0 @@ - - - Produced by OmniGraffle 6.0.5 2015-07-13 16:43ZCanvas 1Layer 1Physical Network InfrastructureNetwork Traffic Flow - East/WestInstances on the same networkProvider network192.0.2.0/24Generic networkSwitch(3)PortPortCompute Node 1Instance 1Linux Bridgeqbr(1)Porttap(2)OVS Integration Bridgebr-intOVS Provider Bridgebr-exPortqvbPortqvoPatchint-br-exPatchphy-br-exPortint-2Compute Node 2Instance 2Linux Bridgeqbr(5)Porttap(4)OVS Integration Bridgebr-intOVS Provider Bridgebr-exPortqvbPortqvoPatchint-br-exPatchphy-br-exPortint-2VLANsVLANsInterface 2(unnumbered)Interface 2(unnumbered) diff --git a/doc/networking-guide/source/figures/scenario-provider-ovs-flowns1.graffle b/doc/networking-guide/source/figures/scenario-provider-ovs-flowns1.graffle deleted file mode 100644 index 83e3b56bcf..0000000000 Binary files a/doc/networking-guide/source/figures/scenario-provider-ovs-flowns1.graffle and /dev/null differ diff --git a/doc/networking-guide/source/figures/scenario-provider-ovs-flowns1.png b/doc/networking-guide/source/figures/scenario-provider-ovs-flowns1.png deleted file mode 100644 index 05a83266c2..0000000000 Binary files a/doc/networking-guide/source/figures/scenario-provider-ovs-flowns1.png and /dev/null differ diff --git a/doc/networking-guide/source/figures/scenario-provider-ovs-flowns1.svg b/doc/networking-guide/source/figures/scenario-provider-ovs-flowns1.svg deleted file mode 100644 index 451c93bbbb..0000000000 --- a/doc/networking-guide/source/figures/scenario-provider-ovs-flowns1.svg +++ /dev/null @@ -1,3 +0,0 @@ - - - Produced by OmniGraffle 6.0.5 2015-07-13 16:43ZCanvas 1Layer 1Physical Network InfrastructureController NodeNetwork Traffic Flow - North/SouthExternal network203.0.113.0/24OVS Provider Bridgebr-exProvider network192.0.2.0/24DHCP NamespaceqdhcpGeneric networkOVS Integration Bridgebr-intPatchphy-br-exPatchint-br-exCompute Node 1InstanceLinux Bridgeqbr(1)Porttap(2)OVS Integration Bridgebr-intOVS Provider 
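For example, an administrator might map a provider network to a VLAN on the
physical network with a command sequence similar to the following sketch. The
physical network name ``provider``, the VLAN ID ``101``, and the addressing are
placeholder values; adjust them to match the physical network infrastructure.

.. code-block:: console

   $ neutron net-create provider101 --shared \
     --provider:physical_network provider \
     --provider:network_type vlan \
     --provider:segmentation_id 101
   $ neutron subnet-create provider101 203.0.113.0/24 \
     --name provider101-v4 --gateway 203.0.113.1 \
     --allocation-pool start=203.0.113.101,end=203.0.113.200

Because these commands reference the physical network configuration, they
require administrative credentials.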
+.. _intro-os-networking-selfservice:
+
+Self-service networks
+---------------------
+
+Self-service networks primarily enable general (non-privileged) projects
+to manage networks without involving administrators. These networks are
+entirely virtual and require virtual routers to interact with provider
+and external networks such as the Internet. Self-service networks also
+usually provide DHCP and metadata services to instances.
+
+In most cases, self-service networks use overlay protocols such as VXLAN
+or GRE because they can support many more networks than layer-2 segmentation
+using VLAN tagging (802.1q). Furthermore, VLANs typically require additional
+configuration of physical network infrastructure.
+
+IPv4 self-service networks typically use private IP address ranges (RFC1918)
+and interact with provider networks via source NAT on virtual routers.
+Floating IP addresses enable access to instances from provider networks
+via destination NAT on virtual routers. IPv6 self-service networks always
+use public IP address ranges and interact with provider networks via
+virtual routers with static routes.
+
+The Networking service implements routers using a layer-3 agent that typically
+resides on at least one network node. Unlike provider networks, which connect
+instances to the physical network infrastructure at layer-2, self-service
+networks must traverse a layer-3 agent. Thus, oversubscription or failure
+of a layer-3 agent or network node can impact a significant number of
+self-service networks and the instances using them. Consider implementing one
+or more high-availability features to increase redundancy and performance
+of self-service networks.
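As an illustration, a project user might build a self-service network and
connect it to an existing external network through a virtual router, similar
to the following sketch. The names ``selfservice1``, ``router1``, and
``ext-net`` and the 192.168.1.0/24 range are placeholder values; ``ext-net``
is assumed to be a provider network created with ``--router:external True``.

.. code-block:: console

   $ neutron net-create selfservice1
   $ neutron subnet-create selfservice1 192.168.1.0/24 \
     --name selfservice1-v4 --gateway 192.168.1.1
   $ neutron router-create router1
   $ neutron router-interface-add router1 selfservice1-v4
   $ neutron router-gateway-set router1 ext-net

The router then performs SNAT for outbound traffic from 192.168.1.0/24, and
floating IP addresses allocated from ``ext-net`` provide inbound access via
DNAT.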
 Users create tenant networks for connectivity within projects. By default,
 they are fully isolated and are not shared with other projects.
 OpenStack Networking
@@ -86,13 +149,6 @@ GRE and VXLAN
 Internet. The router provides the ability to connect to instances directly
 from an external network using floating IP addresses.
-Provider networks
------------------
-
-The OpenStack administrator creates provider networks. These networks map to
-existing physical networks in the data center. Useful network types in this
-category are flat (untagged) and VLAN (802.1Q tagged).
-
 .. image:: figures/NetworkTypes.png
    :alt: Tenant and provider networks
@@ -105,7 +161,7 @@
 networking service for both tenant and provider networks. Subnets
 are used to allocate IP addresses when new ports are created on a
 network.
-Subnet Pools
+Subnet pools
 ------------
 End users normally can create subnets with any valid IP addresses without other
@@ -130,10 +186,10 @@ used on that port.
 Routers
 -------
-This is a logical component that forwards data packets between
-networks. It also provides L3 and NAT forwarding to provide external
-network access for VMs on tenant networks. Required by certain
-plug-ins only.
+Routers provide virtual layer-3 services such as routing and NAT
+between self-service and provider networks or among self-service
+networks belonging to a project. The Networking service uses a
+layer-3 agent to manage routers via namespaces.
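For example, on the node running the layer-3 agent, each router appears as a
``qrouter`` namespace that holds the router's interfaces, routes, and NAT
rules. In the following sketch, ``ROUTER_ID`` stands for the UUID of an
existing router.

.. code-block:: console

   $ ip netns
   qrouter-ROUTER_ID
   $ ip netns exec qrouter-ROUTER_ID ip addr show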
 Security groups
 ---------------
@@ -210,6 +266,20 @@ list available extensions by performing a GET on the
 is, an extension available in one API version might not be available in
 another.
+DHCP
+----
+
+The optional DHCP service manages IP addresses for instances on provider
+and self-service networks. The Networking service implements the DHCP
+service using an agent that manages ``qdhcp`` namespaces and the
+``dnsmasq`` service.
+
+Metadata
+--------
+
+The optional metadata service provides an API for instances to obtain
+metadata such as SSH keys.
+
 Service and component hierarchy
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
diff --git a/doc/networking-guide/source/intro.rst b/doc/networking-guide/source/intro.rst
index c4cb07c284..4634bed335 100644
--- a/doc/networking-guide/source/intro.rst
+++ b/doc/networking-guide/source/intro.rst
@@ -36,6 +36,11 @@ components:
 and tenant users to create and manage network services through a
 web-based graphical interface.
+.. note::
+
+   To reduce clutter, this guide omits command output that is not relevant
+   to the particular action.
+
 .. toctree::
    :maxdepth: 2
diff --git a/doc/networking-guide/source/migration-classic-to-l3ha.rst b/doc/networking-guide/source/migration-classic-to-l3ha.rst
index 9fb83f7e69..0c0a6fd2cb 100644
--- a/doc/networking-guide/source/migration-classic-to-l3ha.rst
+++ b/doc/networking-guide/source/migration-classic-to-l3ha.rst
@@ -1,8 +1,8 @@
-.. _migration-classic-to-l3ha:
+.. _migration-to-vrrp:
-======================
-Classic to VRRP (L3HA)
-======================
+==============================
+Add VRRP to an existing router
+==============================
 This section describes the process of migrating from a classic router to an
 L3 HA router, which is available starting from the Mitaka release.
@@ -15,8 +15,8 @@ of bandwidth constraints that limit performance. However, it supports random
 distribution of routers on different network nodes to reduce the chances of
 bandwidth constraints and to improve scaling.
-This section summarizes parts of :ref:`scenario-l3ha-ovs` and
-:ref:`scenario-l3ha-lb`. For details regarding needed infrastructure and
+This section references parts of :ref:`deploy-lb-ha-vrrp` and
+:ref:`deploy-ovs-ha-vrrp`. For details regarding needed infrastructure and
 configuration to allow actual L3 HA deployment, read the relevant guide
 before continuing with the migration process.
diff --git a/doc/networking-guide/source/scenario-classic-lb.rst b/doc/networking-guide/source/scenario-classic-lb.rst
deleted file mode 100644
index b54f1a5d22..0000000000
--- a/doc/networking-guide/source/scenario-classic-lb.rst
+++ /dev/null
@@ -1,1025 +0,0 @@
-.. _scenario-classic-lb:
-
-===================================
-Scenario: Classic with Linux Bridge
-===================================
-
-This scenario describes a classic implementation of the OpenStack
-Networking service using the ML2 plug-in with Linux bridge.
-
-The classic implementation contributes the networking portion of self-service
-virtual data center infrastructure by providing a method for regular
-(non-privileged) users to manage virtual networks within a project and
-includes the following components:
-
-* Project (tenant) networks
-
-  Project networks provide connectivity to instances for a particular
-  project. Regular (non-privileged) users can manage project networks
-  within the allocation that an administrator or operator defines for
-  them.
Project networks can use VLAN or VXLAN transport methods - depending on the allocation. Project networks generally use private - IP address ranges (RFC1918) and lack connectivity to external networks - such as the Internet. Networking refers to IP addresses on project - networks as *fixed* IP addresses. - -* External networks - - External networks provide connectivity to external networks such as - the Internet. Only administrative (privileged) users can manage external - networks because they interface with the physical network infrastructure. - External networks can use flat or VLAN transport methods depending on the - physical network infrastructure and generally use public IP address - ranges. - - .. note:: - - A flat network essentially uses the untagged or native VLAN. Similar to - layer-2 properties of physical networks, only one flat network can exist - per external bridge. In most cases, production deployments should use - VLAN transport for external networks. - -* Routers - - Routers typically connect project and external networks. By default, they - implement SNAT to provide outbound external connectivity for instances on - project networks. Each router uses an IP address in the external network - allocation for SNAT. Routers also use DNAT to provide inbound external - connectivity for instances on project networks. Networking refers to IP - addresses on routers that provide inbound external connectivity for - instances on project networks as *floating* IP addresses. Routers can also - connect project networks that belong to the same project. - -* Supporting services - - Other supporting services include DHCP and metadata. The DHCP service - manages IP addresses for instances on project networks. The metadata - service provides an API for instances on project networks to obtain - metadata such as SSH keys. - -The example configuration creates one flat external network and one VXLAN -project network. However, this configuration also supports VLAN external -and project networks. The Linux bridge agent does not support GRE project -networks. - -Prerequisites -~~~~~~~~~~~~~ - -These prerequisites define the minimal physical infrastructure and immediate -OpenStack service dependencies necessary to deploy this scenario. For example, -the Networking service immediately depends on the Identity service and the -Compute service immediately depends on the Networking service. These -dependencies lack services such as the Image service because the Networking -service does not immediately depend on it. However, the Compute service -depends on the Image service to launch an instance. The example configuration -in this scenario assumes basic configuration knowledge of Networking service -components. - -Infrastructure --------------- - -#. One controller node with one network interface: management. -#. One network node with four network interfaces: management, project tunnel - networks, VLAN project networks, and external (typically the Internet). -#. At least one compute nodes with three network interfaces: management, - project tunnel networks, and VLAN project networks. - -To improve understanding of network traffic flow, the network and compute -nodes contain a separate network interface for VLAN project networks. In -production environments, you can use any network interface for VLAN project -networks. - -In the example configuration, the management network uses 10.0.0.0/24, -the tunnel network uses 10.0.1.0/24, and the external network uses -203.0.113.0/24. 
The VLAN network does not require an IP address range -because it only handles layer-2 connectivity. - -.. image:: figures/scenario-classic-hw.png - :alt: Hardware layout - -.. image:: figures/scenario-classic-networks.png - :alt: Network layout - -.. image:: figures/scenario-classic-lb-services.png - :alt: Service layout - -.. note:: - - For VLAN external and project networks, the physical network infrastructure - must support VLAN tagging. For best performance, 10+ Gbps networks should - support jumbo frames. - Using VXLAN project networks requires sufficient kernel support. - Kernel version 3.9 or newer is recommended. - -OpenStack services - controller node ------------------------------------- - -#. Operational SQL server with ``neutron`` database and appropriate - configuration in the ``neutron.conf`` file. -#. Operational message queue service with appropriate configuration - in the ``neutron.conf`` file. -#. Operational OpenStack Identity service with appropriate configuration - in the ``neutron.conf`` file. -#. Operational OpenStack Compute controller/management service with - appropriate configuration to use neutron in the ``nova.conf`` file. -#. Neutron server service, ML2 plug-in, and any dependencies. - -OpenStack services - network node ---------------------------------- - -#. Operational OpenStack Identity service with appropriate configuration - in the ``neutron.conf`` file. -#. Linux bridge agent, L3 agent, DHCP agent, metadata agent, and any - dependencies. - -OpenStack services - compute nodes ----------------------------------- - -#. Operational OpenStack Identity service with appropriate configuration - in the ``neutron.conf`` file. -#. Operational OpenStack Compute controller/management service with - appropriate configuration to use neutron in the ``nova.conf`` file. -#. Linux bridge agent and any dependencies. - -Architecture -~~~~~~~~~~~~ - -The classic architecture provides basic virtual networking components in -your environment. Routing among project and external networks resides -completely on the network node. Although more simple to deploy than -other architectures, performing all functions on the network node -creates a single point of failure and potential performance issues. -Consider deploying DVR or L3 HA architectures in production environments -to provide redundancy and increase performance. However, the DVR architecture -requires Open vSwitch. - -.. image:: figures/scenario-classic-general.png - :alt: Architecture overview - -The network node contains the following network components: - -#. Linux bridge agent managing virtual switches, connectivity among - them, and interaction via virtual ports with other network components - such as namespaces and underlying interfaces. -#. DHCP agent managing the ``qdhcp`` namespaces. The ``qdhcp`` namespaces - provide DHCP services for instances using project networks. -#. L3 agent managing the ``qrouter`` namespaces. The ``qrouter`` namespaces - provide routing between project and external networks and among project - networks. They also route metadata traffic between instances and the - metadata agent. -#. Metadata agent handling metadata operations for instances. - -.. image:: figures/scenario-classic-lb-network1.png - :alt: Network node components - overview - -.. 
image:: figures/scenario-classic-lb-network2.png - :alt: Network node components - connectivity - -The compute nodes contain the Linux bridge agent managing virtual switches, -connectivity among them, and interaction via virtual ports with other network -components such as namespaces, security groups, and underlying interfaces. - -.. image:: figures/scenario-classic-lb-compute1.png - :alt: Compute node components - overview - -.. image:: figures/scenario-classic-lb-compute2.png - :alt: Compute node components - connectivity - -Packet flow -~~~~~~~~~~~ - -.. note:: - - *North-south* network traffic travels between an instance and - external network, typically the Internet. *East-west* network - traffic travels between instances. - -Case 1: North-south for instances with a fixed IP address ---------------------------------------------------------- - -For instances with a fixed IP address, the network node routes *north-south* -network traffic between project and external networks. - -* External network - - * Network 203.0.113.0/24 - * IP address allocation from 203.0.113.101 to 203.0.113.200 - * Project network router interface 203.0.113.101 *TR* - -* Project network - - * Network 192.168.1.0/24 - * Gateway 192.168.1.1 with MAC address *TG* - -* Compute node 1 - - * Instance 1 192.168.1.11 with MAC address *I1* - -* Instance 1 resides on compute node 1 and uses a project network. -* The instance sends a packet to a host on the external network. - -.. note:: - - Although the diagram shows both VXLAN and VLAN project networks, the packet - flow only considers one instance using a VXLAN project network. - -The following steps involve compute node 1: - -#. For VXLAN project networks: - - #. The instance 1 ``tap`` interface (1) forwards the packet to the tunnel - bridge ``qbr``. The packet contains destination MAC address *TG* - because the destination resides on another network. - #. Security group rules (2) on the tunnel bridge ``qbr`` handle state - tracking for the packet. - #. The tunnel bridge ``qbr`` forwards the packet to the logical tunnel - interface ``vxlan-sid`` (3) where *sid* contains the project network - segmentation ID. - #. The physical tunnel interface forwards the packet to the network - node. - -#. For VLAN project networks: - - #. The instance 1 ``tap`` interface forwards the packet to the VLAN - bridge ``qbr``. The packet contains destination MAC address *TG* - because the destination resides on another network. - #. Security group rules on the VLAN bridge ``qbr`` handle state tracking - for the packet. - #. The VLAN bridge ``qbr`` forwards the packet to the logical VLAN - interface ``device.sid`` where *device* references the underlying - physical VLAN interface and *sid* contains the project network - segmentation ID. - #. The logical VLAN interface ``device.sid`` forwards the packet to the - network node via the physical VLAN interface. - -The following steps involve the network node: - -#. For VXLAN project networks: - - #. The physical tunnel interface forwards the packet to the logical - tunnel interface ``vxlan-sid`` (4) where *sid* contains the project - network segmentation ID. - #. The logical tunnel interface ``vxlan-sid`` forwards the packet to the - tunnel bridge ``qbr``. - #. The tunnel bridge ``qbr`` forwards the packet to the ``qr`` interface (5) - in the router namespace ``qrouter``. The ``qr`` interface contains the - project network router interface IP address *TG*. - -#. For VLAN project networks: - - #. 
The physical VLAN interface forwards the packet to the logical VLAN - interface ``device.sid`` where *device* references the underlying - physical VLAN interface and *sid* contains the project network - segmentation ID. - #. The logical VLAN interface ``device.sid`` forwards the packet to the - VLAN bridge ``qbr``. - #. The VLAN bridge ``qbr`` forwards the packet to the ``qr`` interface in - the router namespace ``qrouter``. The ``qr`` interface contains the - project network 1 gateway IP address *TG*. - -#. The *iptables* service (6) performs SNAT on the packet using the ``qg`` - interface (7) as the source IP address. The ``qg`` interface contains - the project network router interface IP address *TR*. -#. The router namespace ``qrouter`` forwards the packet to the external - bridge ``qbr``. -#. The external bridge ``qbr`` forwards the packet to the external network - via the physical external interface. - -.. note:: - - Return traffic follows similar steps in reverse. - -.. image:: figures/scenario-classic-lb-flowns1.png - :alt: Network traffic flow - north/south with fixed IP address - -Case 2: North-south for instances with a floating IP address ------------------------------------------------------------- - -For instances with a floating IP address, the network node routes -*north-south* network traffic between project and external networks. - -* External network - - * Network 203.0.113.0/24 - * IP address allocation from 203.0.113.101 to 203.0.113.200 - * Project network router interface 203.0.113.101 *TR* - -* Project network - - * Network 192.168.1.0/24 - * Gateway 192.168.1.1 with MAC address *TG* - -* Compute node 1 - - * Instance 1 192.168.1.11 with MAC address *I1* and floating - IP address 203.0.113.102 *F1* - -* Instance 1 resides on compute node 1 and uses a project network. -* The instance receives a packet from a host on the external network. - -.. note:: - - Although the diagram shows both VXLAN and VLAN project networks, the packet - flow only considers one instance using a VXLAN project network. - -The following steps involve the network node: - -#. The physical external interface forwards the packet to the external - bridge ``qbr``. -#. The external bridge ``qbr`` forwards the packet to the ``qg`` interface (1) - in the router namespace ``qrouter``. The ``qg`` interface contains the - instance floating IP address *F1*. -#. The *iptables* service (2) performs DNAT on the packet using the ``qr`` - interface (3) as the source IP address. The ``qr`` interface contains the - project network gateway IP address *TR*. -#. For VXLAN project networks: - - #. The router namespace ``qrouter`` forwards the packet to the tunnel - bridge ``qbr``. - #. The tunnel bridge ``qbr`` forwards the packet to the logical tunnel - interface ``vxlan-sid`` (4) where *sid* contains the project network - segmentation ID. - #. The physical tunnel interface forwards the packet to compute node 1. - -#. For VLAN project networks: - - #. The router namespace ``qrouter`` forwards the packet to the VLAN - bridge ``qbr``. - #. The VLAN bridge ``qbr`` forwards the packet to the logical VLAN - interface ``device.sid`` where *device* references the underlying - physical VLAN interface and *sid* contains the project network - segmentation ID. - #. The physical VLAN interface forwards the packet to compute node 1. - -The following steps involve compute node 1: - -#. For VXLAN project networks: - - #. 
The physical tunnel interface forwards the packet to the logical - tunnel interface ``vxlan-sid`` (5) where *sid* contains the project - network segmentation ID. - #. The logical tunnel interface ``vxlan-sid`` forwards the packet to the - tunnel bridge ``qbr``. - #. Security group rules (6) on the tunnel bridge ``qbr`` handle firewalling - and state tracking for the packet. - #. The tunnel bridge ``qbr`` forwards the packet to the ``tap`` - interface (7) on instance 1. - -#. For VLAN project networks: - - #. The physical VLAN interface forwards the packet to the logical - VLAN interface ``device.sid`` where *device* references the underlying - physical VLAN interface and *sid* contains the project network - segmentation ID. - #. The logical VLAN interface ``device.sid`` forwards the packet to the - VLAN bridge ``qbr``. - #. Security group rules on the VLAN bridge ``qbr`` handle firewalling - and state tracking for the packet. - #. The VLAN bridge ``qbr`` forwards the packet to the ``tap`` interface - on instance 1. - -.. note:: - - Return traffic follows similar steps in reverse. - -.. image:: figures/scenario-classic-lb-flowns2.png - :alt: Network traffic flow - north/south with a floating IP address - -Case 3: East-west for instances on different networks ------------------------------------------------------ - -For instances with a fixed or floating IP address, the network node -routes *east-west* network traffic among project networks using the -same project router. - -* Project network 1 - - * Network: 192.168.1.0/24 - * Gateway: 192.168.1.1 with MAC address *TG1* - -* Project network 2 - - * Network: 192.168.2.0/24 - * Gateway: 192.168.2.1 with MAC address *TG2* - -* Compute node 1 - - * Instance 1: 192.168.1.11 with MAC address *I1* - -* Compute node 2 - - * Instance 2: 192.168.2.11 with MAC address *I2* - -* Instance 1 resides on compute node 1 and uses VXLAN project network 1. -* Instance 2 resides on compute node 2 and uses VLAN project network 2. -* Both project networks reside on the same router. -* Instance 1 sends a packet to instance 2. - -The following steps involve compute node 1: - -#. The instance 1 ``tap`` interface (1) forwards the packet to the tunnel - bridge ``qbr``. The packet contains destination MAC address *TG1* - because the destination resides on another network. -#. Security group rules (2) on the tunnel bridge ``qbr`` handle - state tracking for the packet. -#. The tunnel bridge ``qbr`` forwards the packet to the logical tunnel - interface ``vxlan-sid`` (3) where *sid* contains the project network - segmentation ID. -#. The physical tunnel interface forwards the packet to the network - node. - -The following steps involve the network node: - -#. The physical tunnel interface forwards the packet to the logical - tunnel interface ``vxlan-sid`` (4) where *sid* contains the project - network segmentation ID. -#. The logical tunnel interface ``vxlan-sid`` forwards the packet to the - tunnel bridge ``qbr``. -#. The tunnel bridge ``qbr`` forwards the packet to the ``qr-1`` - interface (5) in the router namespace ``qrouter``. The ``qr-1`` - interface contains the project network 1 gateway IP address - *TG1*. -#. The router namespace ``qrouter`` routes the packet (6) to the ``qr-2`` - interface (7). The ``qr-2`` interface contains the project network 2 - gateway IP address *TG2*. -#. The router namespace ``qrouter`` forwards the packet to the VLAN - bridge ``qbr``. -#. 
The VLAN bridge ``qbr`` forwards the packet to the logical VLAN - interface ``vlan.sid`` (8) where *sid* contains the project network - segmentation ID. -#. The physical VLAN interface forwards the packet to compute node 2. - -The following steps involve compute node 2: - -#. The physical VLAN interface forwards the packet to the logical VLAN - interface ``vlan.sid`` (9) where *sid* contains the project network - segmentation ID. -#. The logical VLAN interface ``vlan.sid`` forwards the packet to the - VLAN bridge ``qbr``. -#. Security group rules (10) on the VLAN bridge ``qbr`` handle firewalling - and state tracking for the packet. -#. The VLAN bridge ``qbr`` forwards the packet to the ``tap`` interface (11) - on instance 2. - -.. note:: - - Return traffic follows similar steps in reverse. - -.. image:: figures/scenario-classic-lb-flowew1.png - :alt: Network traffic flow - east/west for instances on different networks - -Case 4: East-west for instances on the same network ---------------------------------------------------- - -For instances with a fixed or floating IP address, the project network -switches *east-west* network traffic among instances without using a -project router on the network node. - -* Project network - - * Network: 192.168.1.0/24 - -* Compute node 1 - - * Instance 1: 192.168.1.11 with MAC address *I1* - -* Compute node 2 - - * Instance 2: 192.168.1.12 with MAC address *I2* - -* Instance 1 resides on compute node 1. -* Instance 2 resides on compute node 2. -* Both instances use the same VXLAN project network. -* Instance 1 sends a packet to instance 2. -* The Linux bridge agent handles switching within the project network. - -The following steps involve compute node 1: - -#. The instance 1 ``tap`` interface (1) forwards the packet to the tunnel - bridge ``qbr``. The packet contains destination MAC address *I2* - because the destination resides the same network. -#. Security group rules (2) on the tunnel bridge ``qbr`` handle - state tracking for the packet. -#. The tunnel bridge ``qbr`` forwards the packet to the logical tunnel - interface ``vxlan-sid`` (3) where *sid* contains the project network - segmentation ID. -#. The physical tunnel interface forwards the packet to compute node 2. - -The following steps involve compute node 2: - -#. The physical tunnel interface forwards the packet to the logical - tunnel interface ``vxlan-sid`` (4) where *sid* contains the project network - segmentation ID. -#. The logical tunnel interface ``vxlan-sid`` forwards the packet to the - tunnel bridge ``qbr``. -#. Security group rules (5) on the tunnel bridge ``qbr`` handle firewalling - and state tracking for the packet. -#. The tunnel bridge ``qbr`` forwards the packet to the ``tap`` interface (6) - on instance 2. - -.. note:: - - Return traffic follows similar steps in reverse. - -.. image:: figures/scenario-classic-lb-flowew2.png - :alt: Network traffic flow - east/west for instances on the same network - -Example configuration -~~~~~~~~~~~~~~~~~~~~~ - -Use the following example configuration as a template to deploy this -scenario in your environment. - -Controller node ---------------- - -#. In the ``neutron.conf`` file: - - * Configure common options: - - .. code-block:: ini - - [DEFAULT] - core_plugin = ml2 - service_plugins = router - allow_overlapping_ips = True - - * If necessary, :ref:`configure MTU `. - -#. In the ``ml2_conf.ini`` file: - - * Configure drivers and network types: - - .. 
code-block:: ini - - [ml2] - type_drivers = flat,vlan,vxlan - tenant_network_types = vlan,vxlan - mechanism_drivers = linuxbridge,l2population - extension_drivers = port_security - - * Configure network mappings and ID ranges: - - .. code-block:: ini - - [ml2_type_flat] - flat_networks = external - - [ml2_type_vlan] - network_vlan_ranges = external,vlan:MIN_VLAN_ID:MAX_VLAN_ID - - [ml2_type_vxlan] - vni_ranges = MIN_VXLAN_ID:MAX_VXLAN_ID - - Replace ``MIN_VLAN_ID``, ``MAX_VLAN_ID``, ``MIN_VXLAN_ID``, and - ``MAX_VXLAN_ID`` with VLAN and VXLAN ID minimum and maximum values suitable - for your environment. - - .. note:: - - The first value in the ``tenant_network_types`` option becomes the - default project network type when a regular user creates a network. - - .. note:: - - The ``external`` value in the ``network_vlan_ranges`` option lacks VLAN - ID ranges to support use of arbitrary VLAN IDs by administrative users. - - * Configure the security group driver: - - .. code-block:: ini - - [securitygroup] - firewall_driver = iptables - - * If necessary, :ref:`configure MTU `. - -#. Start the following services: - - * Server - -Network node ------------- - -#. In the ``linuxbridge_agent.ini`` file, configure the Linux bridge agent: - - .. code-block:: ini - - [linux_bridge] - physical_interface_mappings = vlan:PROJECT_VLAN_INTERFACE,external:EXTERNAL_INTERFACE - - [vxlan] - local_ip = TUNNEL_INTERFACE_IP_ADDRESS - l2_population = True - - [securitygroup] - firewall_driver = iptables - - Replace ``PROJECT_VLAN_INTERFACE`` and ``EXTERNAL_INTERFACE`` with the name - of the underlying interface that handles VLAN project networks and external - networks, respectively. Replace ``TUNNEL_INTERFACE_IP_ADDRESS`` with the IP - address of the interface that handles project tunnel networks. - -#. In the ``l3_agent.ini`` file, configure the L3 agent: - - .. code-block:: ini - - [DEFAULT] - interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver - external_network_bridge = - - .. note:: - - The ``external_network_bridge`` option intentionally contains - no value. - -#. In the ``dhcp_agent.ini`` file, configure the DHCP agent: - - .. code-block:: ini - - [DEFAULT] - interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver - enable_isolated_metadata = True - -#. In the ``metadata_agent.ini`` file, configure the metadata agent: - - .. code-block:: ini - - [DEFAULT] - nova_metadata_ip = controller - metadata_proxy_shared_secret = METADATA_SECRET - - Replace ``METADATA_SECRET`` with a suitable value for your environment. - -#. Start the following services: - - * Linux bridge agent - * L3 agent - * DHCP agent - * Metadata agent - -Compute nodes -------------- - -#. In the ``linuxbridge_agent.ini`` file, configure the Linux bridge agent: - - .. code-block:: ini - - [linux_bridge] - physical_interface_mappings = vlan:PROJECT_VLAN_INTERFACE - - [vxlan] - local_ip = TUNNEL_INTERFACE_IP_ADDRESS - l2_population = True - - [securitygroup] - firewall_driver = iptables - - Replace ``PROJECT_VLAN_INTERFACE`` with the name of the underlying - interface that handles VLAN project networks and external networks, - respectively. Replace ``TUNNEL_INTERFACE_IP_ADDRESS`` with the IP address - of the interface that handles VXLAN project networks. - -#. Start the following services: - - * Linux bridge agent - -Verify service operation ------------------------- - -#. Source the administrative project credentials. -#. Verify presence and operation of the agents: - - .. 
code-block:: console - - $ neutron agent-list - - +--------------------------------------+--------------------+-------------+-------+----------------+---------------------------+ - | id | agent_type | host | alive | admin_state_up | binary | - +--------------------------------------+--------------------+-------------+-------+----------------+---------------------------+ - | 0146e482-f94a-4996-9e2a-f0cafe2575c5 | L3 agent | network1 | :-) | True | neutron-l3-agent | - | 0dd4af0d-aafd-4036-b240-db12cf2a1aa9 | Linux bridge agent | compute2 | :-) | True | neutron-linuxbridge-agent | - | 2f9e5434-575e-4079-bcca-5e559c0a5ba7 | Linux bridge agent | network1 | :-) | True | neutron-linuxbridge-agent | - | 4105fd85-7a8f-4956-b104-26a600670530 | Linux bridge agent | compute1 | :-) | True | neutron-linuxbridge-agent | - | 8c15992a-3abc-4b14-aebc-60065e5090e6 | Metadata agent | network1 | :-) | True | neutron-metadata-agent | - | aa2e8f3e-b53e-4fb9-8381-67dcad74e940 | DHCP agent | network1 | :-) | True | neutron-dhcp-agent | - +--------------------------------------+--------------------+-------------+-------+----------------+---------------------------+ - -Create initial networks ------------------------ - -This example creates a flat external network and a VXLAN project network. - -#. Source the administrative project credentials. -#. Create the external network: - - .. code-block:: console - - $ neutron net-create ext-net --router:external True \ - --provider:physical_network external --provider:network_type flat - - Created a new network: - +---------------------------+--------------------------------------+ - | Field | Value | - +---------------------------+--------------------------------------+ - | admin_state_up | True | - | id | d57703fd-5571-404c-abca-f59a13f3c507 | - | name | ext-net | - | provider:network_type | flat | - | provider:physical_network | external | - | provider:segmentation_id | | - | router:external | True | - | shared | False | - | status | ACTIVE | - | subnets | | - | tenant_id | 897d7360ac3441209d00fbab5f0b5c8b | - +---------------------------+--------------------------------------+ - -#. Create a subnet on the external network: - - .. code-block:: console - - $ neutron subnet-create ext-net --name ext-subnet --allocation-pool \ - start=203.0.113.101,end=203.0.113.200 --disable-dhcp \ - --gateway 203.0.113.1 203.0.113.0/24 - - Created a new subnet: - +-------------------+----------------------------------------------------+ - | Field | Value | - +-------------------+----------------------------------------------------+ - | allocation_pools | {"start": "203.1.113.101", "end": "203.0.113.200"} | - | cidr | 201.0.113.0/24 | - | dns_nameservers | | - | enable_dhcp | False | - | gateway_ip | 203.0.113.1 | - | host_routes | | - | id | 020bb28d-0631-4af2-aa97-7374d1d33557 | - | ip_version | 4 | - | ipv6_address_mode | | - | ipv6_ra_mode | | - | name | ext-subnet | - | network_id | d57703fd-5571-404c-abca-f59a13f3c507 | - | tenant_id | 897d7360ac3441209d00fbab5f0b5c8b | - +-------------------+----------------------------------------------------+ - -.. note:: - - The example configuration contains ``vlan`` as the first project network - type. Only an administrative user can create other types of networks such as - VXLAN. The following commands use the ``admin`` project credentials to - create a VXLAN project network. - -#. Obtain the ID of a regular project. For example, using the ``demo`` project: - - .. 
code-block:: console - - $ openstack project show demo - - +-------------+----------------------------------+ - | Field | Value | - +-------------+----------------------------------+ - | description | Demo Project | - | enabled | True | - | id | 8dbcb34c59a741b18e71c19073a47ed5 | - | name | demo | - +-------------+----------------------------------+ - -#. Create the project network: - - .. code-block:: console - - $ neutron net-create demo-net --tenant-id 8dbcb34c59a741b18e71c19073a47ed5 \ - --provider:network_type vxlan - - Created a new network: - +---------------------------+--------------------------------------+ - | Field | Value | - +---------------------------+--------------------------------------+ - | admin_state_up | True | - | id | 3a0663f6-9d5d-415e-91f2-0f1bfefbe5ed | - | name | demo-net | - | provider:network_type | vxlan | - | provider:physical_network | | - | provider:segmentation_id | 1 | - | router:external | False | - | shared | False | - | status | ACTIVE | - | subnets | | - | tenant_id | 8dbcb34c59a741b18e71c19073a47ed5 | - +---------------------------+--------------------------------------+ - -#. Source the regular project credentials. The following steps use the - ``demo`` project. -#. Create a subnet on the project network: - - .. code-block:: console - - $ neutron subnet-create demo-net --name demo-subnet --gateway 192.168.1.1 \ - 192.168.1.0/24 - - Created a new subnet: - +-------------------+--------------------------------------------------+ - | Field | Value | - +-------------------+--------------------------------------------------+ - | allocation_pools | {"start": "192.168.1.2", "end": "192.168.1.254"} | - | cidr | 192.168.1.0/24 | - | dns_nameservers | | - | enable_dhcp | True | - | gateway_ip | 192.168.1.1 | - | host_routes | | - | id | 1d5ab804-8925-46b0-a7b4-e520dc247284 | - | ip_version | 4 | - | ipv6_address_mode | | - | ipv6_ra_mode | | - | name | demo-subnet | - | network_id | 3a0663f6-9d5d-415e-91f2-0f1bfefbe5ed | - | tenant_id | 8dbcb34c59a741b18e71c19073a47ed5 | - +-------------------+--------------------------------------------------+ - -#. Create a project router: - - .. code-block:: console - - $ neutron router-create demo-router - - +-----------------------+--------------------------------------+ - | Field | Value | - +-----------------------+--------------------------------------+ - | admin_state_up | True | - | external_gateway_info | | - | id | 299b2363-d656-401d-a3a5-55b4378e7fbb | - | name | demo-router | - | routes | | - | status | ACTIVE | - | tenant_id | 8dbcb34c59a741b18e71c19073a47ed5 | - +-----------------------+--------------------------------------+ - -#. Add the project subnet as an interface on the router: - - .. code-block:: console - - $ neutron router-interface-add demo-router demo-subnet - Added interface 4f819fd4-be4d-42ab-bd47-ba1b2cb39006 to router demo-router. - -#. Add a gateway to the external network on the router: - - .. code-block:: console - - $ neutron router-gateway-set demo-router ext-net - Set gateway for router demo-router - -Verify network operation ------------------------- - -#. On the network node, verify creation of the ``qrouter`` and ``qdhcp`` - namespaces: - - .. code-block:: console - - $ ip netns - qdhcp-3a0663f6-9d5d-415e-91f2-0f1bfefbe5ed - qrouter-299b2363-d656-401d-a3a5-55b4378e7fbb - - .. note:: - - The ``qdhcp`` namespace might not exist until launching an instance. - -#. 
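(Optional) Inspect the ``qrouter`` namespace to confirm that it contains
   interfaces for both the project network gateway and the external network.
   This is only a quick sanity check: the namespace name below reuses the
   example router ID shown earlier, so substitute the value from your
   environment, and expect interface names to vary.

   .. code-block:: console

      $ ip netns exec qrouter-299b2363-d656-401d-a3a5-55b4378e7fbb ip addr show

#.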
Determine the external network gateway IP address for the project network - on the router, typically the lowest IP address in the external subnet IP - allocation range: - - .. code-block:: console - - $ neutron router-port-list demo-router - - +--------------------------------------+------+-------------------+--------------------------------------------------------------------------------------+ - | id | name | mac_address | fixed_ips | - +--------------------------------------+------+-------------------+--------------------------------------------------------------------------------------+ - | b1a894fd-aee8-475c-9262-4342afdc1b58 | | fa:16:3e:c1:20:55 | {"subnet_id": "1d5ab804-8925-46b0-a7b4-e520dc247284", "ip_address": "192.168.1.1"} | - | ff5f93c6-3760-4902-a401-af78ff61ce99 | | fa:16:3e:54:d7:8c | {"subnet_id": "020bb28d-0631-4af2-aa97-7374d1d33557", "ip_address": "203.0.113.101"} | - +--------------------------------------+------+-------------------+--------------------------------------------------------------------------------------+ - -#. On the controller node or any host with access to the external network, - ping the external network gateway IP address on the project router: - - .. code-block:: console - - $ ping -c 4 203.0.113.101 - PING 203.0.113.101 (203.0.113.101) 56(84) bytes of data. - 64 bytes from 203.0.113.101: icmp_req=1 ttl=64 time=0.619 ms - 64 bytes from 203.0.113.101: icmp_req=2 ttl=64 time=0.189 ms - 64 bytes from 203.0.113.101: icmp_req=3 ttl=64 time=0.165 ms - 64 bytes from 203.0.113.101: icmp_req=4 ttl=64 time=0.216 ms - - --- 203.0.113.101 ping statistics --- - 4 packets transmitted, 4 received, 0% packet loss, time 2999ms - rtt min/avg/max/mdev = 0.165/0.297/0.619/0.187 ms - -#. Source the regular project credentials. The following steps use the - ``demo`` project. -#. Launch an instance with an interface on the project network. -#. Obtain console access to the instance. - - #. Test connectivity to the project router: - - .. code-block:: console - - $ ping -c 4 192.168.1.1 - PING 192.168.1.1 (192.168.1.1) 56(84) bytes of data. - 64 bytes from 192.168.1.1: icmp_req=1 ttl=64 time=0.357 ms - 64 bytes from 192.168.1.1: icmp_req=2 ttl=64 time=0.473 ms - 64 bytes from 192.168.1.1: icmp_req=3 ttl=64 time=0.504 ms - 64 bytes from 192.168.1.1: icmp_req=4 ttl=64 time=0.470 ms - - --- 192.168.1.1 ping statistics --- - 4 packets transmitted, 4 received, 0% packet loss, time 2998ms - rtt min/avg/max/mdev = 0.357/0.451/0.504/0.055 ms - - #. Test connectivity to the Internet: - - .. code-block:: console - - $ ping -c 4 openstack.org - PING openstack.org (174.143.194.225) 56(84) bytes of data. - 64 bytes from 174.143.194.225: icmp_req=1 ttl=53 time=17.4 ms - 64 bytes from 174.143.194.225: icmp_req=2 ttl=53 time=17.5 ms - 64 bytes from 174.143.194.225: icmp_req=3 ttl=53 time=17.7 ms - 64 bytes from 174.143.194.225: icmp_req=4 ttl=53 time=17.5 ms - - --- openstack.org ping statistics --- - 4 packets transmitted, 4 received, 0% packet loss, time 3003ms - rtt min/avg/max/mdev = 17.431/17.575/17.734/0.143 ms - -#. Create the appropriate security group rules to allow ping and SSH access - to the instance. For example: - - .. 
code-block:: console - - $ nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0 - - +-------------+-----------+---------+-----------+--------------+ - | IP Protocol | From Port | To Port | IP Range | Source Group | - +-------------+-----------+---------+-----------+--------------+ - | icmp | -1 | -1 | 0.0.0.0/0 | | - +-------------+-----------+---------+-----------+--------------+ - - $ nova secgroup-add-rule default tcp 22 22 0.0.0.0/0 - - +-------------+-----------+---------+-----------+--------------+ - | IP Protocol | From Port | To Port | IP Range | Source Group | - +-------------+-----------+---------+-----------+--------------+ - | tcp | 22 | 22 | 0.0.0.0/0 | | - +-------------+-----------+---------+-----------+--------------+ - -#. Create a floating IP address on the external network: - - .. code-block:: console - - $ neutron floatingip-create ext-net - - +---------------------+--------------------------------------+ - | Field | Value | - +---------------------+--------------------------------------+ - | fixed_ip_address | | - | floating_ip_address | 203.0.113.102 | - | floating_network_id | e5f9be2f-3332-4f2d-9f4d-7f87a5a7692e | - | id | 77cf2a36-6c90-4941-8e62-d48a585de050 | - | port_id | | - | router_id | | - | status | DOWN | - | tenant_id | 443cd1596b2e46d49965750771ebbfe1 | - +---------------------+--------------------------------------+ - -#. Associate the floating IP address with the instance: - - .. code-block:: console - - $ nova floating-ip-associate demo-instance1 203.0.113.102 - -#. Verify addition of the floating IP address to the instance: - - .. code-block:: console - - $ nova list - - +--------------------------------------+----------------+--------+------------+-------------+-----------------------------------------+ - | ID | Name | Status | Task State | Power State | Networks | - +--------------------------------------+----------------+--------+------------+-------------+-----------------------------------------+ - | 05682b91-81a1-464c-8f40-8b3da7ee92c5 | demo-instance1 | ACTIVE | - | Running | demo-net=192.168.1.3, 203.0.113.102 | - +--------------------------------------+----------------+--------+------------+-------------+-----------------------------------------+ - -#. On the controller node or any host with access to the external network, - ping the floating IP address associated with the instance: - - .. code-block:: console - - $ ping -c 4 203.0.113.102 - PING 203.0.113.102 (203.0.113.112) 56(84) bytes of data. - 64 bytes from 203.0.113.102: icmp_req=1 ttl=63 time=3.18 ms - 64 bytes from 203.0.113.102: icmp_req=2 ttl=63 time=0.981 ms - 64 bytes from 203.0.113.102: icmp_req=3 ttl=63 time=1.06 ms - 64 bytes from 203.0.113.102: icmp_req=4 ttl=63 time=0.929 ms - - --- 203.0.113.102 ping statistics --- - 4 packets transmitted, 4 received, 0% packet loss, time 3002ms - rtt min/avg/max/mdev = 0.929/1.539/3.183/0.951 ms diff --git a/doc/networking-guide/source/scenario-classic-mt.rst b/doc/networking-guide/source/scenario-classic-mt.rst deleted file mode 100644 index 2a19f41724..0000000000 --- a/doc/networking-guide/source/scenario-classic-mt.rst +++ /dev/null @@ -1,753 +0,0 @@ -.. _scenario-classic-mt: - -============================== -Scenario: Classic with Macvtap -============================== - -This scenario describes a classic implementation of the OpenStack -Networking service using the ML2 plug-in with Macvtap. 
- -The classic implementation contributes the networking portion of self-service -virtual data center infrastructure by providing a method for regular -(non-privileged) users to manage virtual networks within a project and -includes the following components: - -* Project (tenant) networks - - Project networks provide connectivity to instances for a particular - project. Regular (non-privileged) users can manage project networks - within the allocation that an administrator or operator defines for - them. Project networks can use VLAN transport methods. Project - networks generally use private IP address ranges (RFC1918) and lack - connectivity to external networks such as the Internet. Networking - refers to IP addresses on project networks as *fixed* IP addresses. - -* Provider networks - - Provider networks provide connectivity like project networks. - But only administrative (privileged) users can manage those - networks because they interface with the physical network infrastructure. - Provider networks can use flat or VLAN transport methods depending on the - physical network infrastructure. Like project networks, provider networks - typically use private IP address ranges. - - .. note:: - - A flat network essentially uses the untagged or native VLAN. Similar to - layer-2 properties of physical networks, only one flat network can exist - per external bridge. In most cases, production deployments should use - VLAN transport for provider networks. - -The example configuration creates one flat external network and one VLAN -project network. However, this configuration also supports VLAN external -networks. The Macvtap agent does not support VXLAN and GRE project networks. - -Known limitations -~~~~~~~~~~~~~~~~~ - -* Security Group is not supported. You must configure the NoopFirewallDriver as - described below. - -* Address Resolution Protocol (ARP) spoofing filtering is not supported. - -* Only compute resources can be attached via macvtap. Attaching other - resources like DHCP, Routers and others is not supported. Therefore run - either OVS or linux bridge in VLAN or flat mode on the controller node. - -* Migration requires the same ``physical_interface_mapping`` configuration on - each host. It is not recommended to use different mappings, like - node1 uses ``physnet1:eth1`` but node2 uses ``physnet1:eth2``. Having - different mappings could - - * cause migration to fail, if the interface configured on the source node - does not exist on the target node - - * result in an instance placed on the wrong physical network, if the - interface used on the source node exists on the target node, but is used - by another physical network or not used at all by OpenStack. Such an - instance does not have access to its configured networks anymore. - It then has layer 2 connectivity to either another OpenStack network, or - one of the hosts networks. - - .. note:: - - To get around those problems, make sure that your - ``physical_interface_mapping`` is in sync between all nodes using the - macvtap agent. This `bug - `_ tracks the progress - on overcoming this limitation. - -* Only centralized routing on the network node is supported (using either - the Open vSwitch or the Linux bridge agent). DVR is NOT supported. - -* GRE (Generic Routing Encapsulation) and VXLAN (Virtual Extensible LAN) are - not supported. - -Prerequisites -~~~~~~~~~~~~~ - -These prerequisites define the minimal physical infrastructure and immediate -OpenStack service dependencies necessary to deploy this scenario. 
For example, -the Networking service immediately depends on the Identity service and the -Compute service immediately depends on the Networking service. These -dependencies lack services such as the Image service because the Networking -service does not immediately depend on it. However, the Compute service -depends on the Image service to launch an instance. The example configuration -in this scenario assumes basic configuration knowledge of Networking service -components. - -Infrastructure --------------- - -#. One controller node with one network interface: management. -#. One network node with three network interfaces: management, VLAN project - networks, and external (typically the Internet). -#. At least one compute node with two network interfaces: management, - and VLAN project networks. - -To improve understanding of network traffic flow, the network and compute -nodes contain a separate network interface for VLAN project networks. In -production environments, you can use any network interface for VLAN project -networks. - -In the example configuration, the management network uses 10.0.0.0/24, -and the external network uses 203.0.113.0/24. The VLAN network does not -require an IP address range because it only handles layer-2 connectivity. - -.. image:: figures/scenario-classic-hw.png - :alt: Hardware layout - -.. image:: figures/scenario-classic-mt-networks.png - :alt: Network layout - -.. image:: figures/scenario-classic-mt-services.png - :alt: Service layout - -OpenStack services - controller node ------------------------------------- - -#. Operational SQL server with ``neutron`` database and appropriate - configuration in the ``/etc/neutron/neutron.conf`` file -#. Operational message queue service with appropriate configuration - in the ``/etc/neutron/neutron.conf`` file -#. Operational OpenStack Identity service with appropriate configuration - in the ``/etc/neutron/neutron.conf`` file -#. Operational OpenStack Compute controller/management service with - appropriate configuration to use neutron in the ``/etc/nova/nova.conf`` file -#. Neutron server service, ML2 plug-in, and any dependencies - -OpenStack services - network node ---------------------------------- - -#. Operational OpenStack Identity service with appropriate configuration - in the ``/etc/neutron/neutron.conf`` file -#. Linux bridge or Open vSwitch agent, L3 agent, DHCP agent, metadata agent, - and any dependencies - -OpenStack services - compute nodes ----------------------------------- - -#. Operational OpenStack Identity service with appropriate configuration - in the ``/etc/neutron/neutron.conf`` file -#. Operational OpenStack Compute controller/management service with - appropriate configuration to use neutron in the ``/etc/nova/nova.conf`` file -#. Macvtap agent and any dependencies - -Architecture -~~~~~~~~~~~~ - -The classic architecture provides basic virtual networking components in -your environment. Routing among project and external networks resides -completely on the network node. Although more simple to deploy than -other architectures, performing all functions on the network node -creates a single point of failure and potential performance issues. -Consider deploying DVR or L3 HA architectures in production environments -to provide redundancy and increase performance. Note that the DVR architecture -requires Open vSwitch. - -.. image:: figures/scenario-classic-mt.png - :alt: Architecture overview - -The network node contains the following network components: - -#. 
See :ref:`scenario-classic-ovs` or :ref:`scenario-classic-lb` - -The compute nodes contain the following network components: - -#. Macvtap agent managing the virtual server attachments and interaction - with underlying interfaces. - -.. image:: figures/scenario-classic-mt-compute1.png - :alt: Compute node components - overview - -.. image:: figures/scenario-classic-mt-compute2.png - :alt: Compute node components - connectivity - -Packet flow -~~~~~~~~~~~ - -.. note:: - - *North-south* network traffic travels between an instance and - external network, typically the Internet. *East-west* network - traffic travels between instances. - -Case 1: North-south for instances with a fixed IP address ---------------------------------------------------------- - -For instances with a fixed IP address, the network node routes *north-south* -network traffic between project and external networks. - -* External network - - * Network 203.0.113.0/24 - * IP address allocation from 203.0.113.101 to 203.0.113.200 - * Project network router interface 203.0.113.101 *TR* - -* Project network - - * Network 192.168.1.0/24 - * Gateway 192.168.1.1 with MAC address *TG* - -* Compute node 1 - - * Instance 1 192.168.1.11 with MAC address *I1* - -* Instance 1 resides on compute node 1 and uses a project network. -* The instance sends a packet to a host on the external network. - -The following steps involve compute node 1: - -#. For VLAN project networks: - - #. The instance 1 ``macvtap`` interface forwards the packet to the logical - VLAN interface ``device.sid`` where *device* references the underlying - physical VLAN interface and *sid* contains the project network - segmentation ID. The packet contains destination MAC address *TG* - because the destination resides on another network. - #. The logical VLAN interface ``device.sid`` forwards the packet to the - network node via the physical VLAN interface. - -The following steps involve the network node: - -#. For VLAN project networks: - - As the network node runs either the linuxbridge or the OVS agent, it is - like a black box for macvtap. For more information on network node scenario - see :ref:`scenario-classic-ovs` or :ref:`scenario-classic-lb` - -.. note:: - - Return traffic follows similar steps in reverse. - -.. image:: figures/scenario-classic-mt-flowns1.png - :alt: Network traffic flow - north/south with fixed IP address - -Case 2: North-south for instances with a floating IP address ------------------------------------------------------------- - -For instances with a floating IP address, the network node routes -*north-south* network traffic between project and external networks. - -The network node runs either linuxbridge agent or OVS agent. Therefore, for -macvtap, floating IP behaves like in the fixed IP address scenario (Case 1). - -Case 3: East-west for instances on different networks ------------------------------------------------------ - -For instances with a fixed or floating IP address, the network node -routes *east-west* network traffic among project networks using the -same project router. - -* Project network 1 - - * Network: 192.168.1.0/24 - * Gateway: 192.168.1.1 with MAC address *TG1* - -* Project network 2 - - * Network: 192.168.2.0/24 - * Gateway: 192.168.2.1 with MAC address *TG2* - -* Compute node 1 - - * Instance 1: 192.168.1.11 with MAC address *I1* - -* Compute node 2 - - * Instance 2: 192.168.2.11 with MAC address *I2* - -* Instance 1 resides on compute node 1 and uses VLAN project network 1. 
-* Instance 2 resides on compute node 2 and uses VLAN project network 2. -* Both project networks reside on the same router. -* Instance 1 sends a packet to instance 2. - -The following steps involve compute node 1: - -#. The instance 1 ``macvtap`` interface forwards the packet to the logical - VLAN interface ``device.sid`` where *device* references the underlying - physical VLAN interface and *sid* contains the project network - segmentation ID. The packet contains destination MAC address *TG* - because the destination resides on another network. -#. The logical VLAN interface ``device.sid`` forwards the packet to the - network node via the physical VLAN interface. - -The following steps involve the network node: - -#. As the network node runs either the linuxbridge or the OVS agent, it is - like a black box for macvtap. For more information on network node scenario - see :ref:`scenario-classic-ovs` or :ref:`scenario-classic-lb` - -The following steps involve compute node 2: - -#. The physical VLAN interface forwards the packet to the logical VLAN - interface ``vlan.sid`` where *sid* contains the project network - segmentation ID. -#. The logical VLAN interface ``vlan.sid`` forwards the packet to the - ``macvtap`` interface on instance 2. - -.. note:: - - Return traffic follows similar steps in reverse. - -.. image:: figures/scenario-classic-mt-flowew1.png - :alt: Network traffic flow - east/west for instances on different networks - -Case 4: East-west for instances on the same network ---------------------------------------------------- - -For instances with a fixed or floating IP address, the project network -switches *east-west* network traffic among instances without using a -project router on the network node. - -* Project network - - * Network: 192.168.1.0/24 - -* Compute node 1 - - * Instance 1: 192.168.1.11 with MAC address *I1* - -* Compute node 2 - - * Instance 2: 192.168.1.12 with MAC address *I2* - -* Instance 1 resides on compute node 1. -* Instance 2 resides on compute node 2. -* Both instances use the same VLAN project network. -* Instance 1 sends a packet to instance 2. -* The Macvtap agent handles switching within the project network. - -The following steps involve compute node 1: - -#. The instance 1 ``macvtap`` interface forwards the packet to the logical - VLAN interface ``device.sid`` where *device* references the underlying - physical VLAN interface and *sid* contains the project network - segmentation ID. The packet contains destination MAC address *I2* - because the destination resides on the same network. -#. The logical VLAN interface ``device.sid`` forwards the packet to the - compute node 2 via the physical VLAN interface. - -The following steps involve compute node 2: - -#. The physical VLAN interface forwards the packet to the logical VLAN - interface ``vlan.sid`` where *sid* contains the project network - segmentation ID. -#. The logical VLAN interface ``vlan.sid`` forwards the packet to the - ``macvtap`` interface on instance 2. - -.. note:: - - Return traffic follows similar steps in reverse. - -.. image:: figures/scenario-classic-mt-flowew2.png - :alt: Network traffic flow - east/west for instances on the same network - -Example configuration -~~~~~~~~~~~~~~~~~~~~~ - -Use the following example configuration as a template to deploy this -scenario in your environment. - -Controller node ---------------- - -#. In the ``neutron.conf`` file: - - * Configure common options: - - .. 
code-block:: ini - - [DEFAULT] - core_plugin = ml2 - service_plugins = router - allow_overlapping_ips = True - - * If necessary, :ref:`configure MTU `. - -#. In the ``ml2_conf.ini`` file: - - * Configure drivers and network types: - - .. code-block:: ini - - [ml2] - type_drivers = flat,vlan - tenant_network_types = vlan - mechanism_drivers = linuxbridge,macvtap - extension_drivers = port_security - - * Configure network mappings and ID ranges: - - .. code-block:: ini - - [ml2_type_flat] - flat_networks = external - - [ml2_type_vlan] - network_vlan_ranges = external,vlan:MIN_VLAN_ID:MAX_VLAN_ID - - Replace ``MIN_VLAN_ID`` and ``MAX_VLAN_ID`` with VLAN ID minimum and - maximum values suitable for your environment. - - .. note:: - - The ``external`` value in the ``network_vlan_ranges`` option lacks VLAN - ID ranges to support use of arbitrary VLAN IDs by administrative users. - - * Configure the security group driver: - - .. code-block:: ini - - [securitygroup] - firewall_driver = iptables - - * If necessary, :ref:`configure MTU `. - -#. Start the following services: - - * Server - -Network node ------------- - -#. The controller node runs either the linuxbridge or the OVS agent. For more - information see :ref:`scenario-classic-ovs` or :ref:`scenario-classic-lb` - -Compute nodes -------------- - -#. Edit the ``macvtap_agent.ini`` file: - - .. code-block:: ini - - [macvtap] - physical_interface_mappings = vlan:PROJECT_VLAN_INTERFACE - - [securitygroup] - firewall_driver = noop - - Replace ``PROJECT_VLAN_INTERFACE`` with the name of the underlying - interface that handles VLAN project networks and external networks, - respectively. - -#. Start the following services: - - * Macvtap agent - -Verify service operation ------------------------- - -#. Source the administrative project credentials. -#. Verify presence and operation of the agents: - - .. code-block:: console - - $ neutron agent-list - - +--------------------------------------+--------------------+-------------+-------+----------------+---------------------------+ - | id | agent_type | host | alive | admin_state_up | binary | - +--------------------------------------+--------------------+-------------+-------+----------------+---------------------------+ - | 0146e482-f94a-4996-9e2a-f0cafe2575c5 | L3 agent | network1 | :-) | True | neutron-l3-agent | - | 0dd4af0d-aafd-4036-b240-db12cf2a1aa9 | Macvtap agent | compute2 | :-) | True | neutron-macvtap-agent | - | 2f9e5434-575e-4079-bcca-5e559c0a5ba7 | Linux bridge agent | network1 | :-) | True | neutron-linuxbridge-agent | - | 4105fd85-7a8f-4956-b104-26a600670530 | Macvtap agent | compute1 | :-) | True | neutron-macvtap-agent | - | 8c15992a-3abc-4b14-aebc-60065e5090e6 | Metadata agent | network1 | :-) | True | neutron-metadata-agent | - | aa2e8f3e-b53e-4fb9-8381-67dcad74e940 | DHCP agent | network1 | :-) | True | neutron-dhcp-agent | - +--------------------------------------+--------------------+-------------+-------+----------------+---------------------------+ - -Create initial networks ------------------------ - -This example creates a flat external network and a VLAN project network. - -#. Source the administrative project credentials. -#. Create the external network: - - .. 
code-block:: console - - $ neutron net-create ext-net --router:external True \ - --provider:physical_network external --provider:network_type flat - - Created a new network: - +---------------------------+--------------------------------------+ - | Field | Value | - +---------------------------+--------------------------------------+ - | admin_state_up | True | - | id | d57703fd-5571-404c-abca-f59a13f3c507 | - | name | ext-net | - | provider:network_type | flat | - | provider:physical_network | external | - | provider:segmentation_id | | - | router:external | True | - | shared | False | - | status | ACTIVE | - | subnets | | - | tenant_id | 897d7360ac3441209d00fbab5f0b5c8b | - +---------------------------+--------------------------------------+ - -#. Create a subnet on the external network: - - .. code-block:: console - - $ neutron subnet-create ext-net --name ext-subnet --allocation-pool \ - start=203.0.113.101,end=203.0.113.200 --disable-dhcp \ - --gateway 203.0.113.1 203.0.113.0/24 - - Created a new subnet: - +-------------------+----------------------------------------------------+ - | Field | Value | - +-------------------+----------------------------------------------------+ - | allocation_pools | {"start": "203.1.113.101", "end": "203.0.113.200"} | - | cidr | 201.0.113.0/24 | - | dns_nameservers | | - | enable_dhcp | False | - | gateway_ip | 203.0.113.1 | - | host_routes | | - | id | 020bb28d-0631-4af2-aa97-7374d1d33557 | - | ip_version | 4 | - | ipv6_address_mode | | - | ipv6_ra_mode | | - | name | ext-subnet | - | network_id | d57703fd-5571-404c-abca-f59a13f3c507 | - | tenant_id | 897d7360ac3441209d00fbab5f0b5c8b | - +-------------------+----------------------------------------------------+ - -#. Create the project network: - -#. Source the regular project credentials. The following steps use the - ``demo`` project. - - .. code-block:: console - - $ neutron net-create demo-net - - Created a new network: - +---------------------------+--------------------------------------+ - | Field | Value | - +---------------------------+--------------------------------------+ - | admin_state_up | True | - | id | 3a0663f6-9d5d-415e-91f2-0f1bfefbe5ed | - | name | demo-net | - | provider:network_type | vlan | - | provider:physical_network | | - | provider:segmentation_id | 1 | - | router:external | False | - | shared | False | - | status | ACTIVE | - | subnets | | - | tenant_id | 8dbcb34c59a741b18e71c19073a47ed5 | - +---------------------------+--------------------------------------+ - -#. Create a subnet on the project network: - - .. code-block:: console - - $ neutron subnet-create demo-net --name demo-subnet --gateway 192.168.1.1 \ - 192.168.1.0/24 - - Created a new subnet: - +-------------------+--------------------------------------------------+ - | Field | Value | - +-------------------+--------------------------------------------------+ - | allocation_pools | {"start": "192.168.1.2", "end": "192.168.1.254"} | - | cidr | 192.168.1.0/24 | - | dns_nameservers | | - | enable_dhcp | True | - | gateway_ip | 192.168.1.1 | - | host_routes | | - | id | 1d5ab804-8925-46b0-a7b4-e520dc247284 | - | ip_version | 4 | - | ipv6_address_mode | | - | ipv6_ra_mode | | - | name | demo-subnet | - | network_id | 3a0663f6-9d5d-415e-91f2-0f1bfefbe5ed | - | tenant_id | 8dbcb34c59a741b18e71c19073a47ed5 | - +-------------------+--------------------------------------------------+ - -#. Create a project router: - - .. 
code-block:: console - - $ neutron router-create demo-router - - +-----------------------+--------------------------------------+ - | Field | Value | - +-----------------------+--------------------------------------+ - | admin_state_up | True | - | external_gateway_info | | - | id | 299b2363-d656-401d-a3a5-55b4378e7fbb | - | name | demo-router | - | routes | | - | status | ACTIVE | - | tenant_id | 8dbcb34c59a741b18e71c19073a47ed5 | - +-----------------------+--------------------------------------+ - -#. Add the project subnet as an interface on the router: - - .. code-block:: console - - $ neutron router-interface-add demo-router demo-subnet - Added interface 4f819fd4-be4d-42ab-bd47-ba1b2cb39006 to router demo-router. - -#. Add a gateway to the external network on the router: - - .. code-block:: console - - $ neutron router-gateway-set demo-router ext-net - Set gateway for router demo-router - -Verify network operation ------------------------- - -#. On the network node, verify creation of the ``qrouter`` and ``qdhcp`` - namespaces: - - .. code-block:: console - - $ ip netns - qdhcp-3a0663f6-9d5d-415e-91f2-0f1bfefbe5ed - qrouter-299b2363-d656-401d-a3a5-55b4378e7fbb - - .. note:: - - The ``qdhcp`` namespace might not exist until launching an instance. - -#. Determine the external network gateway IP address for the project network - on the router, typically the lowest IP address in the external subnet IP - allocation range: - - .. code-block:: console - - $ neutron router-port-list demo-router - - +--------------------------------------+------+-------------------+--------------------------------------------------------------------------------------+ - | id | name | mac_address | fixed_ips | - +--------------------------------------+------+-------------------+--------------------------------------------------------------------------------------+ - | b1a894fd-aee8-475c-9262-4342afdc1b58 | | fa:16:3e:c1:20:55 | {"subnet_id": "1d5ab804-8925-46b0-a7b4-e520dc247284", "ip_address": "192.168.1.1"} | - | ff5f93c6-3760-4902-a401-af78ff61ce99 | | fa:16:3e:54:d7:8c | {"subnet_id": "020bb28d-0631-4af2-aa97-7374d1d33557", "ip_address": "203.0.113.101"} | - +--------------------------------------+------+-------------------+--------------------------------------------------------------------------------------+ - -#. On the controller node or any host with access to the external network, - ping the external network gateway IP address on the project router: - - .. code-block:: console - - $ ping -c 4 203.0.113.101 - PING 203.0.113.101 (203.0.113.101) 56(84) bytes of data. - 64 bytes from 203.0.113.101: icmp_req=1 ttl=64 time=0.619 ms - 64 bytes from 203.0.113.101: icmp_req=2 ttl=64 time=0.189 ms - 64 bytes from 203.0.113.101: icmp_req=3 ttl=64 time=0.165 ms - 64 bytes from 203.0.113.101: icmp_req=4 ttl=64 time=0.216 ms - - --- 203.0.113.101 ping statistics --- - 4 packets transmitted, 4 received, 0% packet loss, time 2999ms - rtt min/avg/max/mdev = 0.165/0.297/0.619/0.187 ms - -#. Source the regular project credentials. The following steps use the - ``demo`` project. -#. Launch an instance with an interface on the project network. -#. Obtain console access to the instance. - - #. Test connectivity to the project router: - - .. code-block:: console - - $ ping -c 4 192.168.1.1 - PING 192.168.1.1 (192.168.1.1) 56(84) bytes of data. 
- 64 bytes from 192.168.1.1: icmp_req=1 ttl=64 time=0.357 ms - 64 bytes from 192.168.1.1: icmp_req=2 ttl=64 time=0.473 ms - 64 bytes from 192.168.1.1: icmp_req=3 ttl=64 time=0.504 ms - 64 bytes from 192.168.1.1: icmp_req=4 ttl=64 time=0.470 ms - - --- 192.168.1.1 ping statistics --- - 4 packets transmitted, 4 received, 0% packet loss, time 2998ms - rtt min/avg/max/mdev = 0.357/0.451/0.504/0.055 ms - - #. Test connectivity to the Internet: - - .. code-block:: console - - $ ping -c 4 openstack.org - PING openstack.org (174.143.194.225) 56(84) bytes of data. - 64 bytes from 174.143.194.225: icmp_req=1 ttl=53 time=17.4 ms - 64 bytes from 174.143.194.225: icmp_req=2 ttl=53 time=17.5 ms - 64 bytes from 174.143.194.225: icmp_req=3 ttl=53 time=17.7 ms - 64 bytes from 174.143.194.225: icmp_req=4 ttl=53 time=17.5 ms - - --- openstack.org ping statistics --- - 4 packets transmitted, 4 received, 0% packet loss, time 3003ms - rtt min/avg/max/mdev = 17.431/17.575/17.734/0.143 ms - -#. Create a floating IP address on the external network: - - .. code-block:: console - - $ neutron floatingip-create ext-net - - +---------------------+--------------------------------------+ - | Field | Value | - +---------------------+--------------------------------------+ - | fixed_ip_address | | - | floating_ip_address | 203.0.113.102 | - | floating_network_id | e5f9be2f-3332-4f2d-9f4d-7f87a5a7692e | - | id | 77cf2a36-6c90-4941-8e62-d48a585de050 | - | port_id | | - | router_id | | - | status | DOWN | - | tenant_id | 443cd1596b2e46d49965750771ebbfe1 | - +---------------------+--------------------------------------+ - -#. Associate the floating IP address with the instance: - - .. code-block:: console - - $ nova floating-ip-associate demo-instance1 203.0.113.102 - -#. Verify addition of the floating IP address to the instance: - - .. code-block:: console - - $ nova list - - +--------------------------------------+----------------+--------+------------+-------------+-----------------------------------------+ - | ID | Name | Status | Task State | Power State | Networks | - +--------------------------------------+----------------+--------+------------+-------------+-----------------------------------------+ - | 05682b91-81a1-464c-8f40-8b3da7ee92c5 | demo-instance1 | ACTIVE | - | Running | demo-net=192.168.1.3, 203.0.113.102 | - +--------------------------------------+----------------+--------+------------+-------------+-----------------------------------------+ - -#. On the controller node or any host with access to the external network, - ping the floating IP address associated with the instance: - - .. code-block:: console - - $ ping -c 4 203.0.113.102 - PING 203.0.113.102 (203.0.113.112) 56(84) bytes of data. - 64 bytes from 203.0.113.102: icmp_req=1 ttl=63 time=3.18 ms - 64 bytes from 203.0.113.102: icmp_req=2 ttl=63 time=0.981 ms - 64 bytes from 203.0.113.102: icmp_req=3 ttl=63 time=1.06 ms - 64 bytes from 203.0.113.102: icmp_req=4 ttl=63 time=0.929 ms - - --- 203.0.113.102 ping statistics --- - 4 packets transmitted, 4 received, 0% packet loss, time 3002ms - rtt min/avg/max/mdev = 0.929/1.539/3.183/0.951 ms diff --git a/doc/networking-guide/source/scenario-classic-ovs.rst b/doc/networking-guide/source/scenario-classic-ovs.rst deleted file mode 100644 index 1b452d1732..0000000000 --- a/doc/networking-guide/source/scenario-classic-ovs.rst +++ /dev/null @@ -1,1133 +0,0 @@ -.. 
_scenario-classic-ovs: - -=================================== -Scenario: Classic with Open vSwitch -=================================== - -This scenario describes a classic implementation of the OpenStack -Networking service using the ML2 plug-in with Open vSwitch (OVS). - -The classic implementation contributes the networking portion of self-service -virtual data center infrastructure by providing a method for regular -(non-privileged) users to manage virtual networks within a project and -includes the following components: - -* Project (tenant) networks - - Project networks provide connectivity to instances for a particular - project. Regular (non-privileged) users can manage project networks - within the allocation that an administrator or operator defines for - them. Project networks can use VLAN, GRE, or VXLAN transport methods - depending on the allocation. Project networks generally use private - IP address ranges (RFC1918) and lack connectivity to external networks - such as the Internet. Networking refers to IP addresses on project - networks as *fixed* IP addresses. - -* External networks - - External networks provide connectivity to external networks such as - the Internet. Only administrative (privileged) users can manage external - networks because they interface with the physical network infrastructure. - External networks can use flat or VLAN transport methods depending on the - physical network infrastructure and generally use public IP address - ranges. - - .. note:: - - A flat network essentially uses the untagged or native VLAN. Similar to - layer-2 properties of physical networks, only one flat network can exist - per external bridge. In most cases, production deployments should use - VLAN transport for external networks. - -* Routers - - Routers typically connect project and external networks. By default, they - implement SNAT to provide outbound external connectivity for instances on - project networks. Each router uses an IP address in the external network - allocation for SNAT. Routers also use DNAT to provide inbound external - connectivity for instances on project networks. Networking refers to IP - addresses on routers that provide inbound external connectivity for - instances on project networks as *floating* IP addresses. Routers can also - connect project networks that belong to the same project. - -* Supporting services - - Other supporting services include DHCP and metadata. The DHCP service - manages IP addresses for instances on project networks. The metadata - service provides an API for instances on project networks to obtain - metadata such as SSH keys. - -The example configuration creates one flat external network and one VXLAN -project (tenant) network. However, this configuration also supports VLAN -external networks, VLAN project networks, and GRE project networks. - -Prerequisites -~~~~~~~~~~~~~ - -These prerequisites define the minimal physical infrastructure and immediate -OpenStack service dependencies necessary to deploy this scenario. For example, -the Networking service immediately depends on the Identity service and the -Compute service immediately depends on the Networking service. These -dependencies lack services such as the Image service because the Networking -service does not immediately depend on it. However, the Compute service -depends on the Image service to launch an instance. The example configuration -in this scenario assumes basic configuration knowledge of Networking service -components. - -Infrastructure --------------- - -#. 
One controller node with one network interface: management. -#. One network node with four network interfaces: management, project tunnel - networks, VLAN project networks, and external (typically the Internet). - The Open vSwitch bridge ``br-vlan`` must contain a port on the VLAN - interface and Open vSwitch bridge ``br-ex`` must contain a port on the - external interface. -#. At least one compute node with three network interfaces: management, - project tunnel networks, and VLAN project networks. The Open vSwitch - bridge ``br-vlan`` must contain a port on the VLAN interface. - -To improve understanding of network traffic flow, the network and compute -nodes contain a separate network interface for VLAN project networks. In -production environments, VLAN project networks can use any Open vSwitch -bridge with access to a network interface. For example, the ``br-tun`` -bridge. - -In the example configuration, the management network uses 10.0.0.0/24, -the tunnel network uses 10.0.1.0/24, and the external network uses -203.0.113.0/24. The VLAN network does not require an IP address range -because it only handles layer-2 connectivity. - -.. image:: figures/scenario-classic-hw.png - :alt: Hardware layout - -.. image:: figures/scenario-classic-networks.png - :alt: Network layout - -.. image:: figures/scenario-classic-ovs-services.png - :alt: Service layout - -.. note:: - - For VLAN external and project networks, the physical network infrastructure - must support VLAN tagging. For best performance, 10+ Gbps networks should - support jumbo frames. - -.. warning:: - - Linux distributions often package older releases of Open vSwitch that can - introduce issues during operation with the Networking service. We recommend - using at least the latest long-term stable (LTS) release of Open vSwitch - for the best experience and support from Open vSwitch. See - ``__ for available releases and the - `installation instructions - `__ for - building newer releases from source on various distributions. - - Implementing VXLAN networks requires Linux kernel 3.13 or newer. - -OpenStack services - controller node ------------------------------------- - -#. Operational SQL server with ``neutron`` database and appropriate - configuration in the ``neutron.conf`` file. -#. Operational message queue service with appropriate configuration - in the ``neutron.conf`` file. -#. Operational OpenStack Identity service with appropriate configuration - in the ``neutron.conf`` file. -#. Operational OpenStack Compute controller/management service with - appropriate configuration to use neutron in the - ``nova.conf`` file. -#. Neutron server service, ML2 plug-in, and any dependencies. - -OpenStack services - network node ---------------------------------- - -#. Operational OpenStack Identity service with appropriate configuration - in the ``neutron.conf`` file. -#. Open vSwitch service, Open vSwitch agent, L3 agent, DHCP agent, metadata - agent, and any dependencies. - -OpenStack services - compute nodes ----------------------------------- - -#. Operational OpenStack Identity service with appropriate configuration - in the ``neutron.conf`` file. -#. Operational OpenStack Compute controller/management service with - appropriate configuration to use neutron in the ``nova.conf`` file. -#. Open vSwitch service, Open vSwitch agent, and any dependencies. - -Architecture -~~~~~~~~~~~~ - -The classic architecture provides basic virtual networking components in -your environment. 
Routing among project and external networks resides -completely on the network node. Although more simple to deploy than -other architectures, performing all functions on the network node -creates a single point of failure and potential performance issues. -Consider deploying DVR or L3 HA architectures in production environments -to provide redundancy and increase performance. - -.. image:: figures/scenario-classic-general.png - :alt: Architecture overview - -The network node contains the following network components: - -#. Open vSwitch agent managing virtual switches, connectivity among - them, and interaction via virtual ports with other network components - such as namespaces, Linux bridges, and underlying interfaces. -#. DHCP agent managing the ``qdhcp`` namespaces. The ``qdhcp`` namespaces - provide DHCP services for instances using project networks. -#. L3 agent managing the ``qrouter`` namespaces. The ``qrouter`` namespaces - provide routing between project and external networks and among project - networks. They also route metadata traffic between instances and the - metadata agent. -#. Metadata agent handling metadata operations for instances. - -.. image:: figures/scenario-classic-ovs-network1.png - :alt: Network node components - overview - -.. image:: figures/scenario-classic-ovs-network2.png - :alt: Network node components - connectivity - -The compute nodes contain the following network components: - -#. Open vSwitch agent managing virtual switches, connectivity among - them, and interaction via virtual ports with other network components - such as namespaces, Linux bridges, and underlying interfaces. -#. Conventional Linux bridges handling security groups. Optionally, a native - OVS implementation can handle security groups. However, due to kernel and - OVS version requirements for it, this scenario uses conventional Linux - bridges. See :ref:`config-ovsfwdriver` for more information. - -.. image:: figures/scenario-classic-ovs-compute1.png - :alt: Compute node components - overview - -.. image:: figures/scenario-classic-ovs-compute2.png - :alt: Compute node components - connectivity - -Packet flow -~~~~~~~~~~~ - -.. note:: - - *North-south* network traffic travels between an instance and - external network, typically the Internet. *East-west* network - traffic travels between instances. - -Case 1: North-south for instances with a fixed IP address ---------------------------------------------------------- - -For instances with a fixed IP address, the network node routes -*north-south* network traffic between project and external networks. - -* External network - - * Network 203.0.113.0/24 - * IP address allocation from 203.0.113.101 to 203.0.113.200 - * Project network router interface 203.0.113.101 *TR* - -* Project network - - * Network 192.168.1.0/24 - * Gateway 192.168.1.1 with MAC address *TG* - -* Compute node 1 - - * Instance 1 192.168.1.11 with MAC address *I1* - -* Instance 1 resides on compute node 1 and uses a project network. -* The instance sends a packet to a host on the external network. - -The following steps involve compute node 1: - -#. The instance 1 ``tap`` interface (1) forwards the packet to the Linux - bridge ``qbr``. The packet contains destination MAC address *TG* - because the destination resides on another network. -#. Security group rules (2) on the Linux bridge ``qbr`` handle state tracking - for the packet. -#. The Linux bridge ``qbr`` forwards the packet to the Open vSwitch - integration bridge ``br-int``. -#. 
The Open vSwitch integration bridge ``br-int`` adds the internal tag for - the project network. -#. For VLAN project networks: - - #. The Open vSwitch integration bridge ``br-int`` forwards the packet to - the Open vSwitch VLAN bridge ``br-vlan``. - #. The Open vSwitch VLAN bridge ``br-vlan`` replaces the internal tag - with the actual VLAN tag of the project network. - #. The Open vSwitch VLAN bridge ``br-vlan`` forwards the packet to the - network node via the VLAN interface. - -#. For VXLAN and GRE project networks: - - #. The Open vSwitch integration bridge ``br-int`` forwards the packet to - the Open vSwitch tunnel bridge ``br-tun``. - #. The Open vSwitch tunnel bridge ``br-tun`` wraps the packet in a VXLAN - or GRE tunnel and adds a tag to identify the project network. - #. The Open vSwitch tunnel bridge ``br-tun`` forwards the packet to the - network node via the tunnel interface. - -The following steps involve the network node: - -#. For VLAN project networks: - - #. The VLAN interface forwards the packet to the Open vSwitch VLAN - bridge ``br-vlan``. - #. The Open vSwitch VLAN bridge ``br-vlan`` forwards the packet to the - Open vSwitch integration bridge ``br-int``. - #. The Open vSwitch integration bridge ``br-int`` replaces the actual - VLAN tag of the project network with the internal tag. - -#. For VXLAN and GRE project networks: - - #. The tunnel interface forwards the packet to the Open vSwitch tunnel - bridge ``br-tun``. - #. The Open vSwitch tunnel bridge ``br-tun`` unwraps the packet and adds - the internal tag for the project network. - #. The Open vSwitch tunnel bridge ``br-tun`` forwards the packet to the - Open vSwitch integration bridge ``br-int``. - -#. The Open vSwitch integration bridge ``br-int`` forwards the packet to - the ``qr`` interface (3) in the router namespace ``qrouter``. The ``qr`` - interface contains the project network gateway IP address *TG*. -#. The *iptables* service (4) performs SNAT on the packet using the ``qg`` - interface (5) as the source IP address. The ``qg`` interface contains - the project network router interface IP address *TR*. -#. The router namespace ``qrouter`` forwards the packet to the Open vSwitch - integration bridge ``br-int`` via the ``qg`` interface. -#. The Open vSwitch integration bridge ``br-int`` forwards the packet to - the Open vSwitch external bridge ``br-ex``. -#. The Open vSwitch external bridge ``br-ex`` forwards the packet to the - external network via the external interface. - -.. note:: - - Return traffic follows similar steps in reverse. - -.. image:: figures/scenario-classic-ovs-flowns1.png - :alt: Network traffic flow - north/south with fixed IP address - -Case 2: North-south for instances with a floating IP address ------------------------------------------------------------- - -For instances with a floating IP address, the network node routes -*north-south* network traffic between project and external networks. - -* External network - - * Network 203.0.113.0/24 - * IP address allocation from 203.0.113.101 to 203.0.113.200 - * Project network router interface 203.0.113.101 *TR* - -* Project network - - * Network 192.168.1.0/24 - * Gateway 192.168.1.1 with MAC address *TG* - -* Compute node 1 - - * Instance 1 192.168.1.11 with MAC address *I1* and floating - IP address 203.0.113.102 *F1* - -* Instance 1 resides on compute node 1 and uses a project network. -* The instance receives a packet from a host on the external network. - -The following steps involve the network node: - -#. 
The external interface forwards the packet to the Open vSwitch external - bridge ``br-ex``. -#. The Open vSwitch external bridge ``br-ex`` forwards the packet to the - Open vSwitch integration bridge ``br-int``. -#. The Open vSwitch integration bridge forwards the packet to the ``qg`` - interface (1) in the router namespace ``qrouter``. The ``qg`` interface - contains the instance 1 floating IP address *F1*. -#. The *iptables* service (2) performs DNAT on the packet using the ``qr`` - interface (3) as the source IP address. The ``qr`` interface contains - the project network router interface IP address *TR1*. -#. The router namespace ``qrouter`` forwards the packet to the Open vSwitch - integration bridge ``br-int``. -#. The Open vSwitch integration bridge ``br-int`` adds the internal tag for - the project network. -#. For VLAN project networks: - - #. The Open vSwitch integration bridge ``br-int`` forwards the packet to - the Open vSwitch VLAN bridge ``br-vlan``. - #. The Open vSwitch VLAN bridge ``br-vlan`` replaces the internal tag - with the actual VLAN tag of the project network. - #. The Open vSwitch VLAN bridge ``br-vlan`` forwards the packet to the - compute node via the VLAN interface. - -#. For VXLAN and GRE project networks: - - #. The Open vSwitch integration bridge ``br-int`` forwards the packet to - the Open vSwitch tunnel bridge ``br-tun``. - #. The Open vSwitch tunnel bridge ``br-tun`` wraps the packet in a VXLAN - or GRE tunnel and adds a tag to identify the project network. - #. The Open vSwitch tunnel bridge ``br-tun`` forwards the packet to the - compute node via the tunnel interface. - -The following steps involve compute node 1: - -#. For VLAN project networks: - - #. The VLAN interface forwards the packet to the Open vSwitch VLAN - bridge ``br-vlan``. - #. The Open vSwitch VLAN bridge ``br-vlan`` forwards the packet to the - Open vSwitch integration bridge ``br-int``. - #. The Open vSwitch integration bridge ``br-int`` replaces the actual - VLAN tag the project network with the internal tag. - -#. For VXLAN and GRE project networks: - - #. The tunnel interface forwards the packet to the Open vSwitch tunnel - bridge ``br-tun``. - #. The Open vSwitch tunnel bridge ``br-tun`` unwraps the packet and adds - the internal tag for the project network. - #. The Open vSwitch tunnel bridge ``br-tun`` forwards the packet to the - Open vSwitch integration bridge ``br-int``. - -#. The Open vSwitch integration bridge ``br-int`` forwards the packet to - the Linux bridge ``qbr``. -#. Security group rules (4) on the Linux bridge ``qbr`` handle firewalling - and state tracking for the packet. -#. The Linux bridge ``qbr`` forwards the packet to the ``tap`` interface (5) - on instance 1. - -.. note:: - - Return traffic follows similar steps in reverse. - -.. image:: figures/scenario-classic-ovs-flowns2.png - :alt: Network traffic flow - north/south with floating IP address - -Case 3: East-west for instances on different networks ------------------------------------------------------ - -For instances with a fixed or floating IP address, the network node -routes *east-west* network traffic among project networks using the -same project router. 
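A quick way to see this on the network node is to look inside the router
namespace: its routing table should list both project subnets as directly
connected via separate ``qr`` interfaces. The namespace name below is a
placeholder; substitute the UUID reported by ``ip netns`` in your
environment.

.. code-block:: console

   $ ip netns exec qrouter-ROUTER_UUID ip route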
- -* Project network 1 - - * Network: 192.168.1.0/24 - * Gateway: 192.168.1.1 with MAC address *TG1* - -* Project network 2 - - * Network: 192.168.2.0/24 - * Gateway: 192.168.2.1 with MAC address *TG2* - -* Compute node 1 - - * Instance 1: 192.168.1.11 with MAC address *I1* - -* Compute node 2 - - * Instance 2: 192.168.2.11 with MAC address *I2* - -* Instance 1 resides on compute node 1 and uses project network 1. -* Instance 2 resides on compute node 2 and uses project network 2. -* Both project networks reside on the same router. -* Instance 1 sends a packet to instance 2. - -The following steps involve compute node 1: - -#. The instance 1 ``tap`` interface (1) forwards the packet to the Linux - bridge ``qbr``. The packet contains destination MAC address *TG1* - because the destination resides on another network. -#. Security group rules (2) on the Linux bridge ``qbr`` handle state tracking - for the packet. -#. The Linux bridge ``qbr`` forwards the packet to the Open vSwitch - integration bridge ``br-int``. -#. The Open vSwitch integration bridge ``br-int`` adds the internal tag for - project network 1. -#. For VLAN project networks: - - #. The Open vSwitch integration bridge ``br-int`` forwards the packet to - the Open vSwitch VLAN bridge ``br-vlan``. - #. The Open vSwitch VLAN bridge ``br-vlan`` replaces the internal tag - with the actual VLAN tag of project network 1. - #. The Open vSwitch VLAN bridge ``br-vlan`` forwards the packet to the - network node via the VLAN interface. - -#. For VXLAN and GRE project networks: - - #. The Open vSwitch integration bridge ``br-int`` forwards the packet to - the Open vSwitch tunnel bridge ``br-tun``. - #. The Open vSwitch tunnel bridge ``br-tun`` wraps the packet in a VXLAN - or GRE tunnel and adds a tag to identify project network 1. - #. The Open vSwitch tunnel bridge ``br-tun`` forwards the packet to the - network node via the tunnel interface. - -The following steps involve the network node: - -#. For VLAN project networks: - - #. The VLAN interface forwards the packet to the Open vSwitch VLAN - bridge ``br-vlan``. - #. The Open vSwitch VLAN bridge ``br-vlan`` forwards the packet to the - Open vSwitch integration bridge ``br-int``. - #. The Open vSwitch integration bridge ``br-int`` replaces the actual - VLAN tag of project network 1 with the internal tag. - -#. For VXLAN and GRE project networks: - - #. The tunnel interface forwards the packet to the Open vSwitch tunnel - bridge ``br-tun``. - #. The Open vSwitch tunnel bridge ``br-tun`` unwraps the packet and adds - the internal tag for project network 1. - #. The Open vSwitch tunnel bridge ``br-tun`` forwards the packet to the - Open vSwitch integration bridge ``br-int``. - -#. The Open vSwitch integration bridge ``br-int`` forwards the packet to - the ``qr-1`` interface (3) in the router namespace ``qrouter``. The ``qr-1`` - interface contains the project network 1 gateway IP address *TG1*. -#. The router namespace ``qrouter`` routes the packet to the ``qr-2`` interface - (4). The ``qr-2`` interface contains the project network 2 gateway IP - address *TG2*. -#. The router namespace ``qrouter`` forwards the packet to the Open vSwitch - integration bridge ``br-int``. -#. The Open vSwitch integration bridge ``br-int`` adds the internal tag for - project network 2. -#. For VLAN project networks: - - #. The Open vSwitch integration bridge ``br-int`` forwards the packet to - the Open vSwitch VLAN bridge ``br-vlan``. - #. 
The Open vSwitch VLAN bridge ``br-vlan`` replaces the internal tag - with the actual VLAN tag of project network 2. - #. The Open vSwitch VLAN bridge ``br-vlan`` forwards the packet to compute - node 2 via the VLAN interface. - -#. For VXLAN and GRE project networks: - - #. The Open vSwitch integration bridge ``br-int`` forwards the packet to - the Open vSwitch tunnel bridge ``br-tun``. - #. The Open vSwitch tunnel bridge ``br-tun`` wraps the packet in a VXLAN - or GRE tunnel and adds a tag to identify project network 2. - #. The Open vSwitch tunnel bridge ``br-tun`` forwards the packet to - compute node 2 via the tunnel interface. - -The following steps involve compute node 2: - -#. For VLAN project networks: - - #. The VLAN interface forwards the packet to the Open vSwitch VLAN - bridge ``br-vlan``. - #. The Open vSwitch VLAN bridge ``br-vlan`` forwards the packet to the - Open vSwitch integration bridge ``br-int``. - #. The Open vSwitch integration bridge ``br-int`` replaces the actual - VLAN tag of project network 2 with the internal tag. - -#. For VXLAN and GRE project networks: - - #. The tunnel interface forwards the packet to the Open vSwitch tunnel - bridge ``br-tun``. - #. The Open vSwitch tunnel bridge ``br-tun`` unwraps the packet and adds - the internal tag for project network 2. - #. The Open vSwitch tunnel bridge ``br-tun`` forwards the packet to the - Open vSwitch integration bridge ``br-int``. - -#. The Open vSwitch integration bridge ``br-int`` forwards the packet to - the Linux bridge ``qbr``. -#. Security group rules (5) on the Linux bridge ``qbr`` handle firewalling - and state tracking for the packet. -#. The Linux bridge ``qbr`` forwards the packet to the ``tap`` interface (6) - on instance 2. - -.. note:: - - Return traffic follows similar steps in reverse. - -.. image:: figures/scenario-classic-ovs-flowew1.png - :alt: Network traffic flow - east/west for instances on different networks - -Case 4: East-west for instances on the same network --------------------------------------------------- - -For instances with a fixed or floating IP address, the project network -switches *east-west* network traffic among instances without using a -project router on the network node. - -* Project network - - * Network: 192.168.1.0/24 - -* Compute node 1 - - * Instance 1: 192.168.1.11 with MAC address *I1* - -* Compute node 2 - - * Instance 2: 192.168.1.12 with MAC address *I2* - -* Instance 1 resides on compute node 1. -* Instance 2 resides on compute node 2. -* Both instances use the same project network. -* Instance 1 sends a packet to instance 2. -* The Open vSwitch agent handles switching within the project network. - -The following steps involve compute node 1: - -#. The instance 1 ``tap`` interface (1) forwards the packet to the Linux - bridge ``qbr``. The packet contains destination MAC address *I2* - because the destination resides on the same network. -#. Security group rules (2) on the Linux bridge ``qbr`` handle - state tracking for the packet. -#. The Linux bridge ``qbr`` forwards the packet to the Open vSwitch - integration bridge ``br-int``. -#. The Open vSwitch integration bridge ``br-int`` adds the internal tag for - the project network. -#. For VLAN project networks: - - #. The Open vSwitch integration bridge ``br-int`` forwards the packet to - the Open vSwitch VLAN bridge ``br-vlan``. - #. The Open vSwitch VLAN bridge ``br-vlan`` replaces the internal tag - with the actual VLAN tag of the project network. - #. The Open vSwitch VLAN bridge ``br-vlan`` forwards the packet to - compute node 2 via the VLAN interface. - -#. For VXLAN and GRE project networks: - - #. The Open vSwitch integration bridge ``br-int`` forwards the packet to - the Open vSwitch tunnel bridge ``br-tun``. - #. The Open vSwitch tunnel bridge ``br-tun`` wraps the packet in a VXLAN - or GRE tunnel and adds a tag to identify the project network. - #. The Open vSwitch tunnel bridge ``br-tun`` forwards the packet to - compute node 2 via the tunnel interface. - -The following steps involve compute node 2: - -#. For VLAN project networks: - - #. The VLAN interface forwards the packet to the Open vSwitch VLAN - bridge ``br-vlan``. - #. The Open vSwitch VLAN bridge ``br-vlan`` forwards the packet to the - Open vSwitch integration bridge ``br-int``. - #. The Open vSwitch integration bridge ``br-int`` replaces the actual - VLAN tag of the project network with the internal tag. - -#. For VXLAN and GRE project networks: - - #. The tunnel interface forwards the packet to the Open vSwitch tunnel - bridge ``br-tun``. - #. The Open vSwitch tunnel bridge ``br-tun`` unwraps the packet and adds - the internal tag for the project network. - #. The Open vSwitch tunnel bridge ``br-tun`` forwards the packet to the - Open vSwitch integration bridge ``br-int``. - -#. The Open vSwitch integration bridge ``br-int`` forwards the packet to - the Linux bridge ``qbr``. -#. Security group rules (3) on the Linux bridge ``qbr`` handle firewalling - and state tracking for the packet. -#. The Linux bridge ``qbr`` forwards the packet to the ``tap`` interface (4) - on instance 2. - -.. note:: - - Return traffic follows similar steps in reverse. - -.. image:: figures/scenario-classic-ovs-flowew2.png - :alt: Network traffic flow - east/west for instances on the same network - -Example configuration -~~~~~~~~~~~~~~~~~~~~~ - -Use the following example configuration as a template to deploy this -scenario in your environment. - -Controller node --------------- - -#. In the ``neutron.conf`` file: - - * Configure common options: - - .. code-block:: ini - - [DEFAULT] - core_plugin = ml2 - service_plugins = router - allow_overlapping_ips = True - - * If necessary, :ref:`configure MTU `. - -#. In the ``ml2_conf.ini`` file: - - * Configure drivers and network types: - - .. code-block:: ini - - [ml2] - type_drivers = flat,vlan,gre,vxlan - tenant_network_types = vlan,gre,vxlan - mechanism_drivers = openvswitch,l2population - extension_drivers = port_security - - * Configure network mappings and ID ranges: - - .. code-block:: ini - - [ml2_type_flat] - flat_networks = external - - [ml2_type_vlan] - network_vlan_ranges = external,vlan:MIN_VLAN_ID:MAX_VLAN_ID - - [ml2_type_gre] - tunnel_id_ranges = MIN_GRE_ID:MAX_GRE_ID - - [ml2_type_vxlan] - vni_ranges = MIN_VXLAN_ID:MAX_VXLAN_ID - - Replace ``MIN_VLAN_ID``, ``MAX_VLAN_ID``, ``MIN_GRE_ID``, ``MAX_GRE_ID``, - ``MIN_VXLAN_ID``, and ``MAX_VXLAN_ID`` with VLAN, GRE, and VXLAN ID minimum - and maximum values suitable for your environment. - - .. note:: - - The first value in the ``tenant_network_types`` option becomes the - default project network type when a regular user creates a network. - - .. note:: - - The ``external`` value in the ``network_vlan_ranges`` option lacks VLAN - ID ranges to support use of arbitrary VLAN IDs by administrative users. - - * Configure the security group driver: - - .. code-block:: ini - - [securitygroup] - firewall_driver = iptables_hybrid - - * If necessary, :ref:`configure MTU `. - -#.
Start the following services: - - * Server - -Network node ------------- - -#. In the ``openvswitch_agent.ini`` file, configure the Open vSwitch agent: - - .. code-block:: ini - - [ovs] - local_ip = TUNNEL_INTERFACE_IP_ADDRESS - bridge_mappings = vlan:br-vlan,external:br-ex - - [agent] - tunnel_types = gre,vxlan - l2_population = True - - [securitygroup] - firewall_driver = iptables_hybrid - - Replace ``TUNNEL_INTERFACE_IP_ADDRESS`` with the IP address of the interface - that handles GRE/VXLAN project networks. - -#. In the ``l3_agent.ini`` file, configure the L3 agent: - - .. code-block:: ini - - [DEFAULT] - interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver - external_network_bridge = - - .. note:: - - The ``external_network_bridge`` option intentionally contains - no value. - -#. In the ``dhcp_agent.ini`` file, configure the DHCP agent: - - .. code-block:: ini - - [DEFAULT] - interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver - enable_isolated_metadata = True - -#. In the ``metadata_agent.ini`` file, configure the metadata agent: - - .. code-block:: ini - - [DEFAULT] - nova_metadata_ip = controller - metadata_proxy_shared_secret = METADATA_SECRET - - Replace ``METADATA_SECRET`` with a suitable value for your environment. - -#. Start the following services: - - * Open vSwitch - * Open vSwitch agent - * L3 agent - * DHCP agent - * Metadata agent - -Compute nodes -------------- - -#. In the ``openvswitch_agent.ini`` file, configure the Open vSwitch agent: - - .. code-block:: ini - - [ovs] - local_ip = TUNNEL_INTERFACE_IP_ADDRESS - bridge_mappings = vlan:br-vlan - - [agent] - tunnel_types = gre,vxlan - l2_population = True - - [securitygroup] - firewall_driver = iptables_hybrid - - Replace ``TUNNEL_INTERFACE_IP_ADDRESS`` with the IP address of the interface - that handles GRE/VXLAN project networks. - -#. Start the following services: - - * Open vSwitch - * Open vSwitch agent - -Verify service operation ------------------------- - -#. Source the administrative project credentials. -#. Verify presence and operation of the agents: - - .. code-block:: console - - $ neutron agent-list - +--------------------------------------+--------------------+----------+-------+----------------+---------------------------+ - | id | agent_type | host | alive | admin_state_up | binary | - +--------------------------------------+--------------------+----------+-------+----------------+---------------------------+ - | 1eaf6079-41c8-4b5b-876f-73b02753ff57 | Open vSwitch agent | compute1 | :-) | True | neutron-openvswitch-agent | - | 511c27b3-8317-4e27-8a0f-b158e4fb8368 | Metadata agent | network1 | :-) | True | neutron-metadata-agent | - | 7eae11ef-8157-4fd4-a352-bc841cf709f6 | Open vSwitch agent | network1 | :-) | True | neutron-openvswitch-agent | - | a9110ce6-22cc-4f78-9b2e-57f83aac68a3 | Open vSwitch agent | compute2 | :-) | True | neutron-openvswitch-agent | - | c41f3200-8eda-43ab-8135-573e826776d9 | DHCP agent | network1 | :-) | True | neutron-dhcp-agent | - | f897648e-7623-486c-8043-1b219eb2895a | L3 agent | network1 | :-) | True | neutron-l3-agent | - +--------------------------------------+--------------------+----------+-------+----------------+---------------------------+ - -Create initial networks ------------------------ - -This example creates a flat external network and a VXLAN project network. - -#. Source the administrative project credentials. -#. Create the external network: - - .. 
code-block:: console - - $ neutron net-create ext-net --router:external True \ - --provider:physical_network external --provider:network_type flat - Created a new network: - +---------------------------+--------------------------------------+ - | Field | Value | - +---------------------------+--------------------------------------+ - | admin_state_up | True | - | id | e5f9be2f-3332-4f2d-9f4d-7f87a5a7692e | - | name | ext-net | - | provider:network_type | flat | - | provider:physical_network | external | - | provider:segmentation_id | | - | router:external | True | - | shared | False | - | status | ACTIVE | - | subnets | | - | tenant_id | 96393622940e47728b6dcdb2ef405f50 | - +---------------------------+--------------------------------------+ - -#. Create a subnet on the external network: - - .. code-block:: console - - $ neutron subnet-create ext-net --name ext-subnet --allocation-pool \ - start=203.0.113.101,end=203.0.113.200 --disable-dhcp \ - --gateway 203.0.113.1 203.0.113.0/24 - Created a new subnet: - +-------------------+----------------------------------------------------+ - | Field | Value | - +-------------------+----------------------------------------------------+ - | allocation_pools | {"start": "203.0.113.101", "end": "203.0.113.200"} | - | cidr | 203.0.113.0/24 | - | dns_nameservers | | - | enable_dhcp | False | - | gateway_ip | 203.0.113.1 | - | host_routes | | - | id | cd9c15a1-0a66-4bbe-b1b4-4b7edd936f7a | - | ip_version | 4 | - | ipv6_address_mode | | - | ipv6_ra_mode | | - | name | ext-subnet | - | network_id | e5f9be2f-3332-4f2d-9f4d-7f87a5a7692e | - | tenant_id | 96393622940e47728b6dcdb2ef405f50 | - +-------------------+----------------------------------------------------+ - -.. note:: - - The example configuration contains ``vlan`` as the first project network - type. Only an administrative user can create other types of networks such as - GRE or VXLAN. The following commands use the ``admin`` project credentials - to create a VXLAN project network. - -#. Obtain the ID of a regular project. For example, using the ``demo`` project: - - .. code-block:: console - - $ openstack project show demo - +-------------+----------------------------------+ - | Field | Value | - +-------------+----------------------------------+ - | description | Demo Project | - | enabled | True | - | id | 443cd1596b2e46d49965750771ebbfe1 | - | name | demo | - +-------------+----------------------------------+ - -#. Create the project network: - - .. code-block:: console - - $ neutron net-create demo-net --tenant-id 443cd1596b2e46d49965750771ebbfe1 \ - --provider:network_type vxlan - Created a new network: - +---------------------------+--------------------------------------+ - | Field | Value | - +---------------------------+--------------------------------------+ - | admin_state_up | True | - | id | 6e9c5324-68d1-47a8-98d5-8268db955475 | - | name | demo-net | - | provider:network_type | vxlan | - | provider:physical_network | | - | provider:segmentation_id | 1 | - | router:external | False | - | shared | False | - | status | ACTIVE | - | subnets | | - | tenant_id | 443cd1596b2e46d49965750771ebbfe1 | - +---------------------------+--------------------------------------+ - -#. Source the regular project credentials. The following steps use the - ``demo`` project. -#. Create a subnet on the project network: - - .. 
code-block:: console - - $ neutron subnet-create demo-net --name demo-subnet --gateway 192.168.1.1 \ - 192.168.1.0/24 - Created a new subnet: - +-------------------+--------------------------------------------------+ - | Field | Value | - +-------------------+--------------------------------------------------+ - | allocation_pools | {"start": "192.168.1.2", "end": "192.168.1.254"} | - | cidr | 192.168.1.0/24 | - | dns_nameservers | | - | enable_dhcp | True | - | gateway_ip | 192.168.1.1 | - | host_routes | | - | id | c7b42e58-a2f4-4d63-b199-d266504c03c9 | - | ip_version | 4 | - | ipv6_address_mode | | - | ipv6_ra_mode | | - | name | demo-subnet | - | network_id | 6e9c5324-68d1-47a8-98d5-8268db955475 | - | tenant_id | 443cd1596b2e46d49965750771ebbfe1 | - +-------------------+--------------------------------------------------+ - -#. Create a project router: - - .. code-block:: console - - $ neutron router-create demo-router - Created a new router: - +-----------------------+--------------------------------------+ - | Field | Value | - +-----------------------+--------------------------------------+ - | admin_state_up | True | - | external_gateway_info | | - | id | 474a5b1f-d64c-4db9-b3b2-8ae9bb1b5970 | - | name | demo-router | - | routes | | - | status | ACTIVE | - | tenant_id | 443cd1596b2e46d49965750771ebbfe1 | - +-----------------------+--------------------------------------+ - -#. Add the project subnet as an interface on the router: - - .. code-block:: console - - $ neutron router-interface-add demo-router demo-subnet - Added interface 0fa57069-29fd-4795-87b7-c123829137e9 to router demo-router. - -#. Add a gateway to the external network on the router: - - .. code-block:: console - - $ neutron router-gateway-set demo-router ext-net - Set gateway for router demo-router - -Verify network operation ------------------------- - -#. On the network node, verify creation of the ``qrouter`` and ``qdhcp`` - namespaces: - - .. code-block:: console - - $ ip netns - qrouter-4d7928a0-4a3c-4b99-b01b-97da2f97e279 - qdhcp-353f5937-a2d3-41ba-8225-fa1af2538141 - - .. note:: - The ``qdhcp`` namespace might not exist until launching an instance. - -#. Determine the external network gateway IP address for the project network - on the router, typically the lowest IP address in the external subnet IP - allocation range: - - .. code-block:: console - - $ neutron router-port-list demo-router - +--------------------------------------+------+-------------------+--------------------------------------------------------------------------------------+ - | id | name | mac_address | fixed_ips | - +--------------------------------------+------+-------------------+--------------------------------------------------------------------------------------+ - | b1a894fd-aee8-475c-9262-4342afdc1b58 | | fa:16:3e:c1:20:55 | {"subnet_id": "c7b42e58-a2f4-4d63-b199-d266504c03c9", "ip_address": "192.168.1.1"} | - | ff5f93c6-3760-4902-a401-af78ff61ce99 | | fa:16:3e:54:d7:8c | {"subnet_id": "cd9c15a1-0a66-4bbe-b1b4-4b7edd936f7a", "ip_address": "203.0.113.101"} | - +--------------------------------------+------+-------------------+--------------------------------------------------------------------------------------+ - -#. On the controller node or any host with access to the external network, - ping the external network gateway IP address on the project router: - - .. code-block:: console - - $ ping -c 4 203.0.113.101 - PING 203.0.113.101 (203.0.113.101) 56(84) bytes of data. 
- 64 bytes from 203.0.113.101: icmp_req=1 ttl=64 time=0.619 ms - 64 bytes from 203.0.113.101: icmp_req=2 ttl=64 time=0.189 ms - 64 bytes from 203.0.113.101: icmp_req=3 ttl=64 time=0.165 ms - 64 bytes from 203.0.113.101: icmp_req=4 ttl=64 time=0.216 ms - - --- 203.0.113.101 ping statistics --- - 4 packets transmitted, 4 received, 0% packet loss, time 2999ms - rtt min/avg/max/mdev = 0.165/0.297/0.619/0.187 ms - -#. Source the regular project credentials. The following steps use the - ``demo`` project. -#. Launch an instance with an interface on the project network. -#. Obtain console access to the instance. - - #. Test connectivity to the project router: - - .. code-block:: console - - $ ping -c 4 192.168.1.1 - PING 192.168.1.1 (192.168.1.1) 56(84) bytes of data. - 64 bytes from 192.168.1.1: icmp_req=1 ttl=64 time=0.357 ms - 64 bytes from 192.168.1.1: icmp_req=2 ttl=64 time=0.473 ms - 64 bytes from 192.168.1.1: icmp_req=3 ttl=64 time=0.504 ms - 64 bytes from 192.168.1.1: icmp_req=4 ttl=64 time=0.470 ms - - --- 192.168.1.1 ping statistics --- - 4 packets transmitted, 4 received, 0% packet loss, time 2998ms - rtt min/avg/max/mdev = 0.357/0.451/0.504/0.055 ms - - #. Test connectivity to the Internet: - - .. code-block:: console - - $ ping -c 4 openstack.org - PING openstack.org (174.143.194.225) 56(84) bytes of data. - 64 bytes from 174.143.194.225: icmp_req=1 ttl=53 time=17.4 ms - 64 bytes from 174.143.194.225: icmp_req=2 ttl=53 time=17.5 ms - 64 bytes from 174.143.194.225: icmp_req=3 ttl=53 time=17.7 ms - 64 bytes from 174.143.194.225: icmp_req=4 ttl=53 time=17.5 ms - - --- openstack.org ping statistics --- - 4 packets transmitted, 4 received, 0% packet loss, time 3003ms - rtt min/avg/max/mdev = 17.431/17.575/17.734/0.143 ms - -#. Create the appropriate security group rules to allow ping and SSH access - to the instance. For example: - - .. code-block:: console - - $ openstack security group rule create default --proto icmp - +-----------------------+--------------------------------------+ - | Field | Value | - +-----------------------+--------------------------------------+ - | id | 5a61ab14-c1b7-4520-a7a7-e32f2e575983 | - | ip_protocol | icmp | - | ip_range | 0.0.0.0/0 | - | parent_group_id | cd15a4d3-d1c1-4702-a855-5d35027dd04c | - | port_range | | - | remote_security_group | | - +-----------------------+--------------------------------------+ - - $ openstack security group rule create default --proto tcp --dst-port 22 - +-----------------------+--------------------------------------+ - | Field | Value | - +-----------------------+--------------------------------------+ - | id | de7aad57-9df2-492f-bdaf-54da18b56dc8 | - | ip_protocol | tcp | - | ip_range | 0.0.0.0/0 | - | parent_group_id | cd15a4d3-d1c1-4702-a855-5d35027dd04c | - | port_range | 22:22 | - | remote_security_group | | - +-----------------------+--------------------------------------+ - -#. Create a floating IP address on the external network: - - .. code-block:: console - - $ openstack ip floating create ext-net - +-------------+--------------------------------------+ - | Field | Value | - +-------------+--------------------------------------+ - | fixed_ip | None | - | id | dad7a1f1-128c-4ed4-8bfa-1ed84b741a56 | - | instance_id | None | - | ip | 203.0.113.102 | - | pool | ext-net | - +-------------+--------------------------------------+ - -#. Associate the floating IP address with the instance: - - .. code-block:: console - - $ openstack ip floating add 203.0.113.102 demo-instance1 - - .. 
note:: - - This command provides no output. - -#. Verify addition of the floating IP address to the instance: - - .. code-block:: console - - $ openstack server list - +--------------------------------------+----------------+--------+------------------------------------+ - | ID | Name | Status | Networks | - +--------------------------------------+----------------+--------+------------------------------------+ - | 05682b91-81a1-464c-8f40-8b3da7ee92c5 | demo-instance1 | ACTIVE | private=192.168.1.3, 203.0.113.102 | - +--------------------------------------+----------------+--------+------------------------------------+ - -#. On the controller node or any host with access to the external network, - ping the floating IP address associated with the instance: - - .. code-block:: console - - $ ping -c 4 203.0.113.102 - PING 203.0.113.102 (203.0.113.112) 56(84) bytes of data. - 64 bytes from 203.0.113.102: icmp_req=1 ttl=63 time=3.18 ms - 64 bytes from 203.0.113.102: icmp_req=2 ttl=63 time=0.981 ms - 64 bytes from 203.0.113.102: icmp_req=3 ttl=63 time=1.06 ms - 64 bytes from 203.0.113.102: icmp_req=4 ttl=63 time=0.929 ms - - --- 203.0.113.102 ping statistics --- - 4 packets transmitted, 4 received, 0% packet loss, time 3002ms - rtt min/avg/max/mdev = 0.929/1.539/3.183/0.951 ms diff --git a/doc/networking-guide/source/scenario-dvr-ovs.rst b/doc/networking-guide/source/scenario-dvr-ovs.rst deleted file mode 100644 index cf35e710c2..0000000000 --- a/doc/networking-guide/source/scenario-dvr-ovs.rst +++ /dev/null @@ -1,1042 +0,0 @@ -.. _scenario-dvr-ovs: - -=================================================================== -Scenario: High Availability using Distributed Virtual Routing (DVR) -=================================================================== - -This scenario describes the high-availability Distributed Virtual Routing -(DVR) implementation of the OpenStack Networking service using the ML2 -plug-in and Open vSwitch. The example configuration creates one flat -external network and one VXLAN project (tenant) network. However, this -configuration also supports VLAN external networks, VLAN project networks, -and GRE project networks. - -The DVR architecture augments the classic architecture by providing direct -connectivity to external networks on compute nodes. For instances with a -floating IP address, routing between project and external networks resides -completely on the compute nodes to eliminate single point of failure and -performance issues with classic network nodes. Routing also resides -completely on the compute nodes for instances with a fixed or floating IP -address using project networks on the same distributed virtual router. -However, instances with a fixed IP address still rely on the network node for -routing and SNAT services between project and external networks. - -Prerequisites -~~~~~~~~~~~~~ - -These prerequisites define the minimal physical infrastructure and immediate -OpenStack service dependencies necessary to deploy this scenario. For example, -the Networking service immediately depends on the Identity service and the -Compute service immediately depends on the Networking service. These -dependencies lack services such as the Image service because the Networking -service does not immediately depend on it. However, the Compute service -depends on the Image service to launch an instance. The example configuration -in this scenario assumes basic configuration knowledge of Networking service -components. - -Infrastructure --------------- - -#. 
One controller node with one network interface: management. -#. One network node with four network interfaces: management, project tunnel - networks, VLAN project networks, and external (typically the Internet). - The Open vSwitch bridge ``br-vlan`` must contain a port on the VLAN - interface and the Open vSwitch bridge ``br-ex`` must contain a port on the - external interface. -#. At least two compute nodes with four network interfaces: management, - project tunnel networks, project VLAN networks, and external (typically - the Internet). The Open vSwitch bridge ``br-vlan`` must contain a port - on the VLAN interface and the Open vSwitch bridge ``br-ex`` must contain - a port on the external interface. - -In the example configuration, the management network uses 10.0.0.0/24, -the tunnel network uses 10.0.1.0/24, and the external network uses -203.0.113.0/24. The VLAN network does not require an IP address range -because it only handles layer 2 connectivity. - -.. image:: figures/scenario-dvr-hw.png - :alt: Hardware layout - -.. image:: figures/scenario-dvr-networks.png - :alt: Network layout - -.. image:: figures/scenario-dvr-services.png - :alt: Service layout - -.. note:: - - For VLAN external and project networks, the network infrastructure must - support VLAN tagging. For best performance, 10+ Gbps networks should support - jumbo frames. - -.. warning:: - - Linux distributions often package older releases of Open vSwitch that can - introduce issues during operation with the Networking service. We recommend - using at least the latest long-term stable (LTS) release of Open vSwitch - for the best experience and support from Open vSwitch. See - ``__ for available releases and the - `installation instructions - `__ for - building newer releases from source on various distributions. - - Implementing VXLAN networks requires Linux kernel 3.13 or newer. - -OpenStack services - controller node ------------------------------------- - -#. Operational SQL server with ``neutron`` database and appropriate - configuration in the ``neutron.conf`` file. -#. Operational message queue service with appropriate configuration - in the ``neutron.conf`` file. -#. Operational OpenStack Identity service with appropriate configuration - in the ``neutron.conf`` file. -#. Operational OpenStack Compute controller/management service with - appropriate configuration to use neutron in the ``nova.conf`` file. -#. Neutron server service, ML2 plug-in, and any dependencies. - -OpenStack services - network node ---------------------------------- - -#. Operational OpenStack Identity service with appropriate configuration - in the ``neutron.conf`` file. -#. Open vSwitch service, Open vSwitch agent, L3 agent, DHCP agent, metadata - agent, and any dependencies. - -OpenStack services - compute nodes ----------------------------------- - -#. Operational OpenStack Identity service with appropriate configuration - in the ``neutron.conf`` file. -#. Operational OpenStack Compute hypervisor service with appropriate - configuration to use neutron in the ``nova.conf`` file. -#. Open vSwitch service, Open vSwitch agent, L3 agent, metadata agent, and - any dependencies. - -Architecture -~~~~~~~~~~~~ - -.. image:: figures/scenario-dvr-general.png - :alt: Architecture overview - -.. note:: - - The term *north-south* generally defines network traffic that - travels between an instance and external network (typically the - Internet) and the term *east-west* generally defines network traffic - that travels between instances. 
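The agents in this architecture expect the ``br-vlan`` and ``br-ex`` bridges described in the prerequisites to already exist on the network and compute nodes. A minimal sketch of creating them with ``ovs-vsctl``, assuming ``eth2`` carries VLAN project networks and ``eth3`` attaches to the external network (substitute the interface names for your environment):

.. code-block:: console

   # ovs-vsctl add-br br-vlan
   # ovs-vsctl add-port br-vlan eth2
   # ovs-vsctl add-br br-ex
   # ovs-vsctl add-port br-ex eth3

The Open vSwitch agent creates the integration bridge ``br-int`` and the tunnel bridge ``br-tun`` itself.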
- -The network node contains the following network components: - -#. Open vSwitch agent managing virtual switches, connectivity among - them, and interaction via virtual ports with other network components - such as namespaces, Linux bridges, and underlying interfaces. -#. DHCP agent managing the ``qdhcp`` namespaces. The ``qdhcp`` namespaces - provide DHCP services for instances using project networks. -#. L3 agent managing the ``qrouter`` and ``snat`` namespaces. - - #. For instances using project networks on classic routers, the ``qrouter`` - namespaces route *north-south* and *east-west* network traffic and - perform DNAT/SNAT similar to the classic scenarios. They also route - metadata traffic between instances and the metadata agent. - #. For instances with a fixed IP address using project networks on - distributed routers, the ``snat`` namespaces perform SNAT for - *north-south* network traffic. - -#. Metadata agent handling metadata operations for instances using project - networks on classic routers. - -.. image:: figures/scenario-dvr-network1.png - :alt: Network node components - overview - -.. image:: figures/scenario-dvr-network2.png - :alt: Network node components - connectivity - -The compute nodes contain the following network components: - -#. Open vSwitch agent managing virtual switches, connectivity among - them, and interaction via virtual ports with other network components - such as namespaces, Linux bridges, and underlying interfaces. - -#. L3 agent managing the ``qrouter`` and ``fip`` namespaces. - - #. For instances with a floating IP address using project networks on - distributed routers, the ``fip`` namespaces route *north-south* network - traffic and perform DNAT/SNAT. - #. For instances with a fixed or floating IP address using project - networks on distributed routers, the ``qrouter`` namespaces route - *east-west* traffic. - -#. Metadata agent handling metadata operations for instances using project - networks on distributed routers. -#. Conventional Linux bridges handling security groups. Optionally, a native - OVS implementation can handle security groups. However, due to kernel and - OVS version requirements for it, this scenario uses conventional Linux - bridges. See :ref:`config-ovsfwdriver` for more information. - -.. image:: figures/scenario-dvr-compute1.png - :alt: Compute node components - overview - -.. image:: figures/scenario-dvr-compute2.png - :alt: Compute node components - connectivity - -Packet flow -~~~~~~~~~~~ - -Case 1: North/south for instances with a fixed IP address --------------------------------------------------------- - -For instances with a fixed IP address using project networks on distributed -routers, the network node routes *north-south* network traffic between -project and external networks. - -* External network - - * Network 203.0.113.0/24 - * Gateway 203.0.113.1 with MAC address *EG* - * Floating IP range 203.0.113.101 to 203.0.113.200 - * Project network router interface 203.0.113.101 *TR* - * Project network SNAT interface 192.168.1.2 with MAC address *TN* - -* Project network - - * Network 192.168.1.0/24 - * Gateway 192.168.1.1 with MAC address *TG* - -* Compute node 1 - - * Instance 1 192.168.1.11 with MAC address *I1* - * DVR MAC address *D1* - -* Instance 1 resides on compute node 1 and uses a project network. -* The instance sends a packet to a host on the external network. - -.. note:: - - This scenario supports both VLAN and GRE/VXLAN project networks.
- However, the packet flow only considers one instance using a VXLAN project - network for simplicity. - -The following steps involve compute node 1: - -#. The instance 1 ``tap`` interface (1) forwards the packet to the Linux - bridge ``qbr``. The packet contains destination MAC address *TG* - because the destination resides on another network. -#. Security group rules (2) on the Linux bridge ``qbr`` handle state tracking - for the packet. -#. The Linux bridge ``qbr`` forwards the packet to the Open vSwitch - integration bridge ``br-int``. -#. The Open vSwitch integration bridge ``br-int`` modifies the packet to - contain the internal tag for project network 1. -#. The Open vSwitch integration bridge ``br-int`` forwards the packet (3) - to the project network 1 gateway *TG* interface ``qr`` in the distributed - router namespace ``qrouter``. -#. The distributed router ``qrouter`` namespace resolves the project network 1 - SNAT interface MAC address *TN* on the ``sg`` interface (4) in the SNAT - namespace ``snat`` and forwards the packet to the Open vSwitch integration - bridge ``br-int``. -#. The Open vSwitch integration bridge ``br-int`` forwards the packet to the - Open vSwitch tunnel bridge ``br-tun``. -#. The Open vSwitch tunnel bridge ``br-tun`` replaces the packet source - MAC address *I1* with *D1*. -#. The Open vSwitch tunnel bridge ``br-tun`` wraps the packet in a VXLAN - tunnel that contains a tag for project network 1. -#. The Open vSwitch tunnel bridge ``br-tun`` forwards the packet to the - network node via the tunnel interface. - -The following steps involve the network node: - -#. The tunnel interface forwards the packet to the Open vSwitch tunnel - bridge ``br-tun``. -#. The Open vSwitch tunnel bridge ``br-tun`` unwraps the packet and adds - the internal tag for project network 1. -#. The Open vSwitch tunnel bridge ``br-tun`` forwards the packet to the - Open vSwitch integration bridge ``br-int``. -#. The Open vSwitch integration bridge ``br-int`` replaces the packet - source MAC address *D1* with *TG*. -#. The Open vSwitch integration bridge ``br-int`` forwards the packet to - the ``sg`` interface (4) in the SNAT namespace ``snat``. -#. The *iptables* service (5) performs SNAT on the packet using the project - network 1 router interface IP address *TR* on the ``qg`` interface (6). -#. The ``qg`` interface forwards the packet to the Open vSwitch external - bridge ``br-ex``. -#. The Open vSwitch external bridge ``br-ex`` forwards the packet to the - external network via the external interface. - -.. note:: - - Return traffic follows similar steps in reverse. - -.. image:: figures/scenario-dvr-flowns1.png - :alt: Network traffic flow - north/south with fixed IP address - -Case 2: North/south for instances with a floating IP address ------------------------------------------------------------- - -For instances with a floating IP address using project networks on -distributed routers, the compute node containing the instance routes -*north-south* network traffic between project and external networks, -avoiding the network node. Given the complexity of this case, the -following case covers both the flow of network traffic from the external -network to an instance and from an instance to the external network. 
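Because the relevant namespaces for this case live on the compute node rather than the network node, it can help to confirm that the ``qrouter`` and ``fip`` namespaces exist there before tracing the flow. A brief sketch using the namespace IDs from this scenario (yours differ, and the ``fip`` namespace only appears after a floating IP address is associated with an instance on that node):

.. code-block:: console

   $ ip netns
   fip-2c7bd9c2-8ab0-46ef-b7c1-023ce0452c24
   qrouter-4d7928a0-4a3c-4b99-b01b-97da2f97e279

This case uses the following example addressing: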
- -* External network - - * Network 203.0.113.0/24 - * Gateway 203.0.113.1 with MAC address *EG* - * Floating IP range 203.0.113.101 to 203.0.113.200 - * Project network router interface 203.0.113.101 *TR* - -* Project network - - * Network 192.168.1.0/24 - * Gateway 192.168.1.1 with MAC address *TG* - -* Compute node - - * Instance 1 192.168.1.11 with MAC address *I1* and floating - IP address 203.0.113.102 *F1* - * DVR MAC address *D1* - * DVR internal IP addresses *DA1* and *DA2* - -* Instance 1 resides on compute node 1 and uses a project network. -* Instance 1 sends a packet to a host on the external network. - -.. note:: - - This scenario supports both VLAN and GRE/VXLAN project networks. - However, the packet flow only considers one instance using a VXLAN project - network for simplicity. - -The following steps involve a packet inbound from the external network -to an instance on compute node 1: - -#. The external interface forwards the packet to the Open vSwitch - external bridge ``br-ex``. The packet contains destination IP - address *F1*. -#. The Open vSwitch external bridge ``br-ex`` forwards the packet to the - ``fg`` interface (1) in the floating IP namespace ``fip``. The ``fg`` - interface responds to any ARP requests for the instance floating IP - address *F1*. -#. The floating IP namespace ``fip`` routes the packet (2) to the - distributed router namespace ``qrouter`` using DVR internal IP - addresses *DA1* and *DA2*. The ``fpr`` interface (3) contains DVR - internal IP address *DA1* and the ``rfp`` interface (4) contains DVR - internal IP address *DA2*. -#. The floating IP namespace ``fip`` forwards the packet to the ``rfp`` - interface (5) in the distributed router namespace ``qrouter``. The ``rfp`` - interface also contains the instance floating IP address *F1*. -#. The *iptables* service (6) in the distributed router namespace ``qrouter`` - performs DNAT on the packet using the destination IP address. The ``qr`` - interface (7) contains the project network gateway IP address *TG*. -#. The distributed router namespace ``qrouter`` forwards the packet to the - Open vSwitch integration bridge ``br-int``. -#. The Open vSwitch integration bridge ``br-int`` forwards the packet to - the Linux bridge ``qbr``. -#. Security group rules (8) on the Linux bridge ``qbr`` handle firewalling - and state tracking for the packet. -#. The Linux bridge ``qbr`` forwards the packet to the instance ``tap`` - interface (9). - -The following steps involve a packet outbound from an instance on -compute node 1 to the external network: - -#. The instance 1 ``tap`` interface (9) forwards the packet to the Linux - bridge ``qbr``. The packet contains destination MAC address *TG1* - because the destination resides on another network. -#. Security group rules (8) on the Linux bridge ``qbr`` handle state tracking - for the packet. -#. The Linux bridge ``qbr`` forwards the packet to the Open vSwitch - integration bridge ``br-int``. -#. The Open vSwitch integration bridge ``br-int`` forwards the packet to - the ``qr`` interface (7) in the distributed router namespace ``qrouter``. - The ``qr`` interface contains the project network gateway IP address - *TG*. -#. The *iptables* service (6) performs SNAT on the packet using the ``rfp`` - interface (5) as the source IP address. The ``rfp`` interface contains - the instance floating IP address *F1*. -#. The distributed router namespace ``qrouter`` (2) routes the packet - to the floating IP namespace ``fip`` using DVR internal IP addresses - *DA1* and *DA2*. 
The ``rfp`` interface (4) contains DVR internal - IP address *DA2* and the ``fpr`` interface (3) contains DVR internal - IP address *DA1*. -#. The ``fg`` interface (1) in the floating IP namespace ``fip`` forwards the - packet to the Open vSwitch external bridge ``br-ex``. The ``fg`` interface - contains the project router external IP address *TE*. -#. The Open vSwitch external bridge ``br-ex`` forwards the packet to the - external network via the external interface. - -.. image:: figures/scenario-dvr-flowns2.png - :alt: Network traffic flow - north/south with floating IP address - -Case 3: East/west for instances using different networks on the same router ---------------------------------------------------------------------------- - -For instances with fixed or floating IP addresses using project networks on -distributed routers, the compute nodes route *east-west* network traffic -among the project networks that reside on the same distributed virtual -router, avoiding the network node. - -* Project network 1 - - * Network 192.168.1.0/24 - * Gateway 192.168.1.1 with MAC address *TG1* - -* Project network 2 - - * Network 192.168.2.0/24 - * Gateway 192.168.2.1 with MAC address *TG2* - -* Compute node 1 - - * Instance 1 192.168.1.11 with MAC address *I1* - * DVR MAC address *D1* - -* Compute node 2 - - * Instance 2 192.168.2.11 with MAC address *I2* - * DVR MAC address *D2* - -* Instance 1 resides on compute node 1 and uses project network 1. -* Instance 2 resides on compute node 2 and uses project network 2. -* Both project networks reside on the same distributed virtual router. -* Instance 1 sends a packet to instance 2. - -.. note:: - - This scenario supports both VLAN and GRE/VXLAN project networks. - However, the packet flow only considers one instance using a VXLAN project - network for simplicity. - -The following steps involve compute node 1: - -#. The instance 1 ``tap`` interface (1) forwards the packet to the Linux - bridge ``qbr``. The packet contains destination MAC address *TG1* - because the destination resides on another network. -#. Security group rules (2) on the Linux bridge ``qbr`` handle state tracking - for the packet. -#. The Linux bridge ``qbr`` forwards the packet to the Open vSwitch - integration bridge ``br-int``. -#. The Open vSwitch integration bridge ``br-int`` forwards the packet to - the project network 1 interface (3) in the distributed router namespace - ``qrouter``. -#. The distributed router namespace ``qrouter`` routes the packet to - project network 2. -#. The project network 2 interface (4) in the distributed router namespace - ``qrouter`` namespace forwards the packet to the Open vSwitch - integration bridge ``br-int``. -#. The Open vSwitch integration bridge ``br-int`` modifies the packet - to contain the internal tag for project network 2. -#. The Open vSwitch integration bridge ``br-int`` forwards the packet to - the Open vSwitch tunnel bridge ``br-tun``. -#. The Open vSwitch tunnel bridge ``br-tun`` replaces the packet source - MAC address *TG2* with *D1*. -#. The Open vSwitch tunnel bridge ``br-tun`` wraps the packet in a VXLAN - tunnel that contains a tag for project network 2. -#. The Open vSwitch tunnel bridge ``br-tun`` forwards the packet to - compute node 2 via the tunnel interface. - -The following steps involve compute node 2: - -#. The tunnel interface forwards the packet to the Open vSwitch tunnel - bridge ``br-tun``. -#. The Open vSwitch tunnel bridge ``br-tun`` unwraps the packet. -#. 
The Open vSwitch tunnel bridge ``br-tun`` forwards the packet to the - Open vSwitch integration bridge ``br-int``. -#. The Open vSwitch integration bridge ``br-int`` replaces the packet - source MAC address *D1* with *TG2*. -#. The Open vSwitch integration bridge ``br-int`` forwards the packet to - the Linux bridge ``qbr``. -#. Security group rules (7) on the Linux bridge ``qbr`` handle firewalling - and state tracking for the packet. -#. The Linux bridge ``qbr`` forwards the packet to the instance 2 ``tap`` - interface (8). - -.. note:: - - Packets arriving from compute node 1 do not traverse the project - network interfaces (5,6) in the ``qrouter`` namespace on compute node 2. - However, return traffic traverses them. - -.. image:: figures/scenario-dvr-flowew1.png - :alt: Network traffic flow - east/west for instances on different networks - -.. todo: - Case 4: East/west for instances using different networks on different - routers - Case 5: East/west for instances using the same network on the same router - -Example configuration -~~~~~~~~~~~~~~~~~~~~~ - -Use the following example configuration as a template to deploy this -scenario in your environment. - -.. note:: - - This configuration primarily supports the Kilo release. - -Controller node ---------------- - -#. In the ``neutron.conf`` file: - - * Configure common options and enable distributed routers by default: - - .. code-block:: ini - - [DEFAULT] - core_plugin = ml2 - service_plugins = router - allow_overlapping_ips = True - router_distributed = True - - .. note:: - - Configuring the ``router_distributed = True`` option creates distributed - routers by default for all users. Without it, only privileged users can - create distributed routers using the ``--distributed True`` option - during router creation. - - * If necessary, :ref:`configure MTU `. - -#. In the ``ml2_conf.ini`` file: - - * Configure drivers and network types: - - .. code-block:: ini - - [ml2] - type_drivers = flat,vlan,gre,vxlan - tenant_network_types = vlan,gre,vxlan - mechanism_drivers = openvswitch,l2population - extension_drivers = port_security - - * Configure network mappings and ID ranges: - - .. code-block:: ini - - [ml2_type_flat] - flat_networks = external - - [ml2_type_vlan] - network_vlan_ranges = external,vlan:MIN_VLAN_ID:MAX_VLAN_ID - - [ml2_type_gre] - tunnel_id_ranges = MIN_GRE_ID:MAX_GRE_ID - - [ml2_type_vxlan] - vni_ranges = MIN_VXLAN_ID:MAX_VXLAN_ID - - Replace ``MIN_VLAN_ID``, ``MAX_VLAN_ID``, ``MIN_GRE_ID``, ``MAX_GRE_ID``, - ``MIN_VXLAN_ID``, and ``MAX_VXLAN_ID`` with VLAN, GRE, and VXLAN ID minimum - and maximum values suitable for your environment. - - .. note:: - - The first value in the ``tenant_network_types`` option becomes the - default project network type when a non-privileged user creates a - network. - - .. note:: - - The ``external`` value in the ``network_vlan_ranges`` option lacks VLAN - ID ranges to support use of arbitrary VLAN IDs by privileged users. - - * Configure the security group driver: - - .. code-block:: ini - - [securitygroup] - firewall_driver = iptables_hybrid - - * If necessary, :ref:`configure MTU `. - -#. Start the following services: - - * Server - -Network node ------------- - -#. In the ``openvswitch_agent.ini`` file, configure the Open vSwitch agent: - - .. 
code-block:: ini - - [ovs] - local_ip = TUNNEL_INTERFACE_IP_ADDRESS - bridge_mappings = vlan:br-vlan,external:br-ex - - [agent] - tunnel_types = gre,vxlan - enable_distributed_routing = True - l2_population = True - arp_responder = True - - [securitygroup] - firewall_driver = iptables_hybrid - - Replace ``TUNNEL_INTERFACE_IP_ADDRESS`` with the IP address of the interface - that handles GRE/VXLAN project networks. - -#. In the ``l3_agent.ini`` file, configure the L3 agent: - - .. code-block:: ini - - [DEFAULT] - interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver - external_network_bridge = - agent_mode = dvr_snat - - .. note:: - - The ``external_network_bridge`` option intentionally contains - no value. - -#. In the ``dhcp_agent.ini`` file, configure the DHCP agent: - - .. code-block:: ini - - [DEFAULT] - interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver - enable_isolated_metadata = True - -#. In the ``metadata_agent.ini`` file, configure the metadata agent: - - .. code-block:: ini - - [DEFAULT] - nova_metadata_ip = controller - metadata_proxy_shared_secret = METADATA_SECRET - - Replace ``METADATA_SECRET`` with a suitable value for your environment. - -#. Start the following services: - - * Open vSwitch - * Open vSwitch agent - * L3 agent - * DHCP agent - * Metadata agent - -Compute nodes -------------- - -#. In the ``openvswitch_agent.ini`` file, configure the Open vSwitch agent: - - .. code-block:: ini - - [ovs] - local_ip = TUNNEL_INTERFACE_IP_ADDRESS - bridge_mappings = vlan:br-vlan,external:br-ex - - [agent] - tunnel_types = gre,vxlan - enable_distributed_routing = True - l2_population = True - arp_responder = True - - [securitygroup] - firewall_driver = iptables_hybrid - - Replace ``TUNNEL_INTERFACE_IP_ADDRESS`` with the IP address of the interface - that handles GRE/VXLAN project networks. - -#. In the ``l3_agent.ini`` file, configure the L3 agent: - - .. code-block:: ini - - [DEFAULT] - interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver - external_network_bridge = - agent_mode = dvr - - .. note:: - - The ``external_network_bridge`` option intentionally contains - no value. - -#. In the ``metadata_agent.ini`` file, configure the metadata agent: - - .. code-block:: ini - - [DEFAULT] - nova_metadata_ip = controller - metadata_proxy_shared_secret = METADATA_SECRET - - Replace ``METADATA_SECRET`` with a suitable value for your environment. - -#. Start the following services: - - * Open vSwitch - * Open vSwitch agent - * L3 agent - * Metadata agent - -Verify service operation ------------------------- - -#. Source the administrative project credentials. -#. Verify presence and operation of the agents: - - .. 
code-block:: console - - $ neutron agent-list - - +--------------------------------------+--------------------+----------+-------+----------------+---------------------------+ - | id | agent_type | host | alive | admin_state_up | binary | - +--------------------------------------+--------------------+----------+-------+----------------+---------------------------+ - | 10b084e5-4ab8-43d6-9b04-6d56f27f9cd4 | Metadata agent | network1 | :-) | True | neutron-metadata-agent | - | 2f90ef81-3eed-4ecf-b6b9-2d2c21dda85c | Open vSwitch agent | compute2 | :-) | True | neutron-openvswitch-agent | - | 319563ac-88f9-4352-b63e-e55beb673372 | DHCP agent | network1 | :-) | True | neutron-dhcp-agent | - | 3345723e-16e8-4b74-9d15-d7f1f977a3bd | Open vSwitch agent | compute1 | :-) | True | neutron-openvswitch-agent | - | 4643c811-a54a-41da-91a8-c2328bcaeea3 | Open vSwitch agent | network1 | :-) | True | neutron-openvswitch-agent | - | 5ad81671-efc3-4acc-9d5d-030a1c4f6a25 | L3 agent | compute1 | :-) | True | neutron-l3-agent | - | 641337fa-99c2-468d-8d7e-89277d6ba144 | Metadata agent | compute1 | :-) | True | neutron-metadata-agent | - | 9372e008-bd29-4436-8e01-8ddfd50d2b74 | L3 agent | network1 | :-) | True | neutron-l3-agent | - | af9d1169-1012-4440-9de2-778c8fce21b9 | L3 agent | compute2 | :-) | True | neutron-l3-agent | - | ee59e3ba-ee3c-4621-b3d5-c9d8123b6cc5 | Metadata agent | compute2 | :-) | True | neutron-metadata-agent | - +--------------------------------------+--------------------+----------+-------+----------------+---------------------------+ - -Create initial networks ------------------------ - -This example creates a flat external network and a VXLAN project network. - -#. Source the administrative project credentials. -#. Create the external network: - - .. code-block:: console - - $ neutron net-create ext-net --router:external \ - --provider:physical_network external --provider:network_type flat - - Created a new network: - +---------------------------+--------------------------------------+ - | Field | Value | - +---------------------------+--------------------------------------+ - | admin_state_up | True | - | id | 893aebb9-1c1e-48be-8908-6b947f3237b3 | - | name | ext-net | - | provider:network_type | flat | - | provider:physical_network | external | - | provider:segmentation_id | | - | router:external | True | - | shared | False | - | status | ACTIVE | - | subnets | | - | tenant_id | 54cd044c64d5408b83f843d63624e0d8 | - +---------------------------+--------------------------------------+ - -#. Create a subnet on the external network: - - .. code-block:: console - - $ neutron subnet-create ext-net 203.0.113.0/24 --allocation-pool \ - start=203.0.113.101,end=203.0.113.200 --disable-dhcp \ - --gateway 203.0.113.1 - - Created a new subnet: - +-------------------+------------------------------------------------------+ - | Field | Value | - +-------------------+------------------------------------------------------+ - | allocation_pools | {"start": "203.0.113.101", "end": "203.0.113.200"} | - | cidr | 203.0.113.0/24 | - | dns_nameservers | | - | enable_dhcp | False | - | gateway_ip | 203.0.113.1 | - | host_routes | | - | id | 9159f0dc-2b63-41cf-bd7a-289309da1391 | - | ip_version | 4 | - | ipv6_address_mode | | - | ipv6_ra_mode | | - | name | ext-subnet | - | network_id | 893aebb9-1c1e-48be-8908-6b947f3237b3 | - | tenant_id | 54cd044c64d5408b83f843d63624e0d8 | - +-------------------+------------------------------------------------------+ - -.. 
note:: - - The example configuration contains ``vlan`` as the first project network - type. Only a privileged user can create other types of networks such as - GRE or VXLAN. The following commands use the ``admin`` project credentials - to create a VXLAN project network. - -#. Obtain the ID of a regular project. For example, using the ``demo`` project: - - .. code-block:: console - - $ openstack project show demo - +-------------+----------------------------------+ - | Field | Value | - +-------------+----------------------------------+ - | description | Demo Project | - | enabled | True | - | id | cdef0071a0194d19ac6bb63802dc9bae | - | name | demo | - +-------------+----------------------------------+ - -#. Create the project network: - - .. code-block:: console - - $ neutron net-create demo-net --tenant-id cdef0071a0194d19ac6bb63802dc9bae \ - --provider:network_type vxlan - - Created a new network: - +---------------------------+--------------------------------------+ - | Field | Value | - +---------------------------+--------------------------------------+ - | admin_state_up | True | - | id | ac108952-6096-4243-adf4-bb6615b3de28 | - | name | demo-net | - | provider:network_type | vxlan | - | provider:physical_network | | - | provider:segmentation_id | 1 | - | router:external | False | - | shared | False | - | status | ACTIVE | - | subnets | | - | tenant_id | cdef0071a0194d19ac6bb63802dc9bae | - +---------------------------+--------------------------------------+ - -#. Source the regular project credentials. -#. Create a subnet on the project network: - - .. code-block:: console - - $ neutron subnet-create demo-net --name demo-subnet --gateway 192.168.1.1 \ - 192.168.1.0/24 - - Created a new subnet: - +-------------------+------------------------------------------------------+ - | Field | Value | - +-------------------+------------------------------------------------------+ - | allocation_pools | {"start": "192.168.1.2", "end": "192.168.1.254"} | - | cidr | 192.168.1.0/24 | - | dns_nameservers | | - | enable_dhcp | True | - | gateway_ip | 192.168.1.1 | - | host_routes | | - | id | 69d38773-794a-4e49-b887-6de6734e792d | - | ip_version | 4 | - | ipv6_address_mode | | - | ipv6_ra_mode | | - | name | demo-subnet | - | network_id | ac108952-6096-4243-adf4-bb6615b3de28 | - | tenant_id | cdef0071a0194d19ac6bb63802dc9bae | - +-------------------+------------------------------------------------------+ - -#. Create a distributed project router: - - .. code-block:: console - - $ neutron router-create demo-router - - Created a new router: - +-----------------------+--------------------------------------+ - | Field | Value | - +-----------------------+--------------------------------------+ - | admin_state_up | True | - | distributed | True | - | external_gateway_info | | - | ha | False | - | id | 635660ae-a254-4feb-8993-295aa9ec6418 | - | name | demo-router | - | routes | | - | status | ACTIVE | - | tenant_id | cdef0071a0194d19ac6bb63802dc9bae | - +-----------------------+--------------------------------------+ - - .. note:: - - Default policy might prevent the '`distributed`` flag from - appearing in the command output for non-privileged users. - -#. Attach the project network to the router: - - .. code-block:: console - - $ neutron router-interface-add demo-router demo-subnet - Added interface b1a894fd-aee8-475c-9262-4342afdc1b58 to router demo-router. - -#. Add a gateway to the external network for the project network on the - router: - - .. 
code-block:: console - - $ neutron router-gateway-set demo-router ext-net - Set gateway for router demo-router - -Verify network operation ------------------------- - -#. On the network node, verify creation of the `snat`, `qrouter`, and `qdhcp` - namespaces: - - .. code-block:: console - - $ ip netns - snat-4d7928a0-4a3c-4b99-b01b-97da2f97e279 - qrouter-4d7928a0-4a3c-4b99-b01b-97da2f97e279 - qdhcp-353f5937-a2d3-41ba-8225-fa1af2538141 - - .. note:: - - One or more namespaces might not exist until launching an instance. - -#. Source the administrative project credentials. -#. Determine the external network gateway IP address for the project network - on the router, typically the lowest IP address in the external subnet IP - allocation range: - - .. code-block:: console - - $ neutron router-port-list demo-router - +--------------------------------------+------+-------------------+--------------------------------------------------------------------------------------+ - | id | name | mac_address | fixed_ips | - +--------------------------------------+------+-------------------+--------------------------------------------------------------------------------------+ - | b1a894fd-aee8-475c-9262-4342afdc1b58 | | fa:16:3e:c1:20:55 | {"subnet_id": "69d38773-794a-4e49-b887-6de6734e792d", "ip_address": "192.168.1.1"} | - | ff5f93c6-3760-4902-a401-af78ff61ce99 | | fa:16:3e:54:d7:8c | {"subnet_id": "9159f0dc-2b63-41cf-bd7a-289309da1391", "ip_address": "203.0.113.101"} | - +--------------------------------------+------+-------------------+--------------------------------------------------------------------------------------+ - -#. On the controller node or any host with access to the external network, - ping the external network gateway IP address on the project router: - - .. code-block:: console - - $ ping -c 4 203.0.113.101 - PING 203.0.113.101 (203.0.113.101) 56(84) bytes of data. - 64 bytes from 203.0.113.101: icmp_req=1 ttl=64 time=0.619 ms - 64 bytes from 203.0.113.101: icmp_req=2 ttl=64 time=0.189 ms - 64 bytes from 203.0.113.101: icmp_req=3 ttl=64 time=0.165 ms - 64 bytes from 203.0.113.101: icmp_req=4 ttl=64 time=0.216 ms - - --- 203.0.113.101 ping statistics --- - 4 packets transmitted, 4 received, 0% packet loss, time 2999ms - rtt min/avg/max/mdev = 0.165/0.297/0.619/0.187 ms - -#. Source the regular project credentials. -#. Launch an instance with an interface on the project network. -#. On the compute node with the instance, verify creation of the ``qrouter`` - namespace: - - .. code-block:: console - - $ ip netns - qrouter-4d7928a0-4a3c-4b99-b01b-97da2f97e279 - -#. Obtain console access to the instance. - - #. Test connectivity to the project router: - - .. code-block:: console - - $ ping -c 4 192.168.1.1 - PING 192.168.1.1 (192.168.1.1) 56(84) bytes of data. - 64 bytes from 192.168.1.1: icmp_req=1 ttl=64 time=0.357 ms - 64 bytes from 192.168.1.1: icmp_req=2 ttl=64 time=0.473 ms - 64 bytes from 192.168.1.1: icmp_req=3 ttl=64 time=0.504 ms - 64 bytes from 192.168.1.1: icmp_req=4 ttl=64 time=0.470 ms - - --- 192.168.1.1 ping statistics --- - 4 packets transmitted, 4 received, 0% packet loss, time 2998ms - rtt min/avg/max/mdev = 0.357/0.451/0.504/0.055 ms - - #. Test connectivity to the Internet: - - .. code-block:: console - - $ ping -c 4 openstack.org - PING openstack.org (174.143.194.225) 56(84) bytes of data. 
- 64 bytes from 174.143.194.225: icmp_req=1 ttl=53 time=17.4 ms - 64 bytes from 174.143.194.225: icmp_req=2 ttl=53 time=17.5 ms - 64 bytes from 174.143.194.225: icmp_req=3 ttl=53 time=17.7 ms - 64 bytes from 174.143.194.225: icmp_req=4 ttl=53 time=17.5 ms - - --- openstack.org ping statistics --- - 4 packets transmitted, 4 received, 0% packet loss, time 3003ms - rtt min/avg/max/mdev = 17.431/17.575/17.734/0.143 ms - -#. Create the appropriate security group rules to allow ping and SSH access - to the instance. For example: - - .. code-block:: console - - $ nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0 - - +-------------+-----------+---------+-----------+--------------+ - | IP Protocol | From Port | To Port | IP Range | Source Group | - +-------------+-----------+---------+-----------+--------------+ - | icmp | -1 | -1 | 0.0.0.0/0 | | - +-------------+-----------+---------+-----------+--------------+ - - $ nova secgroup-add-rule default tcp 22 22 0.0.0.0/0 - - +-------------+-----------+---------+-----------+--------------+ - | IP Protocol | From Port | To Port | IP Range | Source Group | - +-------------+-----------+---------+-----------+--------------+ - | tcp | 22 | 22 | 0.0.0.0/0 | | - +-------------+-----------+---------+-----------+--------------+ - -#. Create a floating IP address on the external network: - - .. code-block:: console - - $ neutron floatingip-create ext-net - - Created a new floatingip: - +---------------------+--------------------------------------+ - | Field | Value | - +---------------------+--------------------------------------+ - | fixed_ip_address | | - | floating_ip_address | 203.0.113.102 | - | floating_network_id | 9bce64a3-a963-4c05-bfcd-161f708042d1 | - | id | 05e36754-e7f3-46bb-9eaa-3521623b3722 | - | port_id | | - | router_id | | - | status | DOWN | - | tenant_id | 7cf50047f8df4824bc76c2fdf66d11ec | - +---------------------+--------------------------------------+ - -#. Associate the floating IP address with the instance: - - .. code-block:: console - - $ nova floating-ip-associate demo-instance1 203.0.113.102 - -#. Verify addition of the floating IP address to the instance: - - .. code-block:: console - - $ nova list - - +--------------------------------------+----------------+--------+------------+-------------+-----------------------------------------+ - | ID | Name | Status | Task State | Power State | Networks | - +--------------------------------------+----------------+--------+------------+-------------+-----------------------------------------+ - | 05682b91-81a1-464c-8f40-8b3da7ee92c5 | demo-instance1 | ACTIVE | - | Running | demo-net=192.168.1.3, 203.0.113.102 | - +--------------------------------------+----------------+--------+------------+-------------+-----------------------------------------+ - -#. On the compute node with the instance, verify creation of the ``fip`` - namespace: - - .. code-block:: console - - $ ip netns - fip-2c7bd9c2-8ab0-46ef-b7c1-023ce0452c24 - -#. On the controller node or any host with access to the external network, - ping the floating IP address associated with the instance: - - .. code-block:: console - - $ ping -c 4 203.0.113.102 - PING 203.0.113.102 (203.0.113.112) 56(84) bytes of data. 
- 64 bytes from 203.0.113.102: icmp_req=1 ttl=63 time=3.18 ms - 64 bytes from 203.0.113.102: icmp_req=2 ttl=63 time=0.981 ms - 64 bytes from 203.0.113.102: icmp_req=3 ttl=63 time=1.06 ms - 64 bytes from 203.0.113.102: icmp_req=4 ttl=63 time=0.929 ms - - --- 203.0.113.102 ping statistics --- - 4 packets transmitted, 4 received, 0% packet loss, time 3002ms - rtt min/avg/max/mdev = 0.929/1.539/3.183/0.951 ms diff --git a/doc/networking-guide/source/scenario-l3ha-lb.rst b/doc/networking-guide/source/scenario-l3ha-lb.rst deleted file mode 100644 index a551d0232d..0000000000 --- a/doc/networking-guide/source/scenario-l3ha-lb.rst +++ /dev/null @@ -1,890 +0,0 @@ -.. _scenario-l3ha-lb: - -=============================================================== -Scenario: High availability using VRRP (L3HA) with Linux Bridge -=============================================================== - -This scenario describes a high-availability implementation of the OpenStack -Networking service using the ML2 plug-in and Linux bridge. - -This high availability implementation augments the :ref:`scenario-classic-lb` -architecture with Virtual Router Redundancy Protocol (VRRP) using -``keepalived`` to provide quick failover of layer-3 services. See -:ref:`scenario_l3ha_lb-packet_flow` for VRRP operation. Similar to the classic -scenario, all network traffic on a project network that requires routing -actively traverses only one network node regardless of the quantity of network -nodes providing HA for the router. Therefore, this high availability -implementation primarily addresses failure situations instead of bandwidth -constraints that limit performance. However, it supports random distribution -of routers on different network nodes to reduce the chances of bandwidth -constraints and to improve scaling. Also, this implementation does not address -situations where one or more layer-3 agents fail and the underlying virtual -networks continue to operate normally. Consider deploying -:ref:`scenario-dvr-ovs` to increase performance in addition to redundancy. As -of the Liberty release, you cannot combine the DVR and L3HA mechanisms. - -.. note:: - - The failover process only retains the state of network connections for - instances with a floating IP address. - -The example configuration creates one flat external network and one VXLAN -project (tenant) network. However, this configuration also supports VLAN -external and project networks. - -.. note:: - - Due to a bug, we recommend disabling the layer-2 population mechanism - for deployments using VXLAN project networks. For more information, see - ``__. - -Prerequisites -~~~~~~~~~~~~~ - -These prerequisites define the minimal physical infrastructure and immediate -OpenStack service dependencies necessary to deploy this scenario. For example, -the Networking service immediately depends on the Identity service and the -Compute service immediately depends on the Networking service. These -dependencies lack services such as the Image service because the Networking -service does not immediately depend on it. However, the Compute service -depends on the Image service to launch an instance. The example configuration -in this scenario assumes basic configuration knowledge of Networking service -components. For assistance with basic configuration of the Networking service, -see the Installation Guide. - -Infrastructure --------------- - -#. One controller node with one network interface: management. -#. 
At least two network nodes with four network interfaces: management, - project tunnel networks, project VLAN networks, and external (typically - the Internet). -#. At least two compute nodes with three network interfaces: management, - project tunnel networks, and project VLAN networks. - -To improve understanding of network traffic flow, the network and compute -nodes contain a separate network interface for project VLAN networks. In -production environments, you can use any network interface for VLAN project -networks. - -In the example configuration, the management network uses 10.0.0.0/24, -the tunnel network uses 10.0.1.0/24, the VRRP network uses 169.254.192.0/18, -and the external network uses 203.0.113.0/24. The VLAN network does not -require an IP address range because it only handles layer-2 connectivity. - -.. image:: figures/scenario-l3ha-hw.png - :alt: Hardware layout - -.. image:: figures/scenario-l3ha-networks.png - :alt: Network layout - -.. image:: figures/scenario-l3ha-lb-services.png - :alt: Service layout - -.. note:: - - For VLAN external and project networks, the network infrastructure must - support VLAN tagging. For best performance, 10+ Gbps networks should support - jumbo frames. - Using VXLAN project networks requires sufficient kernel support. - Kernel version 3.9 or newer is recommended. - -OpenStack services - controller node ------------------------------------- - -#. Operational SQL server with ``neutron`` database and appropriate - configuration in the ``neutron.conf`` file. -#. Operational message queue service with appropriate configuration - in the ``neutron.conf`` file. -#. Operational OpenStack Identity service with appropriate configuration - in the ``neutron.conf`` file. -#. Operational OpenStack Compute controller/management service with - appropriate configuration to use neutron in the ``nova.conf`` file. -#. Neutron server service, ML2 plug-in, and any dependencies. - -OpenStack services - network nodes ----------------------------------- - -#. Operational OpenStack Identity service with appropriate configuration - in the ``neutron.conf`` file. -#. Linux bridge agent, L3 agent, DHCP agent, metadata agent, and any - dependencies. - -OpenStack services - compute nodes ----------------------------------- - -#. Operational OpenStack Identity service with appropriate configuration - in the ``neutron.conf`` file. -#. Operational OpenStack Compute hypervisor service with appropriate - configuration to use neutron in the ``nova.conf`` file. -#. Linux bridge agent and any dependencies. - -Architecture -~~~~~~~~~~~~ - -.. image:: figures/scenario-l3ha-general.png - :alt: Architecture overview - -The network nodes contain the following components: - -#. Linux bridge agent managing virtual switches, connectivity among - them, and interaction via virtual ports with other network components - such as namespaces and underlying interfaces. -#. DHCP agent managing the ``qdhcp`` namespaces. The ``qdhcp`` namespaces - provide DHCP services for instances using project networks. -#. L3 agent managing the ``qrouter`` namespaces and VRRP using ``keepalived``. - The ``qrouter`` namespaces provide routing between project and external - networks and among project networks. They also route metadata traffic - between instances and the metadata agent. -#. Metadata agent handling metadata operations for instances. - -.. image:: figures/scenario-l3ha-lb-network1.png - :alt: Network node components - overview - -.. 
image:: figures/scenario-l3ha-lb-network2.png - :alt: Network node components - connectivity - -.. note:: - - For simplicity, the hidden project network that connects all HA routers for - a particular project uses the VXLAN network type. - -The compute nodes contain the following network components: - -#. Linux bridge agent managing virtual switches, connectivity among - them, and interaction via virtual ports with other network components - such as namespaces, security groups, and underlying interfaces. - -.. image:: figures/scenario-l3ha-lb-compute1.png - :alt: Compute node components - overview - -.. image:: figures/scenario-l3ha-lb-compute2.png - :alt: Compute node components - connectivity - -.. _scenario_l3ha_lb-packet_flow: - -Packet flow -~~~~~~~~~~~ - -The L3HA mechanism simply augments :ref:`scenario-classic-lb` with quick -failover of layer-3 services to another router if the master router -fails. - -During normal operation, the master router periodically transmits *heartbeat* -packets over a hidden project network that connects all HA routers for a -particular project. By default, this network uses the type indicated by the -first value in the ``tenant_network_types`` option in the -``ml2_conf.ini`` file. - -If the backup router stops receiving these packets, it assumes failure -of the master router and promotes itself to the master router by configuring -IP addresses on the interfaces in the ``qrouter`` namespace. In environments -with more than one backup router, the router with the next highest priority -becomes the master router. - -.. note:: - - The L3HA mechanism uses the same priority for all routers. Therefore, VRRP - promotes the backup router with the highest IP address to the master - router. - -Example configuration -~~~~~~~~~~~~~~~~~~~~~ - -Use the following example configuration as a template to deploy this -scenario in your environment. - -Controller node ---------------- - -#. In the ``neutron.conf`` file: - - * Configure common options, enable VRRP, and enable DHCP agent - redundancy: - - .. code-block:: ini - - [DEFAULT] - core_plugin = ml2 - service_plugins = router - allow_overlapping_ips = True - l3_ha = True - dhcp_agents_per_network = 2 - - .. note:: - - You can increase the ``dhcp_agents_per_network`` value up to the - number of nodes running the DHCP agent. - - * If necessary, :ref:`configure MTU `. - -#. In the ``ml2_conf.ini`` file: - - * Configure drivers and network types: - - .. code-block:: ini - - [ml2] - type_drivers = flat,vlan,vxlan - tenant_network_types = vlan,vxlan - mechanism_drivers = linuxbridge - extension_drivers = port_security - - * Configure network mappings and ID ranges: - - .. code-block:: ini - - [ml2_type_flat] - flat_networks = external - - [ml2_type_vlan] - network_vlan_ranges = external,vlan:MIN_VLAN_ID:MAX_VLAN_ID - - [ml2_type_vxlan] - vni_ranges = MIN_VXLAN_ID:MAX_VXLAN_ID - - Replace ``MIN_VLAN_ID``, ``MAX_VLAN_ID``, ``MIN_VXLAN_ID``, and - ``MAX_VXLAN_ID`` with VLAN and VXLAN ID minimum and maximum values suitable - for your environment. - - .. note:: - - The first value in the ``tenant_network_types`` option becomes the - default project network type when a regular user creates a network. - - .. note:: - - The ``external`` value in the ``network_vlan_ranges`` option lacks VLAN - ID ranges to support use of arbitrary VLAN IDs by administrative users. - - * Configure the security group driver: - - .. code-block:: ini - - [securitygroup] - firewall_driver = iptables - - * If necessary, :ref:`configure MTU `. - -#. 
Start the following services: - - * Server - -Network nodes -------------- - -#. In the ``linuxbridge_agent.ini`` file, configure the Linux bridge agent: - - .. code-block:: ini - - [linux_bridge] - physical_interface_mappings = vlan:PROJECT_VLAN_INTERFACE,external:EXTERNAL_INTERFACE - - [vxlan] - local_ip = TUNNEL_INTERFACE_IP_ADDRESS - l2_population = False - - [securitygroup] - firewall_driver = iptables - - Replace ``PROJECT_VLAN_INTERFACE`` and ``EXTERNAL_INTERFACE`` with the name - of the underlying interface that handles VLAN project networks and external - networks, respectively. Replace ``TUNNEL_INTERFACE_IP_ADDRESS`` with the IP - address of the interface that handles project tunnel networks. - -#. In the ``l3_agent.ini`` file, configure the L3 agent: - - .. code-block:: ini - - [DEFAULT] - interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver - external_network_bridge = - - .. note:: - - The ``external_network_bridge`` option intentionally contains - no value. - -#. In the ``dhcp_agent.ini`` file, configure the DHCP agent: - - .. code-block:: ini - - [DEFAULT] - interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver - enable_isolated_metadata = True - -#. In the ``metadata_agent.ini`` file, configure the metadata agent: - - .. code-block:: ini - - [DEFAULT] - nova_metadata_ip = controller - metadata_proxy_shared_secret = METADATA_SECRET - - Replace ``METADATA_SECRET`` with a suitable value for your environment. - -#. Start the following services: - - * Linux bridge agent - * L3 agent - * DHCP agent - * Metadata agent - -Compute nodes -------------- - -#. In the ``linuxbridge_agent.ini`` file, configure the Linux bridge agent: - - .. code-block:: ini - - [linux_bridge] - physical_interface_mappings = vlan:PROJECT_VLAN_INTERFACE - - [vxlan] - local_ip = TUNNEL_INTERFACE_IP_ADDRESS - l2_population = False - - [securitygroup] - firewall_driver = iptables - - Replace ``PROJECT_VLAN_INTERFACE`` and ``EXTERNAL_INTERFACE`` with the name - of the underlying interface that handles VLAN project networks and external - networks, respectively. Replace ``TUNNEL_INTERFACE_IP_ADDRESS`` with the IP - address of the interface that handles project tunnel networks. - -#. Start the following services: - - * Linux bridge agent - -Verify service operation ------------------------- - -#. Source the administrative project credentials. -#. Verify presence and operation of the agents: - - .. 
code-block:: console - - $ neutron agent-list - - +--------------------------------------+--------------------+----------+-------+----------------+---------------------------+ - | id | agent_type | host | alive | admin_state_up | binary | - +--------------------------------------+--------------------+----------+-------+----------------+---------------------------+ - | 7856ba29-5447-4392-b2e1-2c236bd5f479 | Metadata agent | network1 | :-) | True | neutron-metadata-agent | - | 85d5c715-08f6-425d-9efc-73633736bf06 | Linux bridge agent | network2 | :-) | True | neutron-linuxbridge-agent | - | 98d32a4d-1257-4b42-aea4-ad9bd7deea62 | Metadata agent | network2 | :-) | True | neutron-metadata-agent | - | b45096a1-7bfa-4816-8b3c-900b752a9c08 | DHCP agent | network1 | :-) | True | neutron-dhcp-agent | - | d4c45b8e-3b34-4192-80b1-bbdefb110c3f | Linux bridge agent | compute2 | :-) | True | neutron-linuxbridge-agent | - | e5a4e06b-dd9d-4b97-a09a-c8ba07706753 | Linux bridge agent | network1 | :-) | True | neutron-linuxbridge-agent | - | e8f8b228-5c3e-4378-b8f5-36b5c41cb3fe | L3 agent | network2 | :-) | True | neutron-l3-agent | - | f2d10c26-2136-4e6a-86e5-d22f67ab22d7 | Linux bridge agent | compute1 | :-) | True | neutron-linuxbridge-agent | - | f9f94732-08af-4f82-8908-fdcd69ab12e8 | L3 agent | network1 | :-) | True | neutron-l3-agent | - | fbeebad9-6590-4f78-bb29-7d58ea867878 | DHCP agent | network2 | :-) | True | neutron-dhcp-agent | - +--------------------------------------+--------------------+----------+-------+----------------+---------------------------+ - -Create initial networks ------------------------ - -This example creates a flat external network and a VXLAN project network. - -#. Source the administrative project credentials. -#. Create the external network: - - .. code-block:: console - - $ neutron net-create ext-net --router:external \ - --provider:physical_network external --provider:network_type flat - - Created a new network: - +---------------------------+--------------------------------------+ - | Field | Value | - +---------------------------+--------------------------------------+ - | admin_state_up | True | - | id | 5266fcbc-d429-4b21-8544-6170d1691826 | - | name | ext-net | - | provider:network_type | flat | - | provider:physical_network | external | - | provider:segmentation_id | | - | router:external | True | - | shared | False | - | status | ACTIVE | - | subnets | | - | tenant_id | 96393622940e47728b6dcdb2ef405f50 | - +---------------------------+--------------------------------------+ - -#. Create a subnet on the external network: - - .. code-block:: console - - $ neutron subnet-create ext-net 203.0.113.0/24 --name ext-subnet \ - --allocation-pool start=203.0.113.101,end=203.0.113.200 \ - --disable-dhcp --gateway 203.0.113.1 - - Created a new subnet: - +-------------------+----------------------------------------------------+ - | Field | Value | - +-------------------+----------------------------------------------------+ - | allocation_pools | {"start": "203.0.113.101", "end": "203.0.113.200"} | - | cidr | 203.0.113.0/24 | - | dns_nameservers | | - | enable_dhcp | False | - | gateway_ip | 203.0.113.1 | - | host_routes | | - | id | b32e0efc-8cc3-43ff-9899-873b94df0db1 | - | ip_version | 4 | - | ipv6_address_mode | | - | ipv6_ra_mode | | - | name | ext-subnet | - | network_id | 5266fcbc-d429-4b21-8544-6170d1691826 | - | tenant_id | 96393622940e47728b6dcdb2ef405f50 | - +-------------------+----------------------------------------------------+ - -.. 
note:: - - The example configuration contains ``vlan`` as the first project network - type. Only an administrative user can create other types of networks such as - VXLAN. The following commands use the ``admin`` project credentials to - create a VXLAN project network. - -#. Obtain the ID of a regular project. For example, using the ``demo`` project: - - .. code-block:: console - - $ openstack project show demo - - +-------------+----------------------------------+ - | Field | Value | - +-------------+----------------------------------+ - | description | Demo Tenant | - | enabled | True | - | id | f8207c03fd1e4b4aaf123efea4662819 | - | name | demo | - +-------------+----------------------------------+ - -#. Create a project network: - - .. code-block:: console - - $ neutron net-create demo-net \ - --tenant-id f8207c03fd1e4b4aaf123efea4662819 \ - --provider:network_type vxlan - - Created a new network: - +---------------------------+--------------------------------------+ - | Field | Value | - +---------------------------+--------------------------------------+ - | admin_state_up | True | - | id | d990778b-49ea-4beb-9336-6ea2248edf7d | - | name | demo-net | - | provider:network_type | vxlan | - | provider:physical_network | | - | provider:segmentation_id | 1 | - | router:external | False | - | shared | False | - | status | ACTIVE | - | subnets | | - | tenant_id | f8207c03fd1e4b4aaf123efea4662819 | - +---------------------------+--------------------------------------+ - -#. Source the regular project credentials. The following steps use the - ``demo`` project. -#. Create a subnet on the project network: - - .. code-block:: console - - $ neutron subnet-create demo-net 192.168.1.0/24 --name demo-subnet \ - --gateway 192.168.1.1 - - Created a new subnet: - +-------------------+--------------------------------------------------+ - | Field | Value | - +-------------------+--------------------------------------------------+ - | allocation_pools | {"start": "192.168.1.2", "end": "192.168.1.254"} | - | cidr | 192.168.1.0/24 | - | dns_nameservers | | - | enable_dhcp | True | - | gateway_ip | 192.168.1.1 | - | host_routes | | - | id | b7fe4e86-65d5-4e88-8266-88795ae4ac53 | - | ip_version | 4 | - | ipv6_address_mode | | - | ipv6_ra_mode | | - | name | demo-subnet | - | network_id | d990778b-49ea-4beb-9336-6ea2248edf7d | - | tenant_id | f8207c03fd1e4b4aaf123efea4662819 | - +-------------------+--------------------------------------------------+ - -#. Create a project router: - - .. code-block:: console - - $ neutron router-create demo-router - - Created a new router: - +-----------------------+--------------------------------------+ - | Field | Value | - +-----------------------+--------------------------------------+ - | admin_state_up | True | - | distributed | False | - | external_gateway_info | | - | ha | True | - | id | 557bf478-6afe-48af-872f-63513f7e9b92 | - | name | demo-router | - | routes | | - | status | ACTIVE | - | tenant_id | f8207c03fd1e4b4aaf123efea4662819 | - +-----------------------+--------------------------------------+ - - .. note:: - - The default ``policy.json`` file allows only administrative projects - to enable/disable HA during router creation and view the ``ha`` flag - for a router. - -#. Attach the project subnet as an interface on the router: - - .. code-block:: console - - $ neutron router-interface-add demo-router demo-subnet - Added interface 4cb8f7ea-28f2-4fe1-91f7-1c2823994fc4 to router demo-router. - -#. Add a gateway to the external network on the router: - - .. 
code-block:: console - - $ neutron router-gateway-set demo-router ext-net - Set gateway for router demo-router - -Verify network operation ------------------------- - -#. Source the administrative project credentials. -#. On the controller node, verify creation of the HA network: - - .. code-block:: console - - $ neutron net-list - - +--------------------------------------+----------------------------------------------------+-------------------------------------------------------+ - | id | name | subnets | - +--------------------------------------+----------------------------------------------------+-------------------------------------------------------+ - | b304e495-b80d-4dd7-9345-5455302397a7 | HA network tenant f8207c03fd1e4b4aaf123efea4662819 | bbb53715-f4e9-4ce3-bf2b-44b2aed2f4ef 169.254.192.0/18 | - | d990778b-49ea-4beb-9336-6ea2248edf7d | demo-net | b7fe4e86-65d5-4e88-8266-88795ae4ac53 192.168.1.0/24 | - | fde31a29-3e23-470d-bc9d-6218375dca4f | ext-net | 2e1d865a-ef56-41e9-aa31-63fb8a591003 203.0.113.0/24 | - +--------------------------------------+----------------------------------------------------+-------------------------------------------------------+ - -#. On the controller node, verify creation of the router on more than one - network node: - - .. code-block:: console - - $ neutron l3-agent-list-hosting-router demo-router - - +--------------------------------------+----------+----------------+-------+----------+ - | id | host | admin_state_up | alive | ha_state | - +--------------------------------------+----------+----------------+-------+----------+ - | e5a4e06b-dd9d-4b97-a09a-c8ba07706753 | network1 | True | :-) | active | - | 85d5c715-08f6-425d-9efc-73633736bf06 | network2 | True | :-) | standby | - +--------------------------------------+----------+----------------+-------+----------+ - - .. note:: - - Older versions of *python-neutronclient* do not support the ``ha_state`` field. - -#. On the controller node, verify creation of the HA ports on the - ``demo-router`` router: - - .. code-block:: console - - $ neutron router-port-list demo-router - - +--------------------------------------+-------------------------------------------------+-------------------+----------------------------------------------------------------------------------------+ - | id | name | mac_address | fixed_ips | - +--------------------------------------+-------------------------------------------------+-------------------+----------------------------------------------------------------------------------------+ - | 255d2e4b-33ba-4166-a13f-6531122641fe | HA port tenant f8207c03fd1e4b4aaf123efea4662819 | fa:16:3e:25:05:d7 | {"subnet_id": "bbb53715-f4e9-4ce3-bf2b-44b2aed2f4ef", "ip_address": "169.254.192.1"} | - | 374587d7-2acd-4156-8993-4294f788b55e | | fa:16:3e:82:a0:59 | {"subnet_id": "2e1d865a-ef56-41e9-aa31-63fb8a591003", "ip_address": "203.0.113.101"} | - | 8de3e172-5317-4c87-bdc1-f69e359de92e | | fa:16:3e:10:9f:f6 | {"subnet_id": "b7fe4e86-65d5-4e88-8266-88795ae4ac53", "ip_address": "192.168.1.1"} | - | 90d1a59f-b122-459d-a94a-162a104de629 | HA port tenant f8207c03fd1e4b4aaf123efea4662819 | fa:16:3e:ae:3b:22 | {"subnet_id": "bbb53715-f4e9-4ce3-bf2b-44b2aed2f4ef", "ip_address": "169.254.192.2"} | - +--------------------------------------+-------------------------------------------------+-------------------+----------------------------------------------------------------------------------------+ - -#. On the network nodes, verify creation of the ``qrouter`` and ``qdhcp`` - namespaces. 
- - Network node 1: - - .. code-block:: console - - $ ip netns - qrouter-7a46dba8-8846-498c-9e10-588664558473 - - Network node 2: - - .. code-block:: console - - $ ip netns - qrouter-7a46dba8-8846-498c-9e10-588664558473 - - Both ``qrouter`` namespaces should use the same UUID. - - .. note:: - - The ``qdhcp`` namespaces might not appear until launching an instance. - -#. On the network nodes, verify HA operation: - - Network node 1: - - .. code-block:: console - - $ ip netns exec qrouter-7a46dba8-8846-498c-9e10-588664558473 ip addr show - 11: ha-255d2e4b-33: mtu 1450 qdisc noqueue state UNKNOWN group default - link/ether fa:16:3e:25:05:d7 brd ff:ff:ff:ff:ff:ff - inet 169.254.192.1/18 brd 169.254.255.255 scope global ha-255d2e4b-33 - valid_lft forever preferred_lft forever - inet6 fe80::f816:3eff:fe25:5d7/64 scope link - valid_lft forever preferred_lft forever - 12: qr-8de3e172-53: mtu 1450 qdisc noqueue state UNKNOWN group default - link/ether fa:16:3e:10:9f:f6 brd ff:ff:ff:ff:ff:ff - inet 192.168.1.1/24 scope global qr-8de3e172-53 - valid_lft forever preferred_lft forever - inet6 fe80::f816:3eff:fe10:9ff6/64 scope link - valid_lft forever preferred_lft forever - 13: qg-374587d7-2a: mtu 1500 qdisc noqueue state UNKNOWN group default - link/ether fa:16:3e:82:a0:59 brd ff:ff:ff:ff:ff:ff - inet 203.0.113.101/24 scope global qg-374587d7-2a - valid_lft forever preferred_lft forever - inet6 fe80::f816:3eff:fe82:a059/64 scope link - valid_lft forever preferred_lft forever - - Network node 2: - - .. code-block:: console - - $ ip netns exec qrouter-7a46dba8-8846-498c-9e10-588664558473 ip addr show - 11: ha-90d1a59f-b1: mtu 1450 qdisc noqueue state UNKNOWN group default - link/ether fa:16:3e:ae:3b:22 brd ff:ff:ff:ff:ff:ff - inet 169.254.192.2/18 brd 169.254.255.255 scope global ha-90d1a59f-b1 - valid_lft forever preferred_lft forever - inet6 fe80::f816:3eff:feae:3b22/64 scope link - valid_lft forever preferred_lft forever - 12: qr-8de3e172-53: mtu 1450 qdisc noqueue state UNKNOWN group default - link/ether fa:16:3e:10:9f:f6 brd ff:ff:ff:ff:ff:ff - inet6 fe80::f816:3eff:fe10:9ff6/64 scope link - valid_lft forever preferred_lft forever - 13: qg-374587d7-2a: mtu 1500 qdisc noqueue state UNKNOWN group default - link/ether fa:16:3e:82:a0:59 brd ff:ff:ff:ff:ff:ff - inet6 fe80::f816:3eff:fe82:a059/64 scope link - valid_lft forever preferred_lft forever - - On each network node, the ``qrouter`` namespace should include the ``ha``, - ``qr``, and ``qg`` interfaces. On the master node, the ``qr`` interface - contains the project network gateway IP address and the ``qg`` interface - contains the project network router IP address on the external network. - On the backup node, the ``qr`` and ``qg`` interfaces should not contain - an IP address. On both nodes, the ``ha`` interface should contain a - unique IP address in the 169.254.192.0/18 range. - -#. On the network nodes, verify VRRP advertisements from the master node - HA interface IP address on the appropriate network interface. - - Network node 1: - - .. code-block:: console - - $ tcpdump -lnpi eth1 - 16:50:16.857294 IP 169.254.192.1 > 224.0.0.18: VRRPv2, Advertisement, vrid 1, prio 50, authtype none, intvl 2s, length 20 - 16:50:18.858436 IP 169.254.192.1 > 224.0.0.18: VRRPv2, Advertisement, vrid 1, prio 50, authtype none, intvl 2s, length 20 - 16:50:20.859677 IP 169.254.192.1 > 224.0.0.18: VRRPv2, Advertisement, vrid 1, prio 50, authtype none, intvl 2s, length 20 - - Network node 2: - - .. 
code-block:: console - - $ tcpdump -lnpi eth1 - 16:51:44.911640 IP 169.254.192.1 > 224.0.0.18: VRRPv2, Advertisement, vrid 1, prio 50, authtype none, intvl 2s, length 20 - 16:51:46.912591 IP 169.254.192.1 > 224.0.0.18: VRRPv2, Advertisement, vrid 1, prio 50, authtype none, intvl 2s, length 20 - 16:51:48.913900 IP 169.254.192.1 > 224.0.0.18: VRRPv2, Advertisement, vrid 1, prio 50, authtype none, intvl 2s, length 20 - - .. note:: - - The example output uses network interface ``eth1``. - -#. Determine the external network gateway IP address for the project network - on the router, typically the lowest IP address in the external subnet IP - allocation range: - - .. code-block:: console - - $ neutron router-port-list demo-router - - +--------------------------------------+-------------------------------------------------+-------------------+----------------------------------------------------------------------------------------+ - | id | name | mac_address | fixed_ips | - +--------------------------------------+-------------------------------------------------+-------------------+----------------------------------------------------------------------------------------+ - | 255d2e4b-33ba-4166-a13f-6531122641fe | HA port tenant f8207c03fd1e4b4aaf123efea4662819 | fa:16:3e:25:05:d7 | {"subnet_id": "bbb53715-f4e9-4ce3-bf2b-44b2aed2f4ef", "ip_address": "169.254.192.1"} | - | 374587d7-2acd-4156-8993-4294f788b55e | | fa:16:3e:82:a0:59 | {"subnet_id": "2e1d865a-ef56-41e9-aa31-63fb8a591003", "ip_address": "203.0.113.101"} | - | 8de3e172-5317-4c87-bdc1-f69e359de92e | | fa:16:3e:10:9f:f6 | {"subnet_id": "b7fe4e86-65d5-4e88-8266-88795ae4ac53", "ip_address": "192.168.1.1"} | - | 90d1a59f-b122-459d-a94a-162a104de629 | HA port tenant f8207c03fd1e4b4aaf123efea4662819 | fa:16:3e:ae:3b:22 | {"subnet_id": "bbb53715-f4e9-4ce3-bf2b-44b2aed2f4ef", "ip_address": "169.254.192.2"} | - +--------------------------------------+-------------------------------------------------+-------------------+----------------------------------------------------------------------------------------+ - -#. On the controller node or any host with access to the external network, - ping the external network gateway IP address on the project router: - - .. code-block:: console - - $ ping -c 4 203.0.113.101 - PING 203.0.113.101 (203.0.113.101) 56(84) bytes of data. - 64 bytes from 203.0.113.101: icmp_req=1 ttl=64 time=0.619 ms - 64 bytes from 203.0.113.101: icmp_req=2 ttl=64 time=0.189 ms - 64 bytes from 203.0.113.101: icmp_req=3 ttl=64 time=0.165 ms - 64 bytes from 203.0.113.101: icmp_req=4 ttl=64 time=0.216 ms - - --- 203.0.113.101 ping statistics --- - 4 packets transmitted, 4 received, 0% packet loss, time 2999ms - rtt min/avg/max/mdev = 0.165/0.297/0.619/0.187 ms - -#. Source the credentials for a non-privileged project. The following - steps use the ``demo`` project. -#. Create the appropriate security group rules to allow ping and SSH access - to the instance. For example: - - .. 
code-block:: console - - $ nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0 - - +-------------+-----------+---------+-----------+--------------+ - | IP Protocol | From Port | To Port | IP Range | Source Group | - +-------------+-----------+---------+-----------+--------------+ - | icmp | -1 | -1 | 0.0.0.0/0 | | - +-------------+-----------+---------+-----------+--------------+ - - $ nova secgroup-add-rule default tcp 22 22 0.0.0.0/0 - - +-------------+-----------+---------+-----------+--------------+ - | IP Protocol | From Port | To Port | IP Range | Source Group | - +-------------+-----------+---------+-----------+--------------+ - | tcp | 22 | 22 | 0.0.0.0/0 | | - +-------------+-----------+---------+-----------+--------------+ - -#. Launch an instance with an interface on the project network. For example, - using an existing *CirrOS* image: - - .. code-block:: console - - $ nova boot --flavor m1.tiny --image cirros \ - --nic net-id=d990778b-49ea-4beb-9336-6ea2248edf7d demo-instance1 - - +--------------------------------------+-----------------------------------------------+ - | Property | Value | - +--------------------------------------+-----------------------------------------------+ - | OS-DCF:diskConfig | MANUAL | - | OS-EXT-AZ:availability_zone | nova | - | OS-EXT-STS:power_state | 0 | - | OS-EXT-STS:task_state | scheduling | - | OS-EXT-STS:vm_state | building | - | OS-SRV-USG:launched_at | - | - | OS-SRV-USG:terminated_at | - | - | accessIPv4 | | - | accessIPv6 | | - | adminPass | Z3uAd2utPUNu | - | config_drive | | - | created | 2015-08-10T15:06:24Z | - | flavor | m1.tiny (1) | - | hostId | | - | id | 77149598-c839-400f-b948-db6993f0b40b | - | image | cirros (125733d9-8d37-4d70-9a64-1c989cfa8e9c) | - | key_name | | - | metadata | {} | - | name | demo-instance1 | - | os-extended-volumes:volumes_attached | [] | - | progress | 0 | - | security_groups | default | - | status | BUILD | - | tenant_id | f8207c03fd1e4b4aaf123efea4662819 | - | updated | 2015-08-10T15:06:25Z | - | user_id | bdd4e165bdf94b258ddd4856340ed01c | - +--------------------------------------+-----------------------------------------------+ - -#. Obtain console access to the instance. - - #. Test connectivity to the project router: - - .. code-block:: console - - $ ping -c 4 192.168.1.1 - PING 192.168.1.1 (192.168.1.1) 56(84) bytes of data. - 64 bytes from 192.168.1.1: icmp_req=1 ttl=64 time=0.357 ms - 64 bytes from 192.168.1.1: icmp_req=2 ttl=64 time=0.473 ms - 64 bytes from 192.168.1.1: icmp_req=3 ttl=64 time=0.504 ms - 64 bytes from 192.168.1.1: icmp_req=4 ttl=64 time=0.470 ms - - --- 192.168.1.1 ping statistics --- - 4 packets transmitted, 4 received, 0% packet loss, time 2998ms - rtt min/avg/max/mdev = 0.357/0.451/0.504/0.055 ms - - #. Test connectivity to the Internet: - - .. code-block:: console - - $ ping -c 4 openstack.org - PING openstack.org (174.143.194.225) 56(84) bytes of data. - 64 bytes from 174.143.194.225: icmp_req=1 ttl=53 time=17.4 ms - 64 bytes from 174.143.194.225: icmp_req=2 ttl=53 time=17.5 ms - 64 bytes from 174.143.194.225: icmp_req=3 ttl=53 time=17.7 ms - 64 bytes from 174.143.194.225: icmp_req=4 ttl=53 time=17.5 ms - - --- openstack.org ping statistics --- - 4 packets transmitted, 4 received, 0% packet loss, time 3003ms - rtt min/avg/max/mdev = 17.431/17.575/17.734/0.143 ms - -#. Create a floating IP address on the external network: - - .. 
code-block:: console - - $ neutron floatingip-create ext-net - - Created a new floatingip: - +---------------------+--------------------------------------+ - | Field | Value | - +---------------------+--------------------------------------+ - | fixed_ip_address | | - | floating_ip_address | 203.0.113.102 | - | floating_network_id | fde31a29-3e23-470d-bc9d-6218375dca4f | - | id | 05e36754-e7f3-46bb-9eaa-3521623b3722 | - | port_id | | - | router_id | | - | status | DOWN | - | tenant_id | f8207c03fd1e4b4aaf123efea4662819 | - +---------------------+--------------------------------------+ - -#. Associate the floating IP address with the instance: - - .. code-block:: console - - $ nova floating-ip-associate demo-instance1 203.0.113.102 - -#. Verify addition of the floating IP address to the instance: - - .. code-block:: console - - $ nova list - - +--------------------------------------+----------------+--------+------------+-------------+-----------------------------------------+ - | ID | Name | Status | Task State | Power State | Networks | - +--------------------------------------+----------------+--------+------------+-------------+-----------------------------------------+ - | 77149598-c839-400f-b948-db6993f0b40b | demo-instance1 | ACTIVE | - | Running | demo-net=192.168.1.3, 203.0.113.102 | - +--------------------------------------+----------------+--------+------------+-------------+-----------------------------------------+ - -#. On the controller node or any host with access to the external network, - ping the floating IP address associated with the instance: - - .. code-block:: console - - $ ping -c 4 203.0.113.102 - PING 203.0.113.102 (203.0.113.112) 56(84) bytes of data. - 64 bytes from 203.0.113.102: icmp_req=1 ttl=63 time=3.18 ms - 64 bytes from 203.0.113.102: icmp_req=2 ttl=63 time=0.981 ms - 64 bytes from 203.0.113.102: icmp_req=3 ttl=63 time=1.06 ms - 64 bytes from 203.0.113.102: icmp_req=4 ttl=63 time=0.929 ms - - --- 203.0.113.102 ping statistics --- - 4 packets transmitted, 4 received, 0% packet loss, time 3002ms - rtt min/avg/max/mdev = 0.929/1.539/3.183/0.951 ms diff --git a/doc/networking-guide/source/scenario-l3ha-ovs.rst b/doc/networking-guide/source/scenario-l3ha-ovs.rst deleted file mode 100644 index 02d2f10957..0000000000 --- a/doc/networking-guide/source/scenario-l3ha-ovs.rst +++ /dev/null @@ -1,903 +0,0 @@ -.. _scenario-l3ha-ovs: - -=============================================================== -Scenario: High availability using VRRP (L3HA) with Open vSwitch -=============================================================== - -This scenario describes a high-availability implementation of the OpenStack -Networking service using the ML2 plug-in and Open vSwitch (OVS). - -This high availability implementation augments the :ref:`scenario-classic-ovs` -architecture with Virtual Router Redundancy Protocol (VRRP) using -``keepalived`` to provide quick failover of layer-3 services. See -:ref:`scenario_l3ha_ovs-packet_flow` for VRRP operation. Similar to the classic -scenario, all network traffic on a project network that requires routing -actively traverses only one network node regardless of the quantity of network -nodes providing HA for the router. Therefore, this high availability -implementation primarily addresses failure situations instead of bandwidth -constraints that limit performance. However, it supports random distribution -of routers on different network nodes to reduce the chances of bandwidth -constraints and to improve scaling. 
Also, this implementation does not address -situations where one or more layer-3 agents fail and the underlying virtual -networks continue to operate normally. Consider deploying -:ref:`scenario-dvr-ovs` to increase performance in addition to redundancy. -As of the Liberty release, you cannot combine the DVR and L3HA mechanisms. - -.. note:: - - The failover process only retains the state of network connections for - instances with a floating IP address. - -The example configuration creates one flat external network and one VXLAN -project (tenant) network. However, this configuration also supports VLAN -external networks, VLAN project networks, and GRE project networks. - -Prerequisites -~~~~~~~~~~~~~ - -These prerequisites define the minimal physical infrastructure and immediate -OpenStack service dependencies necessary to deploy this scenario. For example, -the Networking service immediately depends on the Identity service and the -Compute service immediately depends on the Networking service. These -dependencies lack services such as the Image service because the Networking -service does not immediately depend on it. However, the Compute service -depends on the Image service to launch an instance. The example configuration -in this scenario assumes basic configuration knowledge of Networking service -components. For assistance with basic configuration of the Networking service, -see the Installation Guide. - -Infrastructure --------------- - -#. One controller node with one network interface: management. -#. Two network nodes with four network interfaces: management, project tunnel - networks, project VLAN networks, and external (typically the Internet). - The Open vSwitch bridge ``br-vlan`` must contain a port on the VLAN - interface and Open vSwitch bridge ``br-ex`` must contain a port on the - external interface. -#. At least one compute node with three network interfaces: management, - project tunnel networks, and project VLAN networks. The Open vSwitch - bridge ``br-vlan`` must contain a port on the VLAN interface. - -To improve understanding of network traffic flow, the network and compute -nodes contain a separate network interface for project VLAN networks. In -production environments, project VLAN networks can use any Open vSwitch -bridge with access to a network interface. For example, the ``br-tun`` -bridge. - -In the example configuration, the management network uses 10.0.0.0/24, -the tunnel network uses 10.0.1.0/24, the VRRP network uses 169.254.192.0/18, -and the external network uses 203.0.113.0/24. The VLAN network does not -require an IP address range because it only handles layer-2 connectivity. - -.. figure:: figures/scenario-l3ha-hw.png - :alt: Hardware layout - -.. figure:: figures/scenario-l3ha-networks.png - :alt: Network layout - -.. figure:: figures/scenario-l3ha-ovs-services.png - :alt: Service layout - -.. note:: - - For VLAN external and project networks, the network infrastructure must - support VLAN tagging. For best performance, 10+ Gbps networks should support - jumbo frames. - -.. warning:: - - Linux distributions often package older releases of Open vSwitch that can - introduce issues during operation with the Networking service. We recommend - using at least the latest long-term stable (LTS) release of Open vSwitch - for the best experience and support from Open vSwitch. See - ``__ for available releases and the - `installation instructions - `__ for - building newer releases from source on various distributions. 
- - Implementing VXLAN networks requires Linux kernel 3.13 or newer. - -OpenStack services - controller node ------------------------------------- - -#. Operational SQL server with ``neutron`` database and appropriate - configuration in the ``neutron.conf`` file. -#. Operational message queue service with appropriate configuration - in the ``neutron.conf`` file. -#. Operational OpenStack Identity service with appropriate configuration - in the ``neutron.conf`` file. -#. Operational OpenStack Compute controller/management service with - appropriate configuration to use OpenStack Networking in the - ``nova.conf`` file. -#. Neutron server service, ML2 plug-in, and any dependencies. - -OpenStack services - network nodes ----------------------------------- - -#. Operational OpenStack Identity service with appropriate configuration - in the ``neutron.conf`` file. -#. Open vSwitch service, Open vSwitch agent, L3 agent, DHCP agent, metadata - agent, and any dependencies. - -OpenStack services - compute nodes ----------------------------------- - -#. Operational OpenStack Identity service with appropriate configuration - in the ``neutron.conf`` file. -#. Operational OpenStack Compute controller/management service with - appropriate configuration to use OpenStack Networking in the - ``neutron.conf`` file. -#. Open vSwitch service, Open vSwitch agent, and any dependencies. - -Architecture -~~~~~~~~~~~~ - -.. figure:: figures/scenario-l3ha-general.png - :alt: Architecture overview - -The network nodes contain the following components: - -#. Open vSwitch agent managing virtual switches, connectivity among - them, and interaction via virtual ports with other network components - such as namespaces, Linux bridges, and underlying interfaces. -#. DHCP agent managing the ``qdhcp`` namespaces. The ``qdhcp`` namespaces - provide DHCP services for instances using project networks. -#. L3 agent managing the ``qrouter`` namespaces and VRRP using ``keepalived``. - The ``qrouter`` namespaces provide routing between project and external - networks and among project networks. They also route metadata traffic - between instances and the metadata agent. -#. Metadata agent handling metadata operations for instances. - -.. figure:: figures/scenario-l3ha-ovs-network1.png - :alt: Network node components - overview - -.. figure:: figures/scenario-l3ha-ovs-network2.png - :alt: Network node components - connectivity - -The compute nodes contain the following components: - -#. Open vSwitch agent managing virtual switches, connectivity among - them, and interaction via virtual ports with other network components - such as namespaces, Linux bridges, and underlying interfaces. -#. Conventional Linux bridges handling security groups. Optionally, a native - OVS implementation can handle security groups. However, due to kernel and - OVS version requirements for it, this scenario uses conventional Linux - bridges. See :ref:`config-ovsfwdriver` for more information. - -.. figure:: figures/scenario-l3ha-ovs-compute1.png - :alt: Compute node components - overview - -.. figure:: figures/scenario-l3ha-ovs-compute2.png - :alt: Compute node components - connectivity - -.. _scenario_l3ha_ovs-packet_flow: - -Packet flow -~~~~~~~~~~~ - -The L3HA mechanism simply augments :ref:`scenario-classic-ovs` with quick -failover of layer-3 services to another router if the master router -fails. 
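
The failover described here is driven by a ``keepalived`` process that the L3
agent runs inside each HA router's ``qrouter`` namespace. The following is a
minimal, illustrative sketch of the VRRP instance it manages, reusing the
router UUID, interface names, and addresses from this example; the file path
assumes the default ``ha_confs_path`` setting, and the exact generated
contents vary by release.

.. code-block:: console

   # Illustrative sketch only; the L3 agent generates and manages this file.
   # The path assumes the default ha_confs_path (/var/lib/neutron/ha_confs).
   # Adjust the router UUID and interface names for your environment.
   $ ip netns exec qrouter-7a46dba8-8846-498c-9e10-588664558473 \
     cat /var/lib/neutron/ha_confs/7a46dba8-8846-498c-9e10-588664558473/keepalived.conf
   vrrp_instance VR_1 {
       state BACKUP
       interface ha-255d2e4b-33
       virtual_router_id 1
       priority 50
       nopreempt
       advert_int 2
       virtual_ipaddress {
           192.168.1.1/24 dev qr-8de3e172-53
           203.0.113.101/24 dev qg-374587d7-2a
       }
   }

In this sketch, all routers share the same ``priority`` value, so VRRP
promotes the backup router with the highest IP address, as the note below
explains.
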
- -During normal operation, the master router periodically transmits *heartbeat* -packets over a hidden project network that connects all HA routers for a -particular project. By default, this network uses the type indicated by the -first value in the ``tenant_network_types`` option in the -``ml2_conf.ini`` file. - -If the backup router stops receiving these packets, it assumes failure -of the master router and promotes itself to the master router by configuring -IP addresses on the interfaces in the ``qrouter`` namespace. In environments -with more than one backup router, the router with the next highest priority -becomes the master router. - -.. note:: - - The L3HA mechanism uses the same priority for all routers. Therefore, VRRP - promotes the backup router with the highest IP address to the master - router. - -Example configuration -~~~~~~~~~~~~~~~~~~~~~ - -Use the following example configuration as a template to deploy this -scenario in your environment. - -Controller node ---------------- - -#. In the ``neutron.conf`` file: - - * Configure common options, enable VRRP, and enable DHCP agent - redundancy: - - .. code-block:: ini - - [DEFAULT] - core_plugin = ml2 - service_plugins = router - allow_overlapping_ips = True - l3_ha = True - dhcp_agents_per_network = 2 - - .. note:: - - You can increase the ``dhcp_agents_per_network`` value up to the - number of nodes running the DHCP agent. - - * If necessary, :ref:`configure MTU `. - -#. In the ``ml2_conf.ini`` file: - - * Configure drivers and network types: - - .. code-block:: ini - - [ml2] - type_drivers = flat,vlan,gre,vxlan - tenant_network_types = vlan,gre,vxlan - mechanism_drivers = openvswitch,l2population - extension_drivers = port_security - - * Configure network mappings and ID ranges: - - .. code-block:: ini - - [ml2_type_flat] - flat_networks = external - - [ml2_type_vlan] - network_vlan_ranges = external,vlan:MIN_VLAN_ID:MAX_VLAN_ID - - [ml2_type_gre] - tunnel_id_ranges = MIN_GRE_ID:MAX_GRE_ID - - [ml2_type_vxlan] - vni_ranges = MIN_VXLAN_ID:MAX_VXLAN_ID - - Replace ``MIN_VLAN_ID``, ``MAX_VLAN_ID``, ``MIN_GRE_ID``, ``MAX_GRE_ID``, - ``MIN_VXLAN_ID``, and ``MAX_VXLAN_ID`` with VLAN, GRE, and VXLAN ID minimum - and maximum values suitable for your environment. - - .. note:: - - The first value in the ``tenant_network_types`` option becomes the - default project network type when a regular user creates a network. - - .. note:: - - The ``external`` value in the ``network_vlan_ranges`` option lacks VLAN - ID ranges to support use of arbitrary VLAN IDs by administrative users. - - * Configure the security group driver: - - .. code-block:: ini - - [securitygroup] - firewall_driver = iptables_hybrid - - * If necessary, :ref:`configure MTU `. - -#. Start the following services: - - * Server - -Network nodes -------------- - -#. In the ``openvswitch_agent.ini`` file, configure the Open vSwitch agent: - - .. code-block:: ini - - [ovs] - local_ip = TUNNEL_INTERFACE_IP_ADDRESS - bridge_mappings = vlan:br-vlan,external:br-ex - - [agent] - tunnel_types = gre,vxlan - l2_population = True - - [securitygroup] - firewall_driver = iptables_hybrid - - Replace ``TUNNEL_INTERFACE_IP_ADDRESS`` with the IP address of the interface - that handles GRE/VXLAN project networks. - -#. In the ``l3_agent.ini`` file, configure the L3 agent: - - .. code-block:: ini - - [DEFAULT] - interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver - external_network_bridge = - - .. 
note:: - - The ``external_network_bridge`` option intentionally contains - no value. - -#. In the ``dhcp_agent.ini`` file, configure the DHCP agent: - - .. code-block:: ini - - [DEFAULT] - interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver - enable_isolated_metadata = True - -#. In the ``metadata_agent.ini`` file, configure the metadata agent: - - .. code-block:: ini - - [DEFAULT] - nova_metadata_ip = controller - metadata_proxy_shared_secret = METADATA_SECRET - - Replace ``METADATA_SECRET`` with a suitable value for your environment. - -#. Start the following services: - - * Open vSwitch - * Open vSwitch agent - * L3 agent - * DHCP agent - * Metadata agent - -Compute nodes -------------- - -#. In the ``openvswitch_agent.ini`` file, configure the Open vSwitch agent: - - .. code-block:: ini - - [ovs] - local_ip = TUNNEL_INTERFACE_IP_ADDRESS - bridge_mappings = vlan:br-vlan - - [agent] - tunnel_types = gre,vxlan - l2_population = False - - [securitygroup] - firewall_driver = iptables_hybrid - - Replace ``TUNNEL_INTERFACE_IP_ADDRESS`` with the IP address of the interface - that handles GRE/VXLAN project networks. - -#. Start the following services: - - * Open vSwitch - * Open vSwitch agent - -Verify service operation ------------------------- - -#. Source the administrative project credentials. -#. Verify presence and operation of the agents: - - .. code-block:: console - - $ neutron agent-list - - +--------------------------------------+--------------------+----------+-------+----------------+---------------------------+ - | id | agent_type | host | alive | admin_state_up | binary | - +--------------------------------------+--------------------+----------+-------+----------------+---------------------------+ - | 0bfe5b5d-0b82-434e-b8a0-524cc18da3a4 | DHCP agent | network1 | :-) | True | neutron-dhcp-agent | - | 25224bd5-0905-4ec9-9f2d-3b17cdaf5650 | Open vSwitch agent | compute2 | :-) | True | neutron-openvswitch-agent | - | 29afe014-273d-42f3-ad71-8a226e40dea6 | L3 agent | network1 | :-) | True | neutron-l3-agent | - | 3bed5093-e46c-4b0f-9460-3309c62254a3 | DHCP agent | network2 | :-) | True | neutron-dhcp-agent | - | 54aefb1c-35f7-4ebf-a848-3bb4fe81dcf7 | Open vSwitch agent | network1 | :-) | True | neutron-openvswitch-agent | - | 91c9cc03-1678-4d7a-b0a7-fa1ac24e5516 | Open vSwitch agent | compute1 | :-) | True | neutron-openvswitch-agent | - | ac7b3f77-7e4d-47a6-9dbd-3358cfb67b61 | Open vSwitch agent | network2 | :-) | True | neutron-openvswitch-agent | - | ceef5c49-3148-4c39-9e15-4985fc995113 | Metadata agent | network1 | :-) | True | neutron-metadata-agent | - | d27ac19b-fb4d-4fec-b81d-e8c65557b6ec | L3 agent | network2 | :-) | True | neutron-l3-agent | - | f072a1ec-f842-4223-a6b6-ec725419be85 | Metadata agent | network2 | :-) | True | neutron-metadata-agent | - +--------------------------------------+--------------------+----------+-------+----------------+---------------------------+ - -Create initial networks -~~~~~~~~~~~~~~~~~~~~~~~ - -This example creates a flat external network and a VXLAN project network. - -#. Source the administrative project credentials. -#. Create the external network: - - .. 
code-block:: console - - $ neutron net-create ext-net --router:external True \ - --provider:physical_network external --provider:network_type flat - - Created a new network: - +---------------------------+--------------------------------------+ - | Field | Value | - +---------------------------+--------------------------------------+ - | admin_state_up | True | - | id | 5266fcbc-d429-4b21-8544-6170d1691826 | - | name | ext-net | - | provider:network_type | flat | - | provider:physical_network | external | - | provider:segmentation_id | | - | router:external | True | - | shared | False | - | status | ACTIVE | - | subnets | | - | tenant_id | 96393622940e47728b6dcdb2ef405f50 | - +---------------------------+--------------------------------------+ - -#. Create a subnet on the external network: - - .. code-block:: console - - $ neutron subnet-create ext-net 203.0.113.0/24 --name ext-subnet \ - --allocation-pool start=203.0.113.101,end=203.0.113.200 \ - --disable-dhcp --gateway 203.0.113.1 - - Created a new subnet: - +-------------------+----------------------------------------------------+ - | Field | Value | - +-------------------+----------------------------------------------------+ - | allocation_pools | {"start": "203.0.113.101", "end": "203.0.113.200"} | - | cidr | 203.0.113.0/24 | - | dns_nameservers | | - | enable_dhcp | False | - | gateway_ip | 203.0.113.1 | - | host_routes | | - | id | b32e0efc-8cc3-43ff-9899-873b94df0db1 | - | ip_version | 4 | - | ipv6_address_mode | | - | ipv6_ra_mode | | - | name | ext-subnet | - | network_id | 5266fcbc-d429-4b21-8544-6170d1691826 | - | tenant_id | 96393622940e47728b6dcdb2ef405f50 | - +-------------------+----------------------------------------------------+ - -.. note:: - - The example configuration contains ``vlan`` as the first project network - type. Only an administrative user can create other types of networks such as - GRE or VXLAN. The following commands use the ``admin`` project credentials - to create a VXLAN project network. - -#. Obtain the ID of a regular project. For example, using the ``demo`` project: - - .. code-block:: console - - $ openstack project show demo - - +-------------+----------------------------------+ - | Field | Value | - +-------------+----------------------------------+ - | description | Demo Tenant | - | enabled | True | - | id | 443cd1596b2e46d49965750771ebbfe1 | - | name | demo | - +-------------+----------------------------------+ - -#. Create the project network: - - .. code-block:: console - - $ neutron net-create demo-net \ - --tenant-id 443cd1596b2e46d49965750771ebbfe1 \ - --provider:network_type vxlan - - Created a new network: - +---------------------------+--------------------------------------+ - | Field | Value | - +---------------------------+--------------------------------------+ - | admin_state_up | True | - | id | 7ac9a268-1ddd-453f-857b-0fd9552b645f | - | name | demo-net | - | provider:network_type | vxlan | - | provider:physical_network | | - | provider:segmentation_id | 1 | - | router:external | False | - | shared | False | - | status | ACTIVE | - | subnets | | - | tenant_id | 443cd1596b2e46d49965750771ebbfe1 | - +---------------------------+--------------------------------------+ - -#. Source the ``demo`` project credentials. The following steps use the - ``demo`` project. -#. Create a subnet on the project network: - - .. 
code-block:: console - - $ neutron subnet-create demo-net 192.168.1.0/24 --name demo-subnet \ - --gateway 192.168.1.1 - - Created a new subnet: - +-------------------+--------------------------------------------------+ - | Field | Value | - +-------------------+--------------------------------------------------+ - | allocation_pools | {"start": "192.168.1.2", "end": "192.168.1.254"} | - | cidr | 192.168.1.0/24 | - | dns_nameservers | | - | enable_dhcp | True | - | gateway_ip | 192.168.1.1 | - | host_routes | | - | id | 2945790c-5999-4693-b8e7-50a9fc7f46f5 | - | ip_version | 4 | - | ipv6_address_mode | | - | ipv6_ra_mode | | - | name | demo-subnet | - | network_id | 7ac9a268-1ddd-453f-857b-0fd9552b645f | - | tenant_id | 443cd1596b2e46d49965750771ebbfe1 | - +-------------------+--------------------------------------------------+ - -#. Create a project router: - - .. code-block:: console - - $ neutron router-create demo-router - - Created a new router: - +-----------------------+--------------------------------------+ - | Field | Value | - +-----------------------+--------------------------------------+ - | admin_state_up | True | - | distributed | False | - | external_gateway_info | | - | ha | True | - | id | 7a46dba8-8846-498c-9e10-588664558473 | - | name | demo-router | - | routes | | - | status | ACTIVE | - | tenant_id | 443cd1596b2e46d49965750771ebbfe1 | - +-----------------------+--------------------------------------+ - - .. note:: - - The default ``policy.json`` file allows only administrative projects - to enable/disable HA during router creation and view the ``ha`` flag - for a router. - -#. Add the project subnet as an interface on the router: - - .. code-block:: console - - $ neutron router-interface-add demo-router demo-subnet - Added interface 8de3e172-5317-4c87-bdc1-f69e359de92e to router demo-router. - -#. Add a gateway to the external network on the router: - - .. code-block:: console - - $ neutron router-gateway-set demo-router ext-net - Set gateway for router demo-router - -Verify network operation ------------------------- - -#. Source the administrative project credentials. -#. On the controller node, verify creation of the HA network: - - .. code-block:: console - - $ neutron net-list - - +--------------------------------------+----------------------------------------------------+-------------------------------------------------------+ - | id | name | subnets | - +--------------------------------------+----------------------------------------------------+-------------------------------------------------------+ - | 5266fcbc-d429-4b21-8544-6170d1691826 | ext-net | b32e0efc-8cc3-43ff-9899-873b94df0db1 203.0.113.0/24 | - | e029b568-0fd7-4d10-bb16-f9e014811d10 | HA network tenant 443cd1596b2e46d49965750771ebbfe1 | ee30083f-eb4c-41ea-8937-1bae65740af4 169.254.192.0/18 | - | 7ac9a268-1ddd-453f-857b-0fd9552b645f | demo-net | 2945790c-5999-4693-b8e7-50a9fc7f46f5 192.168.1.0/24 | - +--------------------------------------+----------------------------------------------------+-------------------------------------------------------+ - -#. On the controller node, verify creation of the router on more than one - network node: - - .. 
code-block:: console - - $ neutron l3-agent-list-hosting-router demo-router - - +--------------------------------------+----------+----------------+-------+----------+ - | id | host | admin_state_up | alive | ha_state | - +--------------------------------------+----------+----------------+-------+----------+ - | 29afe014-273d-42f3-ad71-8a226e40dea6 | network1 | True | :-) | active | - | d27ac19b-fb4d-4fec-b81d-e8c65557b6ec | network2 | True | :-) | standby | - +--------------------------------------+----------+----------------+-------+----------+ - - .. note:: - - Older versions of *python-neutronclient* do not support the ``ha_state`` field. - -#. On the controller node, verify creation of the HA ports on the - ``demo-router`` router: - - .. code-block:: console - - $ neutron router-port-list demo-router - - +--------------------------------------+-------------------------------------------------+-------------------+----------------------------------------------------------------------------------------+ - | id | name | mac_address | fixed_ips | - +--------------------------------------+-------------------------------------------------+-------------------+----------------------------------------------------------------------------------------+ - | 255d2e4b-33ba-4166-a13f-6531122641fe | HA port tenant 443cd1596b2e46d49965750771ebbfe1 | fa:16:3e:25:05:d7 | {"subnet_id": "8e8e4c7d-fa38-417d-a4e3-03ee5ab5493c", "ip_address": "169.254.192.1"} | - | 374587d7-2acd-4156-8993-4294f788b55e | | fa:16:3e:82:a0:59 | {"subnet_id": "b32e0efc-8cc3-43ff-9899-873b94df0db1", "ip_address": "203.0.113.101"} | - | 8de3e172-5317-4c87-bdc1-f69e359de92e | | fa:16:3e:10:9f:f6 | {"subnet_id": "2945790c-5999-4693-b8e7-50a9fc7f46f5", "ip_address": "192.168.1.1"} | - | 90d1a59f-b122-459d-a94a-162a104de629 | HA port tenant 443cd1596b2e46d49965750771ebbfe1 | fa:16:3e:ae:3b:22 | {"subnet_id": "8e8e4c7d-fa38-417d-a4e3-03ee5ab5493c", "ip_address": "169.254.192.2"} | - +--------------------------------------+-------------------------------------------------+-------------------+----------------------------------------------------------------------------------------+ - -#. On the network nodes, verify creation of the ``qrouter`` and ``qdhcp`` - namespaces: - - Network node 1: - - .. code-block:: console - - $ ip netns - qrouter-7a46dba8-8846-498c-9e10-588664558473 - - Network node 2: - - .. code-block:: console - - $ ip netns - qrouter-7a46dba8-8846-498c-9e10-588664558473 - - Both ``qrouter`` namespaces should use the same UUID. - - .. note:: - - The ``qdhcp`` namespaces might not exist until launching an instance. - -#. On the network nodes, verify HA operation: - - Network node 1: - - .. 
code-block:: console - - $ ip netns exec qrouter-7a46dba8-8846-498c-9e10-588664558473 ip addr show - 11: ha-255d2e4b-33: mtu 1450 qdisc noqueue state UNKNOWN group default - link/ether fa:16:3e:25:05:d7 brd ff:ff:ff:ff:ff:ff - inet 169.254.192.1/18 brd 169.254.255.255 scope global ha-255d2e4b-33 - valid_lft forever preferred_lft forever - inet6 fe80::f816:3eff:fe25:5d7/64 scope link - valid_lft forever preferred_lft forever - 12: qr-8de3e172-53: mtu 1450 qdisc noqueue state UNKNOWN group default - link/ether fa:16:3e:10:9f:f6 brd ff:ff:ff:ff:ff:ff - inet 192.168.1.1/24 scope global qr-8de3e172-53 - valid_lft forever preferred_lft forever - inet6 fe80::f816:3eff:fe10:9ff6/64 scope link - valid_lft forever preferred_lft forever - 13: qg-374587d7-2a: mtu 1500 qdisc noqueue state UNKNOWN group default - link/ether fa:16:3e:82:a0:59 brd ff:ff:ff:ff:ff:ff - inet 203.0.113.101/24 scope global qg-374587d7-2a - valid_lft forever preferred_lft forever - inet6 fe80::f816:3eff:fe82:a059/64 scope link - valid_lft forever preferred_lft forever - - Network node 2: - - .. code-block:: console - - $ ip netns exec qrouter-7a46dba8-8846-498c-9e10-588664558473 ip addr show - 11: ha-90d1a59f-b1: mtu 1450 qdisc noqueue state UNKNOWN group default - link/ether fa:16:3e:ae:3b:22 brd ff:ff:ff:ff:ff:ff - inet 169.254.192.2/18 brd 169.254.255.255 scope global ha-90d1a59f-b1 - valid_lft forever preferred_lft forever - inet6 fe80::f816:3eff:feae:3b22/64 scope link - valid_lft forever preferred_lft forever - 12: qr-8de3e172-53: mtu 1450 qdisc noqueue state UNKNOWN group default - link/ether fa:16:3e:10:9f:f6 brd ff:ff:ff:ff:ff:ff - inet6 fe80::f816:3eff:fe10:9ff6/64 scope link - valid_lft forever preferred_lft forever - 13: qg-374587d7-2a: mtu 1500 qdisc noqueue state UNKNOWN group default - link/ether fa:16:3e:82:a0:59 brd ff:ff:ff:ff:ff:ff - inet6 fe80::f816:3eff:fe82:a059/64 scope link - valid_lft forever preferred_lft forever - - On each network node, the ``qrouter`` namespace should include the ``ha``, - ``qr``, and ``qg`` interfaces. On the master node, the ``qr`` interface - contains the project network gateway IP address and the ``qg`` interface - contains the project router IP address on the external network. On the - backup node, the ``qr`` and ``qg`` interfaces should not contain an IP - address. On both nodes, the ``ha`` interface should contain a unique IP - address in the 169.254.192.0/18 range. - -#. On the network nodes, verify VRRP advertisements from the master node - HA interface IP address on the appropriate network interface: - - Network node 1: - - .. code-block:: console - - $ tcpdump -lnpi eth1 - 16:50:16.857294 IP 169.254.192.1 > 224.0.0.18: VRRPv2, Advertisement, vrid 1, prio 50, authtype none, intvl 2s, length 20 - 16:50:18.858436 IP 169.254.192.1 > 224.0.0.18: VRRPv2, Advertisement, vrid 1, prio 50, authtype none, intvl 2s, length 20 - 16:50:20.859677 IP 169.254.192.1 > 224.0.0.18: VRRPv2, Advertisement, vrid 1, prio 50, authtype none, intvl 2s, length 20 - - Network node 2: - - .. code-block:: console - - $ tcpdump -lnpi eth1 - 16:51:44.911640 IP 169.254.192.1 > 224.0.0.18: VRRPv2, Advertisement, vrid 1, prio 50, authtype none, intvl 2s, length 20 - 16:51:46.912591 IP 169.254.192.1 > 224.0.0.18: VRRPv2, Advertisement, vrid 1, prio 50, authtype none, intvl 2s, length 20 - 16:51:48.913900 IP 169.254.192.1 > 224.0.0.18: VRRPv2, Advertisement, vrid 1, prio 50, authtype none, intvl 2s, length 20 - - .. note:: - - The example output uses network interface ``eth1``. - -#. 
Determine the external network gateway IP address for the project network - on the router, typically the lowest IP address in the external subnet IP - allocation range: - - .. code-block:: console - - $ neutron router-port-list demo-router - - +--------------------------------------+-------------------------------------------------+-------------------+----------------------------------------------------------------------------------------+ - | id | name | mac_address | fixed_ips | - +--------------------------------------+-------------------------------------------------+-------------------+----------------------------------------------------------------------------------------+ - | 255d2e4b-33ba-4166-a13f-6531122641fe | HA port tenant 443cd1596b2e46d49965750771ebbfe1 | fa:16:3e:25:05:d7 | {"subnet_id": "8e8e4c7d-fa38-417d-a4e3-03ee5ab5493c", "ip_address": "169.254.192.1"} | - | 374587d7-2acd-4156-8993-4294f788b55e | | fa:16:3e:82:a0:59 | {"subnet_id": "b32e0efc-8cc3-43ff-9899-873b94df0db1", "ip_address": "203.0.113.101"} | - | 8de3e172-5317-4c87-bdc1-f69e359de92e | | fa:16:3e:10:9f:f6 | {"subnet_id": "2945790c-5999-4693-b8e7-50a9fc7f46f5", "ip_address": "192.168.1.1"} | - | 90d1a59f-b122-459d-a94a-162a104de629 | HA port tenant 443cd1596b2e46d49965750771ebbfe1 | fa:16:3e:ae:3b:22 | {"subnet_id": "8e8e4c7d-fa38-417d-a4e3-03ee5ab5493c", "ip_address": "169.254.192.2"} | - +--------------------------------------+-------------------------------------------------+-------------------+----------------------------------------------------------------------------------------+ - -#. On the controller node or any host with access to the external network, - ping the external network gateway IP address on the project router: - - .. code-block:: console - - $ ping -c 4 203.0.113.101 - PING 203.0.113.101 (203.0.113.101) 56(84) bytes of data. - 64 bytes from 203.0.113.101: icmp_req=1 ttl=64 time=0.619 ms - 64 bytes from 203.0.113.101: icmp_req=2 ttl=64 time=0.189 ms - 64 bytes from 203.0.113.101: icmp_req=3 ttl=64 time=0.165 ms - 64 bytes from 203.0.113.101: icmp_req=4 ttl=64 time=0.216 ms - - --- 203.0.113.101 ping statistics --- - 4 packets transmitted, 4 received, 0% packet loss, time 2999ms - rtt min/avg/max/mdev = 0.165/0.297/0.619/0.187 ms - -#. Source the regular project credentials. The following steps use the - ``demo`` project. -#. Create the appropriate security group rules to allow ping and SSH access - to the instance. For example: - - .. code-block:: console - - $ nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0 - - +-------------+-----------+---------+-----------+--------------+ - | IP Protocol | From Port | To Port | IP Range | Source Group | - +-------------+-----------+---------+-----------+--------------+ - | icmp | -1 | -1 | 0.0.0.0/0 | | - +-------------+-----------+---------+-----------+--------------+ - - $ nova secgroup-add-rule default tcp 22 22 0.0.0.0/0 - - +-------------+-----------+---------+-----------+--------------+ - | IP Protocol | From Port | To Port | IP Range | Source Group | - +-------------+-----------+---------+-----------+--------------+ - | tcp | 22 | 22 | 0.0.0.0/0 | | - +-------------+-----------+---------+-----------+--------------+ - -#. Launch an instance with an interface on the project network. For example, - using an existing *CirrOS* image: - - .. 
code-block:: console - - $ nova boot --flavor m1.tiny --image cirros \ - --nic net-id=7ac9a268-1ddd-453f-857b-0fd9552b645f demo-instance1 - - +--------------------------------------+-----------------------------------------------+ - | Property | Value | - +--------------------------------------+-----------------------------------------------+ - | OS-DCF:diskConfig | MANUAL | - | OS-EXT-AZ:availability_zone | nova | - | OS-EXT-STS:power_state | 0 | - | OS-EXT-STS:task_state | scheduling | - | OS-EXT-STS:vm_state | building | - | OS-SRV-USG:launched_at | - | - | OS-SRV-USG:terminated_at | - | - | accessIPv4 | | - | accessIPv6 | | - | adminPass | Z3uAd2utPUNu | - | config_drive | | - | created | 2015-08-10T15:06:24Z | - | flavor | m1.tiny (1) | - | hostId | | - | id | 77149598-c839-400f-b948-db6993f0b40b | - | image | cirros (125733d9-8d37-4d70-9a64-1c989cfa8e9c) | - | key_name | | - | metadata | {} | - | name | demo-instance1 | - | os-extended-volumes:volumes_attached | [] | - | progress | 0 | - | security_groups | default | - | status | BUILD | - | tenant_id | 443cd1596b2e46d49965750771ebbfe1 | - | updated | 2015-08-10T15:06:25Z | - | user_id | bdd4e165bdf94b258ddd4856340ed01c | - +--------------------------------------+-----------------------------------------------+ - -#. Obtain console access to the instance. - - #. Test connectivity to the project router: - - .. code-block:: console - - $ ping -c 4 192.168.1.1 - PING 192.168.1.1 (192.168.1.1) 56(84) bytes of data. - 64 bytes from 192.168.1.1: icmp_req=1 ttl=64 time=0.357 ms - 64 bytes from 192.168.1.1: icmp_req=2 ttl=64 time=0.473 ms - 64 bytes from 192.168.1.1: icmp_req=3 ttl=64 time=0.504 ms - 64 bytes from 192.168.1.1: icmp_req=4 ttl=64 time=0.470 ms - - --- 192.168.1.1 ping statistics --- - 4 packets transmitted, 4 received, 0% packet loss, time 2998ms - rtt min/avg/max/mdev = 0.357/0.451/0.504/0.055 ms - - #. Test connectivity to the Internet: - - .. code-block:: console - - $ ping -c 4 openstack.org - PING openstack.org (174.143.194.225) 56(84) bytes of data. - 64 bytes from 174.143.194.225: icmp_req=1 ttl=53 time=17.4 ms - 64 bytes from 174.143.194.225: icmp_req=2 ttl=53 time=17.5 ms - 64 bytes from 174.143.194.225: icmp_req=3 ttl=53 time=17.7 ms - 64 bytes from 174.143.194.225: icmp_req=4 ttl=53 time=17.5 ms - - --- openstack.org ping statistics --- - 4 packets transmitted, 4 received, 0% packet loss, time 3003ms - rtt min/avg/max/mdev = 17.431/17.575/17.734/0.143 ms - -#. Create a floating IP address on the external network: - - .. code-block:: console - - $ neutron floatingip-create ext-net - - Created a new floatingip: - +---------------------+--------------------------------------+ - | Field | Value | - +---------------------+--------------------------------------+ - | fixed_ip_address | | - | floating_ip_address | 203.0.113.102 | - | floating_network_id | 5266fcbc-d429-4b21-8544-6170d1691826 | - | id | 20a6b5dd-1c5c-460e-8a81-8b5cf1739307 | - | port_id | | - | router_id | | - | status | DOWN | - | tenant_id | 443cd1596b2e46d49965750771ebbfe1 | - +---------------------+--------------------------------------+ - -#. Associate the floating IP address with the instance: - - .. code-block:: console - - $ nova floating-ip-associate demo-instance1 203.0.113.102 - -#. Verify addition of the floating IP address to the instance: - - .. 
code-block:: console
-
-      $ nova list
-
-      +--------------------------------------+----------------+--------+------------+-------------+-----------------------------------------+
-      | ID                                   | Name           | Status | Task State | Power State | Networks                                |
-      +--------------------------------------+----------------+--------+------------+-------------+-----------------------------------------+
-      | 77149598-c839-400f-b948-db6993f0b40b | demo-instance1 | ACTIVE | -          | Running     | demo-net=192.168.1.3, 203.0.113.102     |
-      +--------------------------------------+----------------+--------+------------+-------------+-----------------------------------------+
-
-#. On the controller node or any host with access to the external network,
-   ping the floating IP address associated with the instance:
-
-   .. code-block:: console
-
-      $ ping -c 4 203.0.113.102
-      PING 203.0.113.102 (203.0.113.102) 56(84) bytes of data.
-      64 bytes from 203.0.113.102: icmp_req=1 ttl=63 time=3.18 ms
-      64 bytes from 203.0.113.102: icmp_req=2 ttl=63 time=0.981 ms
-      64 bytes from 203.0.113.102: icmp_req=3 ttl=63 time=1.06 ms
-      64 bytes from 203.0.113.102: icmp_req=4 ttl=63 time=0.929 ms
-
-      --- 203.0.113.102 ping statistics ---
-      4 packets transmitted, 4 received, 0% packet loss, time 3002ms
-      rtt min/avg/max/mdev = 0.929/1.539/3.183/0.951 ms
diff --git a/doc/networking-guide/source/scenario-provider-lb.rst b/doc/networking-guide/source/scenario-provider-lb.rst
deleted file mode 100644
index dbd48aee97..0000000000
--- a/doc/networking-guide/source/scenario-provider-lb.rst
+++ /dev/null
@@ -1,688 +0,0 @@
-.. _scenario-provider-lb:
-
-=============================================
-Scenario: Provider networks with Linux bridge
-=============================================
-
-This scenario describes a provider networks implementation of the
-OpenStack Networking service using the ML2 plug-in with Linux bridge.
-
-Provider networks generally offer simplicity, performance, and reliability at
-the cost of flexibility. Unlike other scenarios, only administrators can
-manage provider networks because they require configuration of physical
-network infrastructure. Also, provider networks lack the concept of fixed
-and floating IP addresses because they only handle layer-2 connectivity for
-instances.
-
-In many cases, operators who are already familiar with virtual networking
-architectures that rely on physical network infrastructure for layer-2,
-layer-3, or other services can seamlessly deploy the OpenStack Networking
-service. In particular, this scenario appeals to operators looking to
-migrate from the Compute networking service (nova-net) to the OpenStack
-Networking service. Over time, operators can build on this minimal
-deployment to enable more cloud networking features.
-
-Before OpenStack Networking introduced Distributed Virtual Routers (DVR), all
-network traffic traversed one or more dedicated network nodes which limited
-performance and reliability. Physical network infrastructures typically offer
-better performance and reliability than general-purpose hosts that handle
-various network operations in software.
-
-In general, the OpenStack Networking software components that handle layer-3
-operations impact performance and reliability the most. To improve performance
-and reliability, provider networks move layer-3 operations to the physical
-network infrastructure.
- -In one particular use case, the OpenStack deployment resides in a mixed -environment with conventional virtualization and bare-metal hosts that use a -sizable physical network infrastructure. Applications that run inside the -OpenStack deployment might require direct layer-2 access, typically using -VLANs, to applications outside of the deployment. - -In comparison to provider networks with Open vSwitch (OVS), this scenario -relies completely on native Linux networking services which makes it the -simplest of all scenarios in this guide. - -The example configuration creates a VLAN provider network. However, it also -supports flat (untagged or native) provider networks. - -Prerequisites -~~~~~~~~~~~~~ - -These prerequisites define the minimum physical infrastructure and OpenStack -service dependencies that you need to deploy this scenario. For example, the -Networking service immediately depends on the Identity service, and the Compute -service immediately depends on the Networking service. These dependencies lack -services such as the Image service because the Networking service does not -immediately depend on it. However, the Compute service depends on the Image -service to launch an instance. The example configuration in this scenario -assumes basic configuration knowledge of Networking service components. - -For illustration purposes, the management network uses 10.0.0.0/24 and -provider networks use 192.0.2.0/24, 198.51.100.0/24, and 203.0.113.0/24. - -Infrastructure --------------- - -#. One controller node with two network interfaces: management and - provider. The provider interface connects to a generic network that - physical network infrastructure switches/routes to external networks - (typically the Internet). -#. At least two compute nodes with two network interfaces: management - and provider. The provider interface connects to a generic network that - physical network infrastructure switches/routes to external networks - (typically the Internet). - -.. image:: figures/scenario-provider-hw.png - :alt: Hardware layout - -.. image:: figures/scenario-provider-networks.png - :alt: Network layout - -.. image:: figures/scenario-provider-lb-services.png - :alt: Service layout - -OpenStack services - controller node ------------------------------------- - -#. Operational SQL server with ``neutron`` database and appropriate - configuration in the ``neutron.conf`` file. -#. Operational message queue service with appropriate configuration in - the ``neutron.conf`` file. -#. Operational OpenStack Identity service with appropriate - configuration in the ``neutron.conf`` file. -#. Operational OpenStack Compute controller/management service with - appropriate configuration to use neutron in the ``nova.conf`` file. -#. Neutron server service, ML2 plug-in, Linux bridge agent, DHCP agent, - and any dependencies. - -OpenStack services - compute nodes ----------------------------------- - -#. Operational OpenStack Identity service with appropriate - configuration in the ``neutron.conf`` file. -#. Operational OpenStack Compute controller/management service with - appropriate configuration to use neutron in the ``nova.conf`` file. -#. ML2 plug-in, Linux bridge agent, and any dependencies. - -Architecture -~~~~~~~~~~~~ - -The general provider network architecture uses physical network -infrastructure to handle switching and routing of network traffic. - -.. 
image:: figures/scenario-provider-general.png - :alt: Architecture overview - -The controller node contains the following network components: - -#. Linux bridge agent managing virtual switches, connectivity among - them, and interaction via virtual ports with other network components - such as namespaces and underlying interfaces. -#. DHCP agent managing the ``qdhcp`` namespaces. The ``qdhcp`` namespaces - provide DHCP services for instances using provider networks. - -.. image:: figures/scenario-provider-lb-controller1.png - :alt: Controller node components - overview - -.. image:: figures/scenario-provider-lb-controller2.png - :alt: Controller node components - connectivity - -.. note:: - - For illustration purposes, the diagram contains two different provider - networks. - -The compute nodes contain the following network components: - -#. Linux bridge agent managing virtual switches, connectivity among - them, and interaction via virtual ports with other network components - such as namespaces, security groups, and underlying interfaces. - -.. image:: figures/scenario-provider-lb-compute1.png - :alt: Compute node components - overview - -.. image:: figures/scenario-provider-lb-compute2.png - :alt: Compute node components - connectivity - -.. note:: - - For illustration purposes, the diagram contains two different provider - networks. - -Packet flow -~~~~~~~~~~~ - -.. note:: - - *North-south* network traffic travels between an instance and - external network, typically the Internet. *East-west* network - traffic travels between instances. - -Case 1: North-south -------------------- - -The physical network infrastructure handles routing and potentially other -services between the provider and external network. In this case, *provider* -and *external* simply differentiate between a network available to instances -and a network only accessible via router, respectively, to illustrate that -the physical network infrastructure handles routing. However, provider -networks support direct connection to *external* networks such as the -Internet. - -* External network - - * Network 203.0.113.0/24 - -* Provider network (VLAN) - - * Network 192.0.2.0/24 - * Gateway 192.0.2.1 with MAC address *TG* - -* Compute node 1 - - * Instance 1 192.0.2.11 with MAC address *I1* - -* Instance 1 resides on compute node 1 and uses a provider network. -* The instance sends a packet to a host on the external network. - -The following steps involve compute node 1. - -#. The instance 1 ``tap`` interface (1) forwards the packet to the provider - bridge ``qbr``. The packet contains destination MAC address *TG* - because the destination resides on another network. -#. Security group rules (2) on the provider bridge ``qbr`` handle firewalling - and tracking for the packet. -#. The provider bridge ``qbr`` forwards the packet to the logical VLAN - interface ``device.sid`` where *device* references the underlying - physical provider interface and *sid* contains the provider network - segmentation ID. -#. The logical VLAN interface ``device.sid`` forwards the packet to the - physical network via the physical provider interface. - -The following steps involve the physical network infrastructure: - -#. A switch (3) handles any VLAN tag operations between the provider network - and the router (4). -#. A router (4) routes the packet from the provider network to the external - network. -#. A switch (3) handles any VLAN tag operations between the router (4) and - the external network. -#. 
A switch (3) forwards the packet to the external network. - -.. note:: - - Return traffic follows similar steps in reverse. - -.. image:: figures/scenario-provider-lb-flowns1.png - :alt: Network traffic flow - north/south - -Case 2: East-west for instances on different networks ------------------------------------------------------ - -The physical network infrastructure handles routing between the provider -networks. - -* Provider network 1 - - * Network: 192.0.2.0/24 - * Gateway: 192.0.2.1 with MAC address *TG1* - -* Provider network 2 - - * Network: 198.51.100.0/24 - * Gateway: 198.51.100.1 with MAC address *TG2* - -* Compute node 1 - - * Instance 1: 192.0.2.11 with MAC address *I1* - -* Compute node 2 - - * Instance 2: 198.51.100.11 with MAC address *I2* - -* Instance 1 resides on compute node 1 and uses provider network 1. -* Instance 2 resides on compute node 2 and uses provider network 2. -* Instance 1 sends a packet to instance 2. - -The following steps involve compute node 1: - -#. The instance 1 ``tap`` interface forwards the packet to the provider - bridge ``qbr``. The packet contains destination MAC address *TG1* - because the destination resides on another network. -#. Security group rules on the provider bridge ``qbr`` handle firewalling - and state tracking for the packet. -#. The provider bridge ``qbr`` forwards the packet to the logical VLAN - interface ``device.sid`` where *device* references the underlying - physical provider interface and *sid* contains the provider network - segmentation ID. -#. The logical VLAN interface ``device.sid`` forwards the packet to the - physical network infrastructure via the physical provider interface. - -The following steps involve the physical network infrastructure: - -#. A switch (3) handles any VLAN tag operations between provider network 1 - and the router (4). -#. A router (4) routes the packet from provider network 1 to provider - network 2. -#. A switch (3) handles any VLAN tag operations between the router (4) and - provider network 2. -#. A switch (3) forwards the packet to compute node 2. - -The following steps involve compute node 2: - -#. The physical provider interface forwards the packet to the logical VLAN - interface ``device.sid`` where *device* references the underlying - physical provider interface and *sid* contains the provider network - segmentation ID. -#. The logical VLAN interface ``device.sid`` forwards the packet to the - provider bridge ``qbr``. -#. Security group rules (5) on the provider bridge ``qbr`` handle - firewalling and state tracking for the packet. -#. The provider bridge ``qbr`` forwards the packet to the ``tap`` interface (6) - on instance 2. - -.. note:: - - Return traffic follows similar steps in reverse. - -.. image:: figures/scenario-provider-lb-flowew1.png - :alt: Network traffic flow - east/west for instances on different networks - -Case 3: East-west for instances on the same network ---------------------------------------------------- - -The physical network infrastructure handles switching within the provider -network. - -* Provider network - - * Network: 192.0.2.0/24 - -* Compute node 1 - - * Instance 1: 192.0.2.11 with MAC address *I1* - -* Compute node 2 - - * Instance 2: 192.0.2.12 with MAC address *I2* - -* Instance 1 resides on compute node 1. -* Instance 2 resides on compute node 2. -* Both instances use the same provider network. -* Instance 1 sends a packet to instance 2. - -The following steps involve compute node 1: - -#. 
The instance 1 ``tap`` interface (1) forwards the packet to the provider - bridge ``qbr``. The packet contains destination MAC address *I2* - because the destination resides on the same network. -#. Security group rules (2) on the provider bridge ``qbr`` handle firewalling - and state tracking for the packet. -#. The provider bridge ``qbr`` forwards the packet to the logical VLAN - interface ``device.sid`` where *device* references the underlying - physical provider interface and *sid* contains the provider network - segmentation ID. -#. The logical VLAN interface ``device.sid`` forwards the packet to the - physical network infrastructure via the physical provider interface. - -The following steps involve the physical network infrastructure: - -#. A switch (3) forwards the packet from compute node 1 to compute node 2. - -The following steps involve compute node 2: - -#. The physical provider interface forwards the packet to the logical VLAN - interface ``device.sid`` where *device* references the underlying - physical provider interface and *sid* contains the provider network - segmentation ID. -#. The logical VLAN interface ``device.sid`` forwards the packet to the - provider bridge ``qbr``. -#. Security group rules (4) on the provider bridge ``qbr`` handle - firewalling and state tracking for the packet. -#. The provider bridge ``qbr`` forwards the packet to the instance 2 ``tap`` - interface (5). - -.. note:: - - Return traffic follows similar steps in reverse. - -.. image:: figures/scenario-provider-lb-flowew2.png - :alt: Network traffic flow - east/west for instances on the same network - -Example configuration -~~~~~~~~~~~~~~~~~~~~~ - -Use the following example configuration as a template to deploy this -scenario in your environment. - -.. note:: - - To further simplify this scenario, we recommend using a configuration drive - rather than the conventional metadata agent to provide instance metadata. - -Controller node ---------------- - -#. In the ``neutron.conf`` file: - - * Configure common options: - - .. code-block:: ini - - [DEFAULT] - core_plugin = ml2 - service_plugins = - - .. note:: - - The ``service_plugins`` option contains no value because the - Networking service does not provide layer-3 services such as - routing. However, this breaks portions of the dashboard that - manage the Networking service. See the - `Installation Guide `__ - for more information. - - * If necessary, :ref:`configure MTU `. - -#. In the ``ml2_conf.ini`` file: - - * Configure drivers and network types: - - .. code-block:: ini - - [ml2] - type_drivers = flat,vlan - tenant_network_types = - mechanism_drivers = linuxbridge - extension_drivers = port_security - - * Configure network mappings: - - .. code-block:: ini - - [ml2_type_flat] - flat_networks = provider - - [ml2_type_vlan] - network_vlan_ranges = provider - - .. note:: - - The ``tenant_network_types`` option contains no value because the - architecture does not support project (private) networks. - - .. note:: - - The ``provider`` value in the ``network_vlan_ranges`` option lacks VLAN - ID ranges to support use of arbitrary VLAN IDs. - - * Configure the security group driver: - - .. code-block:: ini - - [securitygroup] - firewall_driver = iptables - -#. In the ``linuxbridge_agent.ini`` file, configure the Linux bridge agent: - - .. 
code-block:: ini - - [linux_bridge] - physical_interface_mappings = provider:PROVIDER_INTERFACE - - [vxlan] - enable_vxlan = False - - [securitygroup] - firewall_driver = iptables - - Replace ``PROVIDER_INTERFACE`` with the name of the underlying interface - that handles provider networks. For example, ``eth1``. - -#. In the ``dhcp_agent.ini`` file, configure the DHCP agent: - - .. code-block:: ini - - [DEFAULT] - interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver - enable_isolated_metadata = True - -#. Start the following services: - - * Server - * Linux bridge agent - * DHCP agent - -Compute nodes -------------- - -#. In the ``linuxbridge_agent.ini`` file, configure the Linux bridge agent: - - .. code-block:: ini - - [linux_bridge] - physical_interface_mappings = provider:PROVIDER_INTERFACE - - [vxlan] - enable_vxlan = False - - [securitygroup] - firewall_driver = iptables - enable_security_group = True - - Replace ``PROVIDER_INTERFACE`` with the name of the underlying interface - that handles provider networks. For example, ``eth1``. - -#. Start the following services: - - * Linux bridge agent - -Verify service operation ------------------------- - -#. Source the administrative project credentials. -#. Verify presence and operation of the agents: - - .. code-block:: console - - $ neutron agent-list - - +--------------------------------------+--------------------+------------+-------+----------------+---------------------------+ - | id | agent_type | host | alive | admin_state_up | binary | - +--------------------------------------+--------------------+------------+-------+----------------+---------------------------+ - | 09de6af6-c5f1-4548-8b09-18801f068c57 | Linux bridge agent | compute2 | :-) | True | neutron-linuxbridge-agent | - | 188945d1-9e70-4803-a276-df924e0788a4 | Linux bridge agent | compute1 | :-) | True | neutron-linuxbridge-agent | - | e76c440d-d5f6-4316-a674-d689630b629e | DHCP agent | controller | :-) | True | neutron-dhcp-agent | - | e9901853-6687-45b1-8a92-3712bdec0416 | Linux bridge agent | controller | :-) | True | neutron-linuxbridge-agent | - +--------------------------------------+--------------------+------------+-------+----------------+---------------------------+ - -Create initial networks ------------------------ - -This example creates a VLAN provider network. Change the VLAN ID and IP -address range to values suitable for your environment. - -#. Source the administrative project credentials. -#. Create a provider network: - - .. code-block:: console - - $ neutron net-create provider-101 --shared \ - --provider:physical_network provider --provider:network_type vlan \ - --provider:segmentation_id 101 - - Created a new network: - +---------------------------+--------------------------------------+ - | Field | Value | - +---------------------------+--------------------------------------+ - | admin_state_up | True | - | id | 572a3fc9-ad1f-4e54-a63a-4bf5047c1a4a | - | name | provider-101 | - | provider:network_type | vlan | - | provider:physical_network | provider | - | provider:segmentation_id | 101 | - | router:external | False | - | shared | True | - | status | ACTIVE | - | subnets | | - | tenant_id | e0bddbc9210d409795887175341b7098 | - +---------------------------+--------------------------------------+ - - .. note:: - - The ``shared`` option allows any project to use this network. - -#. Create a subnet on the provider network: - - .. 
code-block:: console - - $ neutron subnet-create provider-101 203.0.113.0/24 \ - --name provider-101-subnet --gateway 203.0.113.1 - - Created a new subnet: - +-------------------+--------------------------------------------------+ - | Field | Value | - +-------------------+--------------------------------------------------+ - | allocation_pools | {"start": "203.0.113.2", "end": "203.0.113.254"} | - | cidr | 203.0.113.0/24 | - | dns_nameservers | | - | enable_dhcp | True | - | gateway_ip | 203.0.113.1 | - | host_routes | | - | id | ff6c9a0b-0c81-4ce4-94e6-c6617a059bab | - | ip_version | 4 | - | ipv6_address_mode | | - | ipv6_ra_mode | | - | name | provider-101-subnet | - | network_id | 572a3fc9-ad1f-4e54-a63a-4bf5047c1a4a | - | tenant_id | e0bddbc9210d409795887175341b7098 | - +-------------------+--------------------------------------------------+ - -Verify network operation ------------------------- - -#. On the controller node, verify creation of the ``qdhcp`` namespace: - - .. code-block:: console - - $ ip netns - qdhcp-8b868082-e312-4110-8627-298109d4401c - - .. note:: - - The ``qdhcp`` namespace might not exist until launching an instance. - -#. Source the regular project credentials. The following steps use the - ``demo`` project. -#. Create the appropriate security group rules to allow ping and SSH - access to the instance. For example: - - .. code-block:: console - - $ nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0 - - +-------------+-----------+---------+-----------+--------------+ - | IP Protocol | From Port | To Port | IP Range | Source Group | - +-------------+-----------+---------+-----------+--------------+ - | icmp | -1 | -1 | 0.0.0.0/0 | | - +-------------+-----------+---------+-----------+--------------+ - - $ nova secgroup-add-rule default tcp 22 22 0.0.0.0/0 - - +-------------+-----------+---------+-----------+--------------+ - | IP Protocol | From Port | To Port | IP Range | Source Group | - +-------------+-----------+---------+-----------+--------------+ - | tcp | 22 | 22 | 0.0.0.0/0 | | - +-------------+-----------+---------+-----------+--------------+ - -#. Launch an instance with an interface on the provider network. - - .. note:: - - This example uses a CirrOS image that was manually uploaded into the Image service - - .. 
code-block:: console - - $ nova boot --flavor m1.tiny --image cirros-0.3.3-x86_64-disk \ - --nic net-id=572a3fc9-ad1f-4e54-a63a-4bf5047c1a4a test_server - - +--------------------------------------+-----------------------------------------------------------------+ - | Property | Value | - +--------------------------------------+-----------------------------------------------------------------+ - | OS-DCF:diskConfig | MANUAL | - | OS-EXT-AZ:availability_zone | nova | - | OS-EXT-SRV-ATTR:host | - | - | OS-EXT-SRV-ATTR:hypervisor_hostname | - | - | OS-EXT-SRV-ATTR:instance_name | instance-00000001 | - | OS-EXT-STS:power_state | 0 | - | OS-EXT-STS:task_state | scheduling | - | OS-EXT-STS:vm_state | building | - | OS-SRV-USG:launched_at | - | - | OS-SRV-USG:terminated_at | - | - | accessIPv4 | | - | accessIPv6 | | - | adminPass | h7CkMdkRXuuh | - | config_drive | | - | created | 2015-07-22T20:40:16Z | - | flavor | m1.tiny (1) | - | hostId | | - | id | dee2a9f4-e24c-444d-8c94-386f11f74af5 | - | image | cirros-0.3.3-x86_64-disk (2b6bb38f-f69f-493c-a1c0-264dfd4188d8) | - | key_name | - | - | metadata | {} | - | name | test_server | - | os-extended-volumes:volumes_attached | [] | - | progress | 0 | - | security_groups | default | - | status | BUILD | - | tenant_id | 5f2db133e98e4bc2999ac2850ce2acd1 | - | updated | 2015-07-22T20:40:16Z | - | user_id | ea417ebfa86741af86f84a5dbcc97cd2 | - +--------------------------------------+-----------------------------------------------------------------+ - -#. Determine the IP address of the instance. The following step uses - 203.0.113.3. - - .. code-block:: console - - $ nova list - - +--------------------------------------+-------------+--------+------------+-------------+--------------------------+ - | ID | Name | Status | Task State | Power State | Networks | - +--------------------------------------+-------------+--------+------------+-------------+--------------------------+ - | dee2a9f4-e24c-444d-8c94-386f11f74af5 | test_server | ACTIVE | - | Running | provider-101=203.0.113.3 | - +--------------------------------------+-------------+--------+------------+-------------+--------------------------+ - - -#. On the controller node or any host with access to the provider network, - ping the IP address of the instance: - - .. code-block:: console - - $ ping -c 4 203.0.113.3 - PING 203.0.113.3 (203.0.113.3) 56(84) bytes of data. - 64 bytes from 203.0.113.3: icmp_req=1 ttl=63 time=3.18 ms - 64 bytes from 203.0.113.3: icmp_req=2 ttl=63 time=0.981 ms - 64 bytes from 203.0.113.3: icmp_req=3 ttl=63 time=1.06 ms - 64 bytes from 203.0.113.3: icmp_req=4 ttl=63 time=0.929 ms - - --- 203.0.113.3 ping statistics --- - 4 packets transmitted, 4 received, 0% packet loss, time 3002ms - rtt min/avg/max/mdev = 0.929/1.539/3.183/0.951 ms - -#. Obtain access to the instance. -#. Test connectivity to the Internet: - - .. code-block:: console - - $ ping -c 4 openstack.org - PING openstack.org (174.143.194.225) 56(84) bytes of data. 
- 64 bytes from 174.143.194.225: icmp_req=1 ttl=53 time=17.4 ms - 64 bytes from 174.143.194.225: icmp_req=2 ttl=53 time=17.5 ms - 64 bytes from 174.143.194.225: icmp_req=3 ttl=53 time=17.7 ms - 64 bytes from 174.143.194.225: icmp_req=4 ttl=53 time=17.5 ms - - --- openstack.org ping statistics --- - 4 packets transmitted, 4 received, 0% packet loss, time 3003ms - rtt min/avg/max/mdev = 17.431/17.575/17.734/0.143 ms diff --git a/doc/networking-guide/source/scenario-provider-ovs.rst b/doc/networking-guide/source/scenario-provider-ovs.rst deleted file mode 100644 index 386d38de31..0000000000 --- a/doc/networking-guide/source/scenario-provider-ovs.rst +++ /dev/null @@ -1,746 +0,0 @@ -.. _scenario_provider_ovs: - -============================================= -Scenario: Provider networks with Open vSwitch -============================================= - -This scenario describes a provider networks implementation of the -OpenStack Networking service using the ML2 plug-in with Open vSwitch (OVS). - -Provider networks generally offer simplicity, performance, and reliability at -the cost of flexibility. Unlike other scenarios, only administrators can -manage provider networks because they require configuration of physical -network infrastructure. Also, provider networks lack the concept of fixed -and floating IP addresses because they only handle layer-2 connectivity for -instances. - -In many cases, operators who are already familiar with network architectures -that rely on the physical network infrastructure can easily deploy OpenStack -Networking on it. Over time, operators can test and implement cloud -networking features in their environment. - -Before OpenStack Networking introduced Distributed Virtual Routers (DVR), all -network traffic traversed one or more dedicated network nodes, which limited -performance and reliability. Physical network infrastructures typically offer -better performance and reliability than general-purpose hosts that handle -various network operations in software. - -In general, the OpenStack Networking software components that handle layer-3 -operations impact performance and reliability the most. To improve performance -and reliability, provider networks move layer-3 operations to the physical -network infrastructure. - -In one particular use case, the OpenStack deployment resides in a mixed -environment with conventional virtualization and bare-metal hosts that use a -sizable physical network infrastructure. Applications that run inside the -OpenStack deployment might require direct layer-2 access, typically using -VLANs, to applications outside of the deployment. - -The example configuration creates a VLAN provider network. However, it also -supports flat (untagged or native) provider networks. - -Prerequisites -~~~~~~~~~~~~~ - -These prerequisites define the minimum physical infrastructure and OpenStack -service dependencies that you need to deploy this scenario. For example, the -Networking service immediately depends on the Identity service and the Compute -service immediately depends on the Networking service. These dependencies lack -services such as the Image service because the Networking service does not -immediately depend on it. However, the Compute service depends on the Image -service to launch an instance. The example configuration in this scenario -assumes basic configuration knowledge of Networking service components. 
- -For illustration purposes, the management network uses 10.0.0.0/24 and -provider networks use 192.0.2.0/24, 198.51.100.0/24, and 203.0.113.0/24. - -Infrastructure --------------- - -#. One controller node with two network interfaces: management and - provider. The provider interface connects to a generic network that - physical network infrastructure switches/routes to external networks - (typically the Internet). The Open vSwitch bridge ``br-provider`` - must contain a port on the provider network interface. -#. At least two compute nodes with two network interfaces: management - and provider. The provider interface connects to a generic network that - the physical network infrastructure switches/routes to external networks - (typically the Internet). The Open vSwitch bridge ``br-provider`` - must contain a port on the provider network interface. - -.. figure:: figures/scenario-provider-hw.png - :alt: Hardware layout - -.. figure:: figures/scenario-provider-networks.png - :alt: Network layout - -.. image:: figures/scenario-provider-ovs-services.png - :alt: Service layout - -.. warning:: - - Linux distributions often package older releases of Open vSwitch that can - introduce issues during operation with the Networking service. We recommend - using at least the latest long-term stable (LTS) release of Open vSwitch - for the best experience and support from Open vSwitch. See - ``__ for available releases and the - `installation instructions - `__ for - building newer releases from source on various distributions. - -OpenStack services - controller node ------------------------------------- - -#. Operational SQL server with ``neutron`` database and appropriate - configuration in the ``neutron.conf`` file. -#. Operational message queue service with appropriate configuration in - the ``neutron.conf`` file. -#. Operational OpenStack Identity service with appropriate - configuration in the ``neutron.conf`` file. -#. Operational OpenStack Compute controller/management service with - appropriate configuration to use neutron in the ``nova.conf`` file. -#. Neutron server service, Open vSwitch service, ML2 plug-in, Open - vSwitch agent, DHCP agent, and any dependencies. - -OpenStack services - compute nodes ----------------------------------- - -#. Operational OpenStack Identity service with appropriate - configuration in the ``neutron.conf`` file. -#. Operational OpenStack Compute controller/management service with - appropriate configuration to use neutron in the ``nova.conf`` file. -#. Open vSwitch service, Open vSwitch agent, and any dependencies. - -Architecture -~~~~~~~~~~~~ - -The general provider network architecture uses physical network -infrastructure to handle switching and routing of network traffic. - -.. figure:: figures/scenario-provider-general.png - :alt: Architecture overview - -The controller node contains the following network components: - -#. Open vSwitch agent managing virtual switches, connectivity among - them, and interaction via virtual ports with other network components - such as namespaces and underlying interfaces. -#. DHCP agent managing the ``qdhcp`` namespaces. The ``qdhcp`` namespaces - provide DHCP services for instances using provider networks. - -.. figure:: figures/scenario-provider-ovs-controller1.png - :alt: Controller node components - overview - -.. figure:: figures/scenario-provider-ovs-controller2.png - :alt: Controller node components - connectivity - -.. note:: - - For illustration purposes, the diagram contains two different provider - networks. 
- -The compute nodes contain the following network components: - -#. Open vSwitch agent managing virtual switches, connectivity among - them, and interaction via virtual ports with other network components - such as Linux bridges and underlying interfaces. -#. Conventional Linux bridges handling security groups. Optionally, a native - OVS implementation can handle security groups. However, due to kernel and - OVS version requirements for it, this scenario uses conventional Linux - bridges. See :ref:`config-ovsfwdriver` for more information. - -.. figure:: figures/scenario-provider-ovs-compute1.png - :alt: Compute node components - overview - -.. figure:: figures/scenario-provider-ovs-compute2.png - :alt: Compute node components - connectivity - -.. note:: - - For illustration purposes, the diagram contains two different provider - networks. - -Packet flow -~~~~~~~~~~~ - -.. note:: - - *North-south* network traffic travels between an instance and - external network, typically the Internet. *East-west* network - traffic travels between instances. - -.. note:: - - Open vSwitch uses VLANs internally to segregate networks that traverse - bridges. The VLAN ID usually differs from the segmentation ID of the - virtual network. - -Case 1: North-south -------------------- - -The physical network infrastructure handles routing and potentially other -services between the provider and external network. In this case, *provider* -and *external* simply differentiate between a network available to instances -and a network only accessible via router, respectively, to illustrate that -the physical network infrastructure handles routing. However, provider -networks support direct connection to *external* networks such as the -Internet. - -* External network - - * Network 203.0.113.0/24 - -* Provider network (VLAN) - - * Network 192.0.2.0/24 - * Gateway 192.0.2.1 with MAC address *TG* - -* Compute node 1 - - * Instance 1 192.0.2.11 with MAC address *I1* - -* Instance 1 resides on compute node 1 and uses a provider network. -* The instance sends a packet to a host on the external network. - -The following steps involve compute node 1. - -#. The instance 1 ``tap`` interface (1) forwards the packet to the Linux - bridge ``qbr``. The packet contains destination MAC address *TG* - because the destination resides on another network. -#. Security group rules (2) on the Linux bridge ``qbr`` handle firewalling - and state tracking for the packet. -#. The Linux bridge ``qbr`` forwards the packet to the Open vSwitch integration - bridge ``br-int``. -#. The Open vSwitch integration bridge ``br-int`` adds the internal tag for - the provider network. -#. The Open vSwitch integration bridge ``br-int`` forwards the packet to the - Open vSwitch provider bridge ``br-provider``. -#. The Open vSwitch provider bridge ``br-provider`` replaces the internal tag - with the actual VLAN tag (segmentation ID) of the provider network. -#. The Open vSwitch provider bridge ``br-provider`` forwards the packet to the - physical network via the provider network interface. - -The following steps involve the physical network infrastructure: - -#. A switch (3) handles any VLAN tag operations between provider network 1 - and the router (4). -#. A router (4) routes the packet from provider network 1 to the external - network. -#. A switch (3) handles any VLAN tag operations between the router (4) and - the external network. -#. A switch (3) forwards the packet to the external network. - -.. 
note:: - - Return traffic follows similar steps in reverse. - -.. figure:: figures/scenario-provider-ovs-flowns1.png - :alt: Network traffic flow - north/south - -Case 2: East-west for instances on different networks ------------------------------------------------------ - -The physical network infrastructure handles routing between the provider -networks. - -* Provider network 1 - - * Network: 192.0.2.0/24 - * Gateway: 192.0.2.1 with MAC address *TG1* - -* Provider network 2 - - * Network: 198.51.100.0/24 - * Gateway: 198.51.100.1 with MAC address *TG2* - -* Compute node 1 - - * Instance 1: 192.0.2.11 with MAC address *I1* - -* Compute node 2 - - * Instance 2: 198.51.100.11 with MAC address *I2* - -* Instance 1 resides on compute node 1 and uses provider network 1. -* Instance 2 resides on compute node 2 and uses provider network 2. -* Instance 1 sends a packet to instance 2. - -The following steps involve compute node 1: - -#. The instance 1 ``tap`` interface (1) forwards the packet to the Linux - bridge ``qbr``. The packet contains destination MAC address *TG1* - because the destination resides on another network. -#. Security group rules (2) on the Linux bridge ``qbr`` handle firewalling - and state tracking for the packet. -#. The Linux bridge ``qbr`` forwards the packet to the Open vSwitch - integration bridge ``br-int``. -#. The Open vSwitch integration bridge ``br-int`` adds the internal tag for - provider network 1. -#. The Open vSwitch integration bridge ``br-int`` forwards the packet to - the Open vSwitch provider bridge ``br-provider``. -#. The Open vSwitch provider bridge ``br-provider`` replaces the internal tag - with the actual VLAN tag (segmentation ID) of provider network 1. -#. The Open vSwitch VLAN bridge ``br-provider`` forwards the packet to the - physical network infrastructure via the provider network interface. - -The following steps involve the physical network infrastructure: - -#. A switch (3) handles any VLAN tag operations between provider network 1 - and the router (4). -#. A router (4) routes the packet from provider network 1 to provider - network 2. -#. A switch (3) handles any VLAN tag operations between the router (4) and - provider network 2. -#. A switch (3) forwards the packet to compute node 2. - -The following steps involve compute node 2: - -#. The provider network interface forwards the packet to the Open vSwitch - provider bridge ``br-provider``. -#. The Open vSwitch provider bridge ``br-provider`` forwards the packet to the - Open vSwitch integration bridge ``br-int``. -#. The Open vSwitch integration bridge ``br-int`` replaces the actual - VLAN tag (segmentation ID) of provider network 2 with the internal tag. -#. The Open vSwitch integration bridge ``br-int`` forwards the packet to - the Linux bridge ``qbr``. -#. Security group rules (5) on the Linux bridge ``qbr`` handle firewalling - and state tracking for the packet. -#. The Linux bridge ``qbr`` forwards the packet to the ``tap`` interface (6) - on instance 2. - -.. note:: - - Return traffic follows similar steps in reverse. - -.. image:: figures/scenario-provider-ovs-flowew1.png - :alt: Network traffic flow - east/west for instances on different networks - -Case 3: East-west for instances on the same network ---------------------------------------------------- - -The physical network infrastructure handles switching within the provider -network. 
- -* Provider network - - * Network: 192.0.2.0/24 - -* Compute node 1 - - * Instance 1: 192.0.2.11 with MAC address *I1* - -* Compute node 2 - - * Instance 2: 192.0.2.12 with MAC address *I2* - -* Instance 1 resides on compute node 1. -* Instance 2 resides on compute node 2. -* Both instances use the same provider network. -* Instance 1 sends a packet to instance 2. - -The following steps involve compute node 1: - -#. The instance 1 ``tap`` interface (1) forwards the packet to the Linux - bridge ``qbr``. The packet contains destination MAC address *I2* - because the destination resides on the same network. -#. Security group rules (2) on the Linux bridge ``qbr`` handle firewalling - and state tracking for the packet. -#. The Linux bridge ``qbr`` forwards the packet to the Open vSwitch - integration bridge ``br-int``. -#. The Open vSwitch integration bridge ``br-int`` adds the internal tag for - the provider network. -#. The Open vSwitch integration bridge ``br-int`` forwards the packet to - the Open vSwitch provider bridge ``br-provider``. -#. The Open vSwitch provider bridge ``br-provider`` replaces the internal tag - with the actual VLAN tag (segmentation ID) of the provider network. -#. The Open vSwitch VLAN bridge ``br-provider`` forwards the packet to the - physical network infrastructure via the provider network interface. - -The following steps involve the physical network infrastructure: - -#. A switch (3) forwards the packet from compute node 1 to compute node 2. - -The following steps involve compute node 2: - -#. The provider network interface forwards the packet to the Open vSwitch - provider bridge ``br-provider``. -#. The Open vSwitch provider bridge ``br-provider`` forwards the packet to the - Open vSwitch integration bridge ``br-int``. -#. The Open vSwitch integration bridge ``br-int`` replaces the actual - VLAN tag (segmentation ID) of provider network 1 with the internal tag. -#. The Open vSwitch integration bridge ``br-int`` forwards the packet to - the Linux bridge ``qbr``. -#. Security group rules (4) on the Linux bridge ``qbr`` handle firewalling - and state tracking for the packet. -#. The Linux bridge ``qbr`` forwards the packet to the ``tap`` interface (5) - on instance 2. - -.. note:: - - Return traffic follows similar steps in reverse. - -.. image:: figures/scenario-provider-ovs-flowew2.png - :alt: Network traffic flow - east/west for instances on the same network - -Example configuration -~~~~~~~~~~~~~~~~~~~~~ - -Use the following example configuration as a template to deploy this -scenario in your environment. - -.. note:: - - To further simplify this scenario, we recommend using a configuration drive - rather than the conventional metadata agent to provide instance metadata. - -Controller node ---------------- - -#. In the ``neutron.conf`` file: - - * Configure common options: - - .. code-block:: ini - - [DEFAULT] - core_plugin = ml2 - service_plugins = - - .. note:: - - The ``service_plugins`` option contains no value because the - Networking service does not provide layer-3 services such as - routing. However, this breaks portions of the dashboard that - manage the Networking service. See the - `Installation Guide `__ - for more information. - - * If necessary, :ref:`configure MTU `. - -#. In the ``ml2_conf.ini`` file: - - * Configure drivers and network types: - - .. code-block:: ini - - [ml2] - type_drivers = flat,vlan - tenant_network_types = - mechanism_drivers = openvswitch - extension_drivers = port_security - - * Configure network mappings: - - .. 
code-block:: ini - - [ml2_type_flat] - flat_networks = provider - - [ml2_type_vlan] - network_vlan_ranges = provider - - .. note:: - - The ``tenant_network_types`` option contains no value because the - architecture does not support project (private) networks. - - .. note:: - - The ``provider`` value in the ``network_vlan_ranges`` option lacks VLAN - ID ranges to support use of arbitrary VLAN IDs. - - * Configure the security group driver: - - .. code-block:: ini - - [securitygroup] - firewall_driver = iptables_hybrid - -#. In the ``openvswitch_agent.ini`` file, configure the Open vSwitch agent: - - .. code-block:: ini - - [ovs] - bridge_mappings = provider:br-provider - - [securitygroup] - firewall_driver = iptables_hybrid - -#. In the ``dhcp_agent.ini`` file, configure the DHCP agent: - - .. code-block:: ini - - [DEFAULT] - interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver - enable_isolated_metadata = True - -#. Start the following service: - - * Open vSwitch - -#. Create the Open vSwitch provider bridge ``br-provider``: - - .. code-block:: console - - $ ovs-vsctl add-br br-provider - -#. Add the provider network interface as a port on the Open vSwitch provider - bridge ``br-provider``: - - .. code-block:: console - - $ ovs-vsctl add-port br-provider PROVIDER_INTERFACE - - Replace ``PROVIDER_INTERFACE`` with the name of the underlying interface - that handles provider networks. For example, ``eth1``. - -#. Start the following services: - - * Server - * Open vSwitch agent - * DHCP agent - -Compute nodes -------------- - -#. In the ``openvswitch_agent.ini`` file, configure the Open vSwitch agent: - - .. code-block:: ini - - [ovs] - bridge_mappings = provider:br-provider - - [securitygroup] - firewall_driver = iptables_hybrid - -#. Start the following service: - - * Open vSwitch - -#. Create the Open vSwitch provider bridge ``br-provider``: - - .. code-block:: console - - $ ovs-vsctl add-br br-provider - -#. Add the provider network interface as a port on the Open vSwitch provider - bridge ``br-provider``: - - .. code-block:: console - - $ ovs-vsctl add-port br-provider PROVIDER_INTERFACE - - Replace ``PROVIDER_INTERFACE`` with the name of the underlying interface - that handles provider networks. For example, ``eth1``. - -#. Start the following services: - - * Open vSwitch agent - -Verify service operation ------------------------- - -#. Source the administrative project credentials. -#. Verify presence and operation of the agents: - - .. 
code-block:: console - - $ neutron agent-list - - +--------------------------------------+--------------------+------------+-------+----------------+---------------------------+ - | id | agent_type | host | alive | admin_state_up | binary | - +--------------------------------------+--------------------+------------+-------+----------------+---------------------------+ - | 09de6af6-c5f1-4548-8b09-18801f068c57 | Open vSwitch agent | controller | :-) | True | neutron-openvswitch-agent | - | 1c5eca1c-3672-40ae-93f1-6bde214fa303 | DHCP agent | controller | :-) | True | neutron-dhcp-agent | - | 6129b1ec-9946-4ec5-a4bd-460ca83a40cb | Open vSwitch agent | compute1 | :-) | True | neutron-openvswitch-agent | - | 8a3fc26a-9268-416d-9d29-6d44f0e4a24f | Open vSwitch agent | compute2 | :-) | True | neutron-openvswitch-agent | - +--------------------------------------+--------------------+------------+-------+----------------+---------------------------+ - -Create initial networks ------------------------ - -This example creates a VLAN provider network. Change the VLAN ID and IP -address range to values suitable for your environment. - -#. Source the administrative project credentials. -#. Create a provider network: - - .. code-block:: console - - $ neutron net-create provider-101 --shared \ - --provider:physical_network provider --provider:network_type vlan \ - --provider:segmentation_id 101 - - Created a new network: - +---------------------------+--------------------------------------+ - | Field | Value | - +---------------------------+--------------------------------------+ - | admin_state_up | True | - | id | 8b868082-e312-4110-8627-298109d4401c | - | name | provider-101 | - | provider:network_type | vlan | - | provider:physical_network | provider | - | provider:segmentation_id | 101 | - | router:external | False | - | shared | True | - | status | ACTIVE | - | subnets | | - | tenant_id | e0bddbc9210d409795887175341b7098 | - +---------------------------+--------------------------------------+ - - .. note:: - - The ``shared`` option allows any project to use this network. - -#. Create a subnet on the provider network: - - .. code-block:: console - - $ neutron subnet-create provider-101 203.0.113.0/24 \ - --name provider-101-subnet --gateway 203.0.113.1 - - Created a new subnet: - +-------------------+--------------------------------------------------+ - | Field | Value | - +-------------------+--------------------------------------------------+ - | allocation_pools | {"start": "203.0.113.2", "end": "203.0.113.254"} | - | cidr | 203.0.113.0/24 | - | dns_nameservers | | - | enable_dhcp | True | - | gateway_ip | 203.0.113.1 | - | host_routes | | - | id | 0443aeb0-1c6b-4d95-a464-c551c47a0a80 | - | ip_version | 4 | - | ipv6_address_mode | | - | ipv6_ra_mode | | - | name | provider-101-subnet | - | network_id | 8b868082-e312-4110-8627-298109d4401c | - | tenant_id | e0bddbc9210d409795887175341b7098 | - +-------------------+--------------------------------------------------+ - -Verify network operation ------------------------- - -#. On the controller node, verify creation of the ``qdhcp`` namespace: - - .. code-block:: console - - $ ip netns - qdhcp-8b868082-e312-4110-8627-298109d4401c - - .. note:: - - The ``qdhcp`` namespace might not exist until launching an instance. - -#. Source the regular project credentials. The following steps use the - ``demo`` project. -#. Create the appropriate security group rules to allow ping and SSH - access to the instance. For example: - - .. 
code-block:: console - - $ nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0 - - +-------------+-----------+---------+-----------+--------------+ - | IP Protocol | From Port | To Port | IP Range | Source Group | - +-------------+-----------+---------+-----------+--------------+ - | icmp | -1 | -1 | 0.0.0.0/0 | | - +-------------+-----------+---------+-----------+--------------+ - - $ nova secgroup-add-rule default tcp 22 22 0.0.0.0/0 - - +-------------+-----------+---------+-----------+--------------+ - | IP Protocol | From Port | To Port | IP Range | Source Group | - +-------------+-----------+---------+-----------+--------------+ - | tcp | 22 | 22 | 0.0.0.0/0 | | - +-------------+-----------+---------+-----------+--------------+ - -#. Launch an instance with an interface on the provider network. - - .. note:: - - This example uses a CirrOS image that was manually uploaded into the Image service - - .. code-block:: console - - $ nova boot --flavor m1.tiny --image cirros-0.3.3-x86_64-disk test_server - - +--------------------------------------+-----------------------------------------------------------------+ - | Property | Value | - +--------------------------------------+-----------------------------------------------------------------+ - | OS-DCF:diskConfig | MANUAL | - | OS-EXT-AZ:availability_zone | nova | - | OS-EXT-SRV-ATTR:host | - | - | OS-EXT-SRV-ATTR:hypervisor_hostname | - | - | OS-EXT-SRV-ATTR:instance_name | instance-00000001 | - | OS-EXT-STS:power_state | 0 | - | OS-EXT-STS:task_state | scheduling | - | OS-EXT-STS:vm_state | building | - | OS-SRV-USG:launched_at | - | - | OS-SRV-USG:terminated_at | - | - | accessIPv4 | | - | accessIPv6 | | - | adminPass | h7CkMdkRXuuh | - | config_drive | | - | created | 2015-07-22T20:40:16Z | - | flavor | m1.tiny (1) | - | hostId | | - | id | dee2a9f4-e24c-444d-8c94-386f11f74af5 | - | image | cirros-0.3.3-x86_64-disk (2b6bb38f-f69f-493c-a1c0-264dfd4188d8) | - | key_name | - | - | metadata | {} | - | name | test_server | - | os-extended-volumes:volumes_attached | [] | - | progress | 0 | - | security_groups | default | - | status | BUILD | - | tenant_id | 5f2db133e98e4bc2999ac2850ce2acd1 | - | updated | 2015-07-22T20:40:16Z | - | user_id | ea417ebfa86741af86f84a5dbcc97cd2 | - +--------------------------------------+-----------------------------------------------------------------+ - -#. Determine the IP address of the instance. The following step uses - 203.0.113.3. - - .. code-block:: console - - $ nova list - - +--------------------------------------+-------------+--------+------------+-------------+--------------------------+ - | ID | Name | Status | Task State | Power State | Networks | - +--------------------------------------+-------------+--------+------------+-------------+--------------------------+ - | dee2a9f4-e24c-444d-8c94-386f11f74af5 | test_server | ACTIVE | - | Running | provider-101=203.0.113.3 | - +--------------------------------------+-------------+--------+------------+-------------+--------------------------+ - - -#. On the controller node or any host with access to the provider network, - ping the IP address of the instance: - - .. code-block:: console - - $ ping -c 4 203.0.113.3 - PING 203.0.113.3 (203.0.113.3) 56(84) bytes of data. 
- 64 bytes from 203.0.113.3: icmp_req=1 ttl=63 time=3.18 ms - 64 bytes from 203.0.113.3: icmp_req=2 ttl=63 time=0.981 ms - 64 bytes from 203.0.113.3: icmp_req=3 ttl=63 time=1.06 ms - 64 bytes from 203.0.113.3: icmp_req=4 ttl=63 time=0.929 ms - - --- 203.0.113.3 ping statistics --- - 4 packets transmitted, 4 received, 0% packet loss, time 3002ms - rtt min/avg/max/mdev = 0.929/1.539/3.183/0.951 ms - -#. Obtain access to the instance. -#. Test connectivity to the Internet: - - .. code-block:: console - - $ ping -c 4 openstack.org - PING openstack.org (174.143.194.225) 56(84) bytes of data. - 64 bytes from 174.143.194.225: icmp_req=1 ttl=53 time=17.4 ms - 64 bytes from 174.143.194.225: icmp_req=2 ttl=53 time=17.5 ms - 64 bytes from 174.143.194.225: icmp_req=3 ttl=53 time=17.7 ms - 64 bytes from 174.143.194.225: icmp_req=4 ttl=53 time=17.5 ms - - --- openstack.org ping statistics --- - 4 packets transmitted, 4 received, 0% packet loss, time 3003ms - rtt min/avg/max/mdev = 17.431/17.575/17.734/0.143 ms diff --git a/doc/networking-guide/source/shared/deploy-config-neutron-common.txt b/doc/networking-guide/source/shared/deploy-config-neutron-common.txt new file mode 100644 index 0000000000..d8a9879cc3 --- /dev/null +++ b/doc/networking-guide/source/shared/deploy-config-neutron-common.txt @@ -0,0 +1,24 @@ +.. code-block:: ini + + [DEFAULT] + core_plugin = ml2 + auth_strategy = keystone + rpc_backend = rabbit + notify_nova_on_port_status_changes = true + notify_nova_on_port_data_changes = true + + [database] + ... + + [keystone_authtoken] + ... + + [oslo_messaging_rabbit] + ... + + [nova] + ... + +See the `Installation Guide `_ for your OpenStack +release to obtain the appropriate configuration for the ``[database]``, +``[keystone_authtoken]``, ``[oslo_messaging_rabbit]``, and ``[nova]`` sections. diff --git a/doc/networking-guide/source/shared/deploy-ha-vrrp-initialnetworks.txt b/doc/networking-guide/source/shared/deploy-ha-vrrp-initialnetworks.txt new file mode 100644 index 0000000000..11f72c0e36 --- /dev/null +++ b/doc/networking-guide/source/shared/deploy-ha-vrrp-initialnetworks.txt @@ -0,0 +1,130 @@ +Similar to the self-service deployment example, this configuration supports +multiple VXLAN self-service networks. After enabling high-availability, all +additional routers use VRRP. The following procedure creates an additional +self-service network and router. The Networking service also supports adding +high-availability to existing routers. However, the procedure requires +administratively disabling and enabling each router which temporarily +interrupts network connectivity for self-service networks with interfaces +on that router. + +#. Source a regular (non-administrative) project credentials. +#. Create a self-service network. + + .. code-block:: console + + $ neutron net-create selfservice2 + Created a new network: + +-------------------------+--------------------------------------+ + | Field | Value | + +-------------------------+--------------------------------------+ + | admin_state_up | True | + | availability_zone_hints | | + | availability_zones | | + | description | | + | id | 7ebc353c-6c8f-461f-8ada-01b9f14beb18 | + | ipv4_address_scope | | + | ipv6_address_scope | | + | mtu | 1450 | + | name | selfservice2 | + | port_security_enabled | True | + | router:external | False | + | shared | False | + | status | ACTIVE | + | subnets | | + | tags | | + | tenant_id | f986edf55ae945e2bef3cb4bfd589928 | + +-------------------------+--------------------------------------+ + +#. 
Create a IPv4 subnet on the self-service network. + + .. code-block:: console + + $ neutron subnet-create --name selfservice2-v4 --ip-version 4 \ + --dns-nameserver 8.8.4.4 selfservice2 192.168.2.0/24 + Created a new subnet: + +-------------------+--------------------------------------------------+ + | Field | Value | + +-------------------+--------------------------------------------------+ + | allocation_pools | {"start": "192.168.2.2", "end": "192.168.2.254"} | + | cidr | 192.168.2.0/24 | + | description | | + | dns_nameservers | 8.8.4.4 | + | enable_dhcp | True | + | gateway_ip | 192.168.2.1 | + | host_routes | | + | id | 12a41804-18bf-4cec-bde8-174cbdbf1573 | + | ip_version | 4 | + | ipv6_address_mode | | + | ipv6_ra_mode | | + | name | selfservice2-v4 | + | network_id | 7ebc353c-6c8f-461f-8ada-01b9f14beb18 | + | subnetpool_id | | + | tenant_id | f986edf55ae945e2bef3cb4bfd589928 | + +-------------------+--------------------------------------------------+ + +#. Create a IPv6 subnet on the self-service network. + + .. code-block:: console + + $ neutron subnet-create --name selfservice2-v6 --ip-version 6 \ + --ipv6-address-mode slaac --ipv6-ra-mode slaac \ + --dns-nameserver 2001:4860:4860::8844 selfservice2 \ + fd00:192:168:2::/64 + Created a new subnet: + +-------------------+-----------------------------------------------------------------------------+ + | Field | Value | + +-------------------+-----------------------------------------------------------------------------+ + | allocation_pools | {"start": "fd00:192:168:2::2", "end": "fd00:192:168:2:ffff:ffff:ffff:ffff"} | + | cidr | fd00:192:168:2::/64 | + | description | | + | dns_nameservers | 2001:4860:4860::8844 | + | enable_dhcp | True | + | gateway_ip | fd00:192:168:2::1 | + | host_routes | | + | id | b0f122fe-0bf9-4f31-975d-a47e58aa88e3 | + | ip_version | 6 | + | ipv6_address_mode | slaac | + | ipv6_ra_mode | slaac | + | name | selfservice2-v6 | + | network_id | 7ebc353c-6c8f-461f-8ada-01b9f14beb18 | + | subnetpool_id | | + | tenant_id | f986edf55ae945e2bef3cb4bfd589928 | + +-------------------+-----------------------------------------------------------------------------+ + +#. Create a router. + + .. code-block:: console + + $ neutron router-create router2 + Created a new router: + +-------------------------+--------------------------------------+ + | Field | Value | + +-------------------------+--------------------------------------+ + | admin_state_up | True | + | availability_zone_hints | | + | availability_zones | | + | description | | + | external_gateway_info | | + | id | b6206312-878e-497c-8ef7-eb384f8add96 | + | name | router2 | + | routes | | + | status | ACTIVE | + | tenant_id | f986edf55ae945e2bef3cb4bfd589928 | + +-------------------------+--------------------------------------+ + +#. Add the IPv4 and IPv6 subnets as interfaces on the router. + + .. code-block:: console + + $ neutron router-interface-add router2 selfservice2-v4 + Added interface da3504ad-ba70-4b11-8562-2e6938690878 to router router2. + + $ neutron router-interface-add router2 selfservice2-v6 + Added interface 442e36eb-fce3-4cb5-b179-4be6ace595f0 to router router2. + +#. Add the provider network as a gateway on the router. + + .. 
code-block:: console + + $ neutron router-gateway-set router2 provider1 + Set gateway for router router2 diff --git a/doc/networking-guide/source/shared/deploy-ha-vrrp-verifyfailoveroperation.txt b/doc/networking-guide/source/shared/deploy-ha-vrrp-verifyfailoveroperation.txt new file mode 100644 index 0000000000..9f4d29329c --- /dev/null +++ b/doc/networking-guide/source/shared/deploy-ha-vrrp-verifyfailoveroperation.txt @@ -0,0 +1,15 @@ +#. Begin a continuous ``ping`` of both the floating IPv4 address and IPv6 + address of the instance. While performing the next three steps, you + should see a minimal, if any, interruption of connectivity to the + instance. + +#. On the network node with the master router, administratively disable + the overlay network interface. + +#. On the other network node, verify promotion of the backup router to + master router by noting addition of IP addresses to the interfaces + in the ``qrouter`` namespace. + +#. On the original network node in step 2, administratively enable the + overlay network interface. Note that the master router remains on + the network node in step 3. diff --git a/doc/networking-guide/source/shared/deploy-ha-vrrp-verifynetworkoperation.txt b/doc/networking-guide/source/shared/deploy-ha-vrrp-verifynetworkoperation.txt new file mode 100644 index 0000000000..9639257172 --- /dev/null +++ b/doc/networking-guide/source/shared/deploy-ha-vrrp-verifynetworkoperation.txt @@ -0,0 +1,153 @@ +#. Source the administrative project credentials. +#. Verify creation of the internal high-availability network that handles + VRRP *heartbeat* traffic. + + .. code-block:: console + + $ neutron net-list + +--------------------------------------+----------------------------------------------------+----------------------------------------------------------+ + | id | name | subnets | + +--------------------------------------+----------------------------------------------------+----------------------------------------------------------+ + | 1b8519c1-59c4-415c-9da2-a67d53c68455 | HA network tenant f986edf55ae945e2bef3cb4bfd589928 | 6843314a-1e76-4cc9-94f5-c64b7a39364a 169.254.192.0/18 | + +--------------------------------------+----------------------------------------------------+----------------------------------------------------------+ + +#. On each network node, verify creation of a ``qrouter`` namespace with + the same ID. + + Network node 1: + + .. code-block:: console + + # ip netns + qrouter-b6206312-878e-497c-8ef7-eb384f8add96 + + Network node 2: + + .. code-block:: console + + # ip netns + qrouter-b6206312-878e-497c-8ef7-eb384f8add96 + + .. note:: + + The namespace for router 1 from :ref:`deploy-lb-selfservice` should + only appear on network node 1 because of creation prior to enabling + VRRP. + +#. On each network node, show the IP address of interfaces in the ``qrouter`` + namespace. With the exception of the VRRP interface, only one namespace + belonging to the master router instance contains IP addresses on the + interfaces. + + Network node 1: + + .. 
code-block:: console
+
+      # ip netns exec qrouter-b6206312-878e-497c-8ef7-eb384f8add96 ip addr show
+      1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
+          link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
+          inet 127.0.0.1/8 scope host lo
+             valid_lft forever preferred_lft forever
+          inet6 ::1/128 scope host
+             valid_lft forever preferred_lft forever
+      2: ha-eb820380-40@if21: mtu 1450 qdisc noqueue state UP group default qlen 1000
+          link/ether fa:16:3e:78:ba:99 brd ff:ff:ff:ff:ff:ff link-netnsid 0
+          inet 169.254.192.1/18 brd 169.254.255.255 scope global ha-eb820380-40
+             valid_lft forever preferred_lft forever
+          inet 169.254.0.1/24 scope global ha-eb820380-40
+             valid_lft forever preferred_lft forever
+          inet6 fe80::f816:3eff:fe78:ba99/64 scope link
+             valid_lft forever preferred_lft forever
+      3: qr-da3504ad-ba@if24: mtu 1450 qdisc noqueue state UP group default qlen 1000
+          link/ether fa:16:3e:dc:8e:a8 brd ff:ff:ff:ff:ff:ff link-netnsid 0
+          inet 192.168.2.1/24 scope global qr-da3504ad-ba
+             valid_lft forever preferred_lft forever
+          inet6 fe80::f816:3eff:fedc:8ea8/64 scope link
+             valid_lft forever preferred_lft forever
+      4: qr-442e36eb-fc@if27: mtu 1450 qdisc noqueue state UP group default qlen 1000
+          link/ether fa:16:3e:ee:c8:41 brd ff:ff:ff:ff:ff:ff link-netnsid 0
+          inet6 fd00:192:168:2::1/64 scope global nodad
+             valid_lft forever preferred_lft forever
+          inet6 fe80::f816:3eff:feee:c841/64 scope link
+             valid_lft forever preferred_lft forever
+      5: qg-33fedbc5-43@if28: mtu 1500 qdisc noqueue state UP group default qlen 1000
+          link/ether fa:16:3e:03:1a:f6 brd ff:ff:ff:ff:ff:ff link-netnsid 0
+          inet 203.0.113.21/24 scope global qg-33fedbc5-43
+             valid_lft forever preferred_lft forever
+          inet6 fd00:203:0:113::21/64 scope global nodad
+             valid_lft forever preferred_lft forever
+          inet6 fe80::f816:3eff:fe03:1af6/64 scope link
+             valid_lft forever preferred_lft forever
+
+   Network node 2:
+
+   .. code-block:: console
+
+      # ip netns exec qrouter-b6206312-878e-497c-8ef7-eb384f8add96 ip addr show
+      1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
+          link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
+          inet 127.0.0.1/8 scope host lo
+             valid_lft forever preferred_lft forever
+          inet6 ::1/128 scope host
+             valid_lft forever preferred_lft forever
+      2: ha-7a7ce184-36@if8: mtu 1450 qdisc noqueue state UP group default qlen 1000
+          link/ether fa:16:3e:16:59:84 brd ff:ff:ff:ff:ff:ff link-netnsid 0
+          inet 169.254.192.2/18 brd 169.254.255.255 scope global ha-7a7ce184-36
+             valid_lft forever preferred_lft forever
+          inet6 fe80::f816:3eff:fe16:5984/64 scope link
+             valid_lft forever preferred_lft forever
+      3: qr-da3504ad-ba@if11: mtu 1450 qdisc noqueue state UP group default qlen 1000
+          link/ether fa:16:3e:dc:8e:a8 brd ff:ff:ff:ff:ff:ff link-netnsid 0
+      4: qr-442e36eb-fc@if14: mtu 1450 qdisc noqueue state UP group default qlen 1000
+      5: qg-33fedbc5-43@if15: mtu 1500 qdisc noqueue state UP group default qlen 1000
+          link/ether fa:16:3e:03:1a:f6 brd ff:ff:ff:ff:ff:ff link-netnsid 0
+
+   .. note::
+
+      The master router may reside on network node 2.
+
+#. Launch an instance with an interface on the additional self-service network.
+   For example, a CirrOS image using flavor ID 1.
+
+   .. code-block:: console
+
+      $ openstack server create --flavor 1 --image cirros --nic net-id=NETWORK_ID selfservice-instance2
+
+   Replace ``NETWORK_ID`` with the ID of the additional self-service
+   network.
+
+#. Determine the IPv4 and IPv6 addresses of the instance.
+
+   ..
code-block:: console + + $ openstack server list + +--------------------------------------+-----------------------+--------+---------------------------------------------------------------------------+ + | ID | Name | Status | Networks | + +--------------------------------------+-----------------------+--------+---------------------------------------------------------------------------+ + | bde64b00-77ae-41b9-b19a-cd8e378d9f8b | selfservice-instance2 | ACTIVE | selfservice2=fd00:192:168:2:f816:3eff:fe71:e93e, 192.168.2.4 | + +--------------------------------------+-----------------------+--------+---------------------------------------------------------------------------+ + +#. Create a floating IPv4 address on the provider network. + + .. code-block:: console + + $ openstack ip floating create provider1 + +-------------+--------------------------------------+ + | Field | Value | + +-------------+--------------------------------------+ + | fixed_ip | None | + | id | 0174056a-fa56-4403-b1ea-b5151a31191f | + | instance_id | None | + | ip | 203.0.113.17 | + | pool | provider1 | + +-------------+--------------------------------------+ + +#. Associate the floating IPv4 address with the instance. + + .. code-block:: console + + $ openstack ip floating add 203.0.113.17 selfservice-instance2 + + .. note:: + + This command provides no output. diff --git a/doc/networking-guide/source/shared/deploy-ha-vrrp.txt b/doc/networking-guide/source/shared/deploy-ha-vrrp.txt new file mode 100644 index 0000000000..e6ff28b969 --- /dev/null +++ b/doc/networking-guide/source/shared/deploy-ha-vrrp.txt @@ -0,0 +1,58 @@ +This architecture example augments the self-service deployment example +with a high-availability mechanism using the Virtual Router Redundancy +Protocol (VRRP) via ``keepalived`` and provides failover of routing +for self-service networks. It requires a minimum of two network nodes +because VRRP creates one master (active) instance and at least one backup +instance of each router. + +During normal operation, ``keepalived`` on the master router periodically +transmits *heartbeat* packets over a hidden network that connects all VRRP +routers for a particular project. Each project with VRRP routers uses a +separate hidden network. By default this network uses the first value in +the ``tenant_network_types`` option in the ``ml2_conf.ini`` file. For +additional control, you can specify the self-service network type and physical +network name for the hidden network using the ``l3_ha_network_type`` and +``l3_ha_network_name`` options in the ``neutron.conf`` file. + +If ``keepalived`` on the backup router stops receiving *heartbeat* packets, +it assumes failure of the master router and promotes the backup router to +master router by configuring IP addresses on the interfaces in the +``qrouter`` namespace. In environments with more than one backup router, +``keepalived`` on the backup router with the next highest priority promotes +that backup router to master router. + +.. note:: + + This high-availability mechanism configures VRRP using the same priority + for all routers. Therefore, VRRP promotes the backup router with the + highest IP address to the master router. + +Interruption of VRRP *heartbeat* traffic between network nodes, typically +due to a network interface or physical network infrastructure failure, +triggers a failover. Restarting the layer-3 agent, or failure of it, does +not trigger a failover providing ``keepalived`` continues to operate. 
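+
+A minimal sketch of the relevant ``neutron.conf`` options on the controller
+node appears below. The ``l3_ha`` and agent-count options are typical
+companions to the ``l3_ha_network_type`` option described above; the values
+are illustrative, assume a VXLAN overlay network and at least three network
+nodes, and may vary by release.
+
+.. code-block:: ini
+
+   [DEFAULT]
+   l3_ha = True
+   max_l3_agents_per_router = 3
+   min_l3_agents_per_router = 2
+   l3_ha_network_type = vxlan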
+ +Consider the following attributes of this high-availability mechanism to +determine practicality in your environment: + +* Instance network traffic on self-service networks using a particular + router only traverses the master instance of that router. Thus, + resource limitations of a particular network node can impact all + master instances of routers on that network node without triggering + failover to another network node. However, you can configure the + scheduler to distribute the master instance of each router uniformly + across a pool of network nodes to reduce the chance of resource + contention on any particular network node. + +* Only supports self-service networks using a router. Provider networks + operate at layer-2 and rely on physical network infrastructure for + redundancy. + +* For instances with a floating IPv4 address, maintains state of network + connections during failover as a side effect of 1:1 static NAT. The + mechanism does not actually implement connection tracking. + +For production deployments, we recommend at least three network nodes +with sufficient resources to handle network traffic for the entire +environment if one network node fails. Also, the remaining two nodes +can continue to provide redundancy. diff --git a/doc/networking-guide/source/shared/deploy-provider-initialnetworks.txt b/doc/networking-guide/source/shared/deploy-provider-initialnetworks.txt new file mode 100644 index 0000000000..aaeeaba713 --- /dev/null +++ b/doc/networking-guide/source/shared/deploy-provider-initialnetworks.txt @@ -0,0 +1,108 @@ +The configuration supports one flat or multiple VLAN provider networks. For +simplicity, the following procedure creates one flat provider network. + +#. Source the administrative project credentials. +#. Create a flat network. + + .. code-block:: console + + $ neutron net-create --shared --provider:physical_network provider \ + --provider:network_type flat provider1 + Created a new network: + +---------------------------+--------------------------------------+ + | Field | Value | + +---------------------------+--------------------------------------+ + | admin_state_up | True | + | availability_zone_hints | | + | availability_zones | | + | description | | + | id | 2b5ad13f-3859-4847-8db7-c695ab7dfce6 | + | ipv4_address_scope | | + | ipv6_address_scope | | + | mtu | 1500 | + | name | provider1 | + | port_security_enabled | True | + | provider:network_type | flat | + | provider:physical_network | provider | + | provider:segmentation_id | | + | router:external | False | + | shared | True | + | status | ACTIVE | + | subnets | | + | tags | | + | tenant_id | de59fed9547a4628b781df0862c837cf | + +---------------------------+--------------------------------------+ + + .. note:: + + The ``shared`` option allows any project to use this network. + + .. note:: + + To create a VLAN network instead of a flat network, change + ``--provider:network_type flat`` to ``--provider:network_type vlan`` + and add ``--provider:segmentation_id`` with a value referencing + the VLAN ID. + +#. Create a IPv4 subnet on the provider network. + + .. 
code-block:: console + + $ neutron subnet-create --name provider1-v4 --ip-version 4 \ + --allocation-pool start=203.0.113.11,end=203.0.113.250 \ + --gateway 203.0.113.1 --dns-nameserver 8.8.4.4 provider1 \ + 203.0.113.0/24 + Created a new subnet: + +-------------------+---------------------------------------------------+ + | Field | Value | + +-------------------+---------------------------------------------------+ + | allocation_pools | {"start": "203.0.113.11", "end": "203.0.113.250"} | + | cidr | 203.0.113.0/24 | + | description | | + | dns_nameservers | 8.8.4.4 | + | enable_dhcp | True | + | gateway_ip | 203.0.113.1 | + | host_routes | | + | id | 7ce3fd60-1d45-4432-a9a5-4f7645629bd9 | + | ip_version | 4 | + | ipv6_address_mode | | + | ipv6_ra_mode | | + | name | provider1-v4 | + | network_id | 2b5ad13f-3859-4847-8db7-c695ab7dfce6 | + | subnetpool_id | | + | tenant_id | de59fed9547a4628b781df0862c837cf | + +-------------------+---------------------------------------------------+ + +#. Create a IPv6 subnet on the provider network. + + .. code-block:: console + + $ neutron subnet-create --name provider1-v6 --ip-version 6 \ + --gateway fd00:203:0:113::1 --dns-nameserver 2001:4860:4860::8844 \ + provider1 fd00:203:0:113::/64 + Created a new subnet: + +-------------------+-----------------------------------------------------------------------------+ + | Field | Value | + +-------------------+-----------------------------------------------------------------------------+ + | allocation_pools | {"start": "fd00:203:0:113::2", "end": "fd00:203:0:113:ffff:ffff:ffff:ffff"} | + | cidr | fd00:203:0:113::/64 | + | description | | + | dns_nameservers | 2001:4860:4860::8844 | + | enable_dhcp | True | + | gateway_ip | fd00:203:0:113::1 | + | host_routes | | + | id | 773ea59c-e8c1-4254-baf3-27d5b2d42eb5 | + | ip_version | 6 | + | ipv6_address_mode | | + | ipv6_ra_mode | | + | name | provider1-v6 | + | network_id | 2b5ad13f-3859-4847-8db7-c695ab7dfce6 | + | subnetpool_id | | + | tenant_id | de59fed9547a4628b781df0862c837cf | + +-------------------+-----------------------------------------------------------------------------+ + + .. note:: + + By default, IPv6 provider networks rely on physical network + infrastructure for stateless address autoconfiguration (SLAAC) + and router advertisement. diff --git a/doc/networking-guide/source/shared/deploy-provider-networktrafficflow.txt b/doc/networking-guide/source/shared/deploy-provider-networktrafficflow.txt new file mode 100644 index 0000000000..5b3cce46e6 --- /dev/null +++ b/doc/networking-guide/source/shared/deploy-provider-networktrafficflow.txt @@ -0,0 +1,31 @@ +The following sections describe the flow of network traffic in several +common scenarios. *North-south* network traffic travels between an instance +and external network such as the Internet. *East-west* network traffic +travels between instances on the same or different networks. In all scenarios, +the physical network infrastructure handles switching and routing among +provider networks and external networks such as the Internet. 
Each case +references one or more of the following components: + +* Provider network 1 (VLAN) + + * VLAN ID 101 (tagged) + * IP address ranges 203.0.113.0/24 and fd00:203:0:113::/64 + * Gateway (via physical network infrastructure) + + * IP addresses 203.0.113.1 and fd00:203:0:113:0::1 + +* Provider network 2 (VLAN) + + * VLAN ID 102 (tagged) + * IP address range 192.0.2.0/24 and fd00:192:0:2::/64 + * Gateway + + * IP addresses 192.0.2.1 and fd00:192:0:2::1 + +* Instance 1 + + * IP addresses 203.0.113.101 and fd00:203:0:113:0::101 + +* Instance 2 + + * IP addresses 192.0.2.101 and fd00:192:0:2:0::101 diff --git a/doc/networking-guide/source/shared/deploy-provider-verifynetworkoperation.txt b/doc/networking-guide/source/shared/deploy-provider-verifynetworkoperation.txt new file mode 100644 index 0000000000..c9a4b1479e --- /dev/null +++ b/doc/networking-guide/source/shared/deploy-provider-verifynetworkoperation.txt @@ -0,0 +1,68 @@ +#. On each compute node, verify creation of the ``qdhcp`` namespace. + + .. code-block:: console + + # ip netns + qdhcp-8b868082-e312-4110-8627-298109d4401c + +#. Source a regular (non-administrative) project credentials. +#. Create the appropriate security group rules to allow ``ping`` and SSH + access instances using the network. + + .. include:: shared/deploy-secgrouprules.txt + +#. Launch an instance with an interface on the provider network. For example, + a CirrOS image using flavor ID 1. + + .. code-block:: console + + $ openstack server create --flavor 1 --image cirros \ + --nic net-id=NETWORK_ID provider-instance1 + + Replace ``NETWORK_ID`` with the ID of the provider network. + +#. Determine the IPv4 and IPv6 addresses of the instance. + + .. code-block:: console + + $ openstack server list + +--------------------------------------+--------------------+--------+--------------------------------------------+ + | ID | Name | Status | Networks | + +--------------------------------------+--------------------+--------+--------------------------------------------+ + | 018e0ae2-b43c-4271-a78d-62653dd03285 | provider-instance1 | ACTIVE | provider1=203.0.113.13, fd00:203:0:113::13 | + +--------------------------------------+--------------------+--------+--------------------------------------------+ + + .. note:: + + The IPv4 and IPv6 addresses appear similar only for illustration + purposes. + +#. On the controller node or any host with access to the provider network, + ``ping`` the IPv4 and IPv6 addresses of the instance. + + .. code-block:: console + + $ ping -c 4 203.0.113.13 + PING 203.0.113.13 (203.0.113.13) 56(84) bytes of data. + 64 bytes from 203.0.113.13: icmp_req=1 ttl=63 time=3.18 ms + 64 bytes from 203.0.113.13: icmp_req=2 ttl=63 time=0.981 ms + 64 bytes from 203.0.113.13: icmp_req=3 ttl=63 time=1.06 ms + 64 bytes from 203.0.113.13: icmp_req=4 ttl=63 time=0.929 ms + + --- 203.0.113.13 ping statistics --- + 4 packets transmitted, 4 received, 0% packet loss, time 3002ms + rtt min/avg/max/mdev = 0.929/1.539/3.183/0.951 ms + + $ ping6 -c 4 fd00:203:0:113::13 + PING fd00:203:0:113::13(fd00:203:0:113::13) 56 data bytes + 64 bytes from fd00:203:0:113::13: icmp_seq=1 ttl=64 time=1.25 ms + 64 bytes from fd00:203:0:113::13: icmp_seq=2 ttl=64 time=0.683 ms + 64 bytes from fd00:203:0:113::13: icmp_seq=3 ttl=64 time=0.762 ms + 64 bytes from fd00:203:0:113::13: icmp_seq=4 ttl=64 time=0.486 ms + + --- fd00:203:0:113::13 ping statistics --- + 4 packets transmitted, 4 received, 0% packet loss, time 2999ms + rtt min/avg/max/mdev = 0.486/0.796/1.253/0.282 ms + +#. 
Obtain access to the instance. +#. Test IPv4 and IPv6 connectivity to the Internet or other external network. diff --git a/doc/networking-guide/source/shared/deploy-secgrouprules.txt b/doc/networking-guide/source/shared/deploy-secgrouprules.txt new file mode 100644 index 0000000000..7bb87f340b --- /dev/null +++ b/doc/networking-guide/source/shared/deploy-secgrouprules.txt @@ -0,0 +1,37 @@ +.. code-block:: console + + $ openstack security group rule create --proto icmp default + +-----------------------+--------------------------------------+ + | Field | Value | + +-----------------------+--------------------------------------+ + | id | 2b45fbf8-45db-486c-915f-3f254740ae76 | + | ip_protocol | icmp | + | ip_range | 0.0.0.0/0 | + | parent_group_id | d35188d0-6b10-4fb9-a6b9-891ed3feeb54 | + | port_range | | + | remote_security_group | | + +-----------------------+--------------------------------------+ + + $ openstack security group rule create --proto ipv6-icmp default + +-----------------------+--------------------------------------+ + | Field | Value | + +-----------------------+--------------------------------------+ + | id | 2b45fbf8-45db-486c-915f-3f254740ae76 | + | ip_protocol | ipv6-icmp | + | ip_range | ::/0 | + | parent_group_id | d35188d0-6b10-4fb9-a6b9-891ed3feeb54 | + | port_range | | + | remote_security_group | | + +-----------------------+--------------------------------------+ + + $ openstack security group rule create --proto tcp --dst-port 22 default + +-----------------------+--------------------------------------+ + | Field | Value | + +-----------------------+--------------------------------------+ + | id | 86e5cc55-bb08-447a-a807-d36e2b9f56af | + | ip_protocol | tcp | + | ip_range | 0.0.0.0/0 | + | parent_group_id | d35188d0-6b10-4fb9-a6b9-891ed3feeb54 | + | port_range | 22:22 | + | remote_security_group | | + +-----------------------+--------------------------------------+ diff --git a/doc/networking-guide/source/shared/deploy-selfservice-initialnetworks.txt b/doc/networking-guide/source/shared/deploy-selfservice-initialnetworks.txt new file mode 100644 index 0000000000..34f2f697f5 --- /dev/null +++ b/doc/networking-guide/source/shared/deploy-selfservice-initialnetworks.txt @@ -0,0 +1,140 @@ +The configuration supports multiple VXLAN self-service networks. For +simplicity, the following procedure creates one self-service network and +a router with a gateway on the flat provider network. The router uses +NAT for IPv4 network traffic and directly routes IPv6 network traffic. + +.. note:: + + IPv6 connectivity with self-service networks often requires addition of + static routes to nodes and physical network infrastructure. + +#. Source the administrative project credentials. +#. Update the provider network to support external connectivity for + self-service networks. + + .. code-block:: console + + $ neutron net-update --router:external provider1 + Updated network: provider1 + +#. Source a regular (non-administrative) project credentials. +#. Create a self-service network. + + .. 
code-block:: console + + $ neutron net-create selfservice1 + Created a new network: + +-------------------------+--------------------------------------+ + | Field | Value | + +-------------------------+--------------------------------------+ + | admin_state_up | True | + | availability_zone_hints | | + | availability_zones | | + | description | | + | id | 8fbc13ca-cfe0-4b8a-993b-e33f37ba66d1 | + | ipv4_address_scope | | + | ipv6_address_scope | | + | mtu | 1450 | + | name | selfservice1 | + | port_security_enabled | True | + | router:external | False | + | shared | False | + | status | ACTIVE | + | subnets | | + | tags | | + | tenant_id | f986edf55ae945e2bef3cb4bfd589928 | + +-------------------------+--------------------------------------+ + +#. Create a IPv4 subnet on the self-service network. + + .. code-block:: console + + $ neutron subnet-create --name selfservice1-v4 --ip-version 4 \ + --dns-nameserver 8.8.4.4 selfservice1 192.168.1.0/24 + Created a new subnet: + +-------------------+--------------------------------------------------+ + | Field | Value | + +-------------------+--------------------------------------------------+ + | allocation_pools | {"start": "192.168.1.2", "end": "192.168.1.254"} | + | cidr | 192.168.1.0/24 | + | description | | + | dns_nameservers | 8.8.4.4 | + | enable_dhcp | True | + | gateway_ip | 192.168.1.1 | + | host_routes | | + | id | db1e5c17-2968-4533-8722-512c29fd1b88 | + | ip_version | 4 | + | ipv6_address_mode | | + | ipv6_ra_mode | | + | name | selfservice1-v4 | + | network_id | 8fbc13ca-cfe0-4b8a-993b-e33f37ba66d1 | + | subnetpool_id | | + | tenant_id | f986edf55ae945e2bef3cb4bfd589928 | + +-------------------+--------------------------------------------------+ + +#. Create a IPv6 subnet on the self-service network. + + .. code-block:: console + + $ neutron subnet-create --name selfservice1-v6 --ip-version 6 \ + --ipv6-address-mode slaac --ipv6-ra-mode slaac \ + --dns-nameserver 2001:4860:4860::8844 selfservice1 \ + fd00:192:168:1::/64 + Created a new subnet: + +-------------------+-----------------------------------------------------------------------------+ + | Field | Value | + +-------------------+-----------------------------------------------------------------------------+ + | allocation_pools | {"start": "fd00:192:168:1::2", "end": "fd00:192:168:1:ffff:ffff:ffff:ffff"} | + | cidr | fd00:192:168:1::/64 | + | description | | + | dns_nameservers | 2001:4860:4860::8844 | + | enable_dhcp | True | + | gateway_ip | fd00:192:168:1::1 | + | host_routes | | + | id | 6299cc4e-6581-4626-9720-03c808c662b3 | + | ip_version | 6 | + | ipv6_address_mode | slaac | + | ipv6_ra_mode | slaac | + | name | selfservice1-v6 | + | network_id | 8fbc13ca-cfe0-4b8a-993b-e33f37ba66d1 | + | subnetpool_id | | + | tenant_id | f986edf55ae945e2bef3cb4bfd589928 | + +-------------------+-----------------------------------------------------------------------------+ + +#. Create a router. + + .. 
code-block:: console + + $ neutron router-create router1 + Created a new router: + +-------------------------+--------------------------------------+ + | Field | Value | + +-------------------------+--------------------------------------+ + | admin_state_up | True | + | availability_zone_hints | | + | availability_zones | | + | description | | + | external_gateway_info | | + | id | 17db2a15-e024-46d0-9250-4cd4d336a2cc | + | name | router1 | + | routes | | + | status | ACTIVE | + | tenant_id | f986edf55ae945e2bef3cb4bfd589928 | + +-------------------------+--------------------------------------+ + +#. Add the IPv4 and IPv6 subnets as interfaces on the router. + + .. code-block:: console + + $ neutron router-interface-add router1 selfservice1-v4 + Added interface 77ebe721-a7d3-457c-9534-bce4657da9da to router router1. + + $ neutron router-interface-add router1 selfservice1-v6 + Added interface 695e0993-394d-4c40-a338-d4ba4061491a to router router1. + +#. Add the provider network as the gateway on the router. + + .. code-block:: console + + $ neutron router-gateway-set router1 provider1 + Set gateway for router router1 diff --git a/doc/networking-guide/source/shared/deploy-selfservice-networktrafficflow.txt b/doc/networking-guide/source/shared/deploy-selfservice-networktrafficflow.txt new file mode 100644 index 0000000000..688f128ac6 --- /dev/null +++ b/doc/networking-guide/source/shared/deploy-selfservice-networktrafficflow.txt @@ -0,0 +1,29 @@ +The following sections describe the flow of network traffic in several +common scenarios. *North-south* network traffic travels between an instance +and external network such as the Internet. *East-west* network traffic +travels between instances on the same or different networks. In all scenarios, +the physical network infrastructure handles switching and routing among +provider networks and external networks such as the Internet. Each case +references one or more of the following components: + +* Provider network (VLAN) + + * VLAN ID 101 (tagged) + +* Self-service network 1 (VXLAN) + + * VXLAN ID (VNI) 101 + +* Self-service network 2 (VXLAN) + + * VXLAN ID (VNI) 102 + +* Self-service router + + * Gateway on the provider network + * Interface on self-service network 1 + * Interface on self-service network 2 + +* Instance 1 + +* Instance 2 diff --git a/doc/networking-guide/source/shared/deploy-selfservice-verifynetworkoperation.txt b/doc/networking-guide/source/shared/deploy-selfservice-verifynetworkoperation.txt new file mode 100644 index 0000000000..933614def7 --- /dev/null +++ b/doc/networking-guide/source/shared/deploy-selfservice-verifynetworkoperation.txt @@ -0,0 +1,120 @@ +#. On each compute node, verify creation of a second ``qdhcp`` namespace. + + .. code-block:: console + + # ip netns + qdhcp-8b868082-e312-4110-8627-298109d4401c + qdhcp-8fbc13ca-cfe0-4b8a-993b-e33f37ba66d1 + +#. On the network node, verify creation of the ``qrouter`` namespace. + + .. code-block:: console + + # ip netns + qrouter-17db2a15-e024-46d0-9250-4cd4d336a2cc + +#. Source a regular (non-administrative) project credentials. +#. Create the appropriate security group rules to allow ``ping`` and SSH + access instances using the network. + + .. include:: shared/deploy-secgrouprules.txt + +#. Launch an instance with an interface on the self-service network. For + example, a CirrOS image using flavor ID 1. + + .. 
code-block:: console + + $ openstack server create --flavor 1 --image cirros --nic net-id=NETWORK_ID selfservice-instance1 + + Replace ``NETWORK_ID`` with the ID of the self-service network. + +#. Determine the IPv4 and IPv6 addresses of the instance. + + .. code-block:: console + + $ openstack server list + +--------------------------------------+-----------------------+--------+--------------------------------------------------------------+ + | ID | Name | Status | Networks | + +--------------------------------------+-----------------------+--------+--------------------------------------------------------------+ + | c055cdb0-ebb4-4d65-957c-35cbdbd59306 | selfservice-instance1 | ACTIVE | selfservice1=192.168.1.4, fd00:192:168:1:f816:3eff:fe30:9cb0 | + +--------------------------------------+-----------------------+--------+--------------------------------------------------------------+ + + .. warning:: + + The IPv4 address resides in a private IP address range (RFC1918). Thus, + the Networking service performs source network address translation (SNAT) + for the instance to access external networks such as the Internet. Access + from external networks such as the Internet to the instance requires a + floating IPv4 address. The Networking service performs destination + network address translation (DNAT) from the floating IPv4 address to the + instance IPv4 address on the self-service network. On the other hand, + the Networking service architecture for IPv6 lacks support for NAT due + to the significantly larger address space and complexity of NAT. Thus, + floating IP addresses do not exist for IPv6 and the Networking service + only performs routing for IPv6 subnets on self-service networks. In + other words, you cannot rely on NAT to "hide" instances with IPv4 and + IPv6 addresses or only IPv6 addresses and must properly implement + security groups to restrict access. + +#. On the controller node or any host with access to the provider network, + ``ping`` the IPv6 address of the instance. + + .. code-block:: console + + $ ping6 -c 4 fd00:192:168:1:f816:3eff:fe30:9cb0 + PING fd00:192:168:1:f816:3eff:fe30:9cb0(fd00:192:168:1:f816:3eff:fe30:9cb0) 56 data bytes + 64 bytes from fd00:192:168:1:f816:3eff:fe30:9cb0: icmp_seq=1 ttl=63 time=2.08 ms + 64 bytes from fd00:192:168:1:f816:3eff:fe30:9cb0: icmp_seq=2 ttl=63 time=1.88 ms + 64 bytes from fd00:192:168:1:f816:3eff:fe30:9cb0: icmp_seq=3 ttl=63 time=1.55 ms + 64 bytes from fd00:192:168:1:f816:3eff:fe30:9cb0: icmp_seq=4 ttl=63 time=1.62 ms + + --- fd00:192:168:1:f816:3eff:fe30:9cb0 ping statistics --- + 4 packets transmitted, 4 received, 0% packet loss, time 3004ms + rtt min/avg/max/mdev = 1.557/1.788/2.085/0.217 ms + +#. Optionally, enable IPv4 access from external networks such as the + Internet to the instance. + + #. Create a floating IPv4 address on the provider network. + + .. code-block:: console + + $ openstack ip floating create provider1 + +-------------+--------------------------------------+ + | Field | Value | + +-------------+--------------------------------------+ + | fixed_ip | None | + | id | 22a1b088-5c9b-43b4-97f3-970ce5df77f2 | + | instance_id | None | + | ip | 203.0.113.16 | + | pool | provider1 | + +-------------+--------------------------------------+ + + #. Associate the floating IPv4 address with the instance. + + .. code-block:: console + + $ openstack ip floating add 203.0.113.16 selfservice-instance1 + + .. note:: + + This command provides no output. + + #. 
On the controller node or any host with access to the provider network, + ``ping`` the floating IPv4 address of the instance. + + .. code-block:: console + + $ ping -c 4 203.0.113.16 + PING 203.0.113.16 (203.0.113.16) 56(84) bytes of data. + 64 bytes from 203.0.113.16: icmp_seq=1 ttl=63 time=3.41 ms + 64 bytes from 203.0.113.16: icmp_seq=2 ttl=63 time=1.67 ms + 64 bytes from 203.0.113.16: icmp_seq=3 ttl=63 time=1.47 ms + 64 bytes from 203.0.113.16: icmp_seq=4 ttl=63 time=1.59 ms + + --- 203.0.113.16 ping statistics --- + 4 packets transmitted, 4 received, 0% packet loss, time 3005ms + rtt min/avg/max/mdev = 1.473/2.040/3.414/0.798 ms + +#. Obtain access to the instance. +#. Test IPv4 and IPv6 connectivity to the Internet or other external network. diff --git a/tox.ini b/tox.ini index 51c7b0c10f..541d6c5f4f 100644 --- a/tox.ini +++ b/tox.ini @@ -114,7 +114,7 @@ commands = [doc8] # Settings for doc8: # Ignore target directories -ignore-path = doc/*/target,doc/*/build*,doc/install-guide/source/swift-controller-include.txt,doc/install-guide-debconf/source/swift-controller-include.txt +ignore-path = doc/*/target,doc/*/build*,doc/install-guide/source/swift-controller-include.txt,doc/install-guide-debconf/source/swift-controller-include.txt,doc/networking-guide/source/shared/*.txt # File extensions to use extensions = .rst,.txt # Maximal line length should be 79 but we have some overlong lines.