Networking scenarios
This chapter describes two networking scenarios and how the Open vSwitch plug-in and the Linux Bridge plug-in implement these scenarios.
Open vSwitch
This section describes how the Open vSwitch plug-in implements the Networking abstractions.
Configuration
This example uses VLAN segmentation on the switches to isolate tenant networks. This configuration labels the physical network associated with the public network as physnet1, and the physical network associated with the data network as physnet2, which leads to the following configuration options in ovs_neutron_plugin.ini:

[ovs]
tenant_network_type = vlan
network_vlan_ranges = physnet2:100:110
integration_bridge = br-int
bridge_mappings = physnet2:br-eth1
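On most deployments, the integration bridge and the physical bridge named in bridge_mappings are created by the administrator before the agent starts. The following is a minimal sketch, assuming eth1 is the data network interface:

$ ovs-vsctl add-br br-int
$ ovs-vsctl add-br br-eth1
$ ovs-vsctl add-port br-eth1 eth1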
Scenario 1: one tenant, two networks, one router
The first scenario has two private networks (net01 and net02), each with one subnet (net01_subnet01: 192.168.101.0/24, net02_subnet01: 192.168.102.0/24). Both private networks are attached to a router that connects them to the public network (10.64.201.0/24).

Under the service tenant, create the shared router, define the public network, and set it as the default gateway of the router:

$ tenant=$(keystone tenant-list | awk '/service/ {print $2}')
$ neutron router-create router01
$ neutron net-create --tenant-id $tenant public01 \
  --provider:network_type flat \
  --provider:physical_network physnet1 \
  --router:external=True
$ neutron subnet-create --tenant-id $tenant --name public01_subnet01 \
  --gateway 10.64.201.254 public01 10.64.201.0/24 --disable-dhcp
$ neutron router-gateway-set router01 public01

Under the demo user tenant, create the private network net01 and corresponding subnet, and connect it to the router01 router. Configure it to use VLAN ID 101 on the physical switch:

$ tenant=$(keystone tenant-list | awk '/demo/ {print $2}')
$ neutron net-create --tenant-id $tenant net01 \
  --provider:network_type vlan \
  --provider:physical_network physnet2 \
  --provider:segmentation_id 101
$ neutron subnet-create --tenant-id $tenant --name net01_subnet01 net01 192.168.101.0/24
$ neutron router-interface-add router01 net01_subnet01

Similarly, for net02, using VLAN ID 102 on the physical switch:

$ neutron net-create --tenant-id $tenant net02 \
  --provider:network_type vlan \
  --provider:physical_network physnet2 \
  --provider:segmentation_id 102
$ neutron subnet-create --tenant-id $tenant --name net02_subnet01 net02 192.168.102.0/24
$ neutron router-interface-add router01 net02_subnet01
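To confirm that the topology was built as intended, you can list the networks, subnets, and router ports. These are standard neutron CLI calls; the IDs in the output differ per deployment:

$ neutron net-list
$ neutron subnet-list
$ neutron router-port-list router01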
Scenario 1: Compute host config
The following figure shows how to configure various Linux networking devices on the Compute host:

Types of network devices
There are four distinct types of virtual networking devices: TAP devices, veth pairs, Linux bridges, and Open vSwitch bridges. For an ethernet frame to travel from eth0 of virtual machine vm01 to the physical network, it must pass through nine devices inside of the host: TAP vnet0, Linux bridge qbrXXX, veth pair (qvbXXX, qvoXXX), Open vSwitch bridge br-int, veth pair (int-br-eth1, phy-br-eth1), Open vSwitch bridge br-eth1, and, finally, the physical network interface card eth1.

A TAP device, such as vnet0, is how hypervisors such as KVM and Xen implement a virtual network interface card (typically called a VIF or vNIC). An ethernet frame sent to a TAP device is received by the guest operating system.

A veth pair is a pair of directly connected virtual network interfaces. An ethernet frame sent to one end of a veth pair is received by the other end. Networking uses veth pairs as virtual patch cables to make connections between virtual bridges.

A Linux bridge behaves like a hub: you can connect multiple (physical or virtual) network interface devices to a Linux bridge. Any ethernet frame that comes in from one interface attached to the bridge is transmitted to all of the other devices.

An Open vSwitch bridge behaves like a virtual switch: network interface devices connect to an Open vSwitch bridge's ports, and the ports can be configured much like a physical switch's ports, including VLAN configurations.

Integration bridge
The br-int Open vSwitch bridge is the integration bridge: all guests running on the compute host connect to this bridge. Networking implements isolation across these guests by configuring the br-int ports.

Physical connectivity bridge
The br-eth1 bridge provides connectivity to the physical network interface card, eth1. It connects to the integration bridge by a veth pair: (int-br-eth1, phy-br-eth1).

VLAN translation
In this example, net01 and net02 have VLAN IDs of 1 and 2, respectively. However, the physical network in our example only supports VLAN IDs in the range 101 through 110. The Open vSwitch agent is responsible for configuring flow rules on br-int and br-eth1 to do VLAN translation. When br-eth1 receives a frame marked with VLAN ID 1 on the port associated with phy-br-eth1, it modifies the VLAN ID in the frame to 101. Similarly, when br-int receives a frame marked with VLAN ID 101 on the port associated with int-br-eth1, it modifies the VLAN ID in the frame to 1 (see the flow rule sketch at the end of this section).

Security groups: iptables and Linux bridges
Ideally, the TAP device vnet0 would be connected directly to the integration bridge, br-int. Unfortunately, this isn't possible because of how OpenStack security groups are currently implemented. OpenStack uses iptables rules on the TAP devices such as vnet0 to implement security groups, and Open vSwitch is not compatible with iptables rules that are applied directly on TAP devices that are connected to an Open vSwitch port. Networking uses an extra Linux bridge and a veth pair as a workaround for this issue. Instead of connecting vnet0 to an Open vSwitch bridge, it is connected to a Linux bridge, qbrXXX. This bridge is connected to the integration bridge, br-int, through the (qvbXXX, qvoXXX) veth pair.
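You can inspect the translation rules that the agent installs with ovs-ofctl. The add-flow lines below show only the general shape of such rules; the port numbers and priorities are assumptions for illustration, and the rules the agent actually installs differ in detail:

$ ovs-ofctl dump-flows br-int
$ ovs-ofctl dump-flows br-eth1

# Illustration only: rewrite local VLAN 1 to provider VLAN 101 as frames
# leave through phy-br-eth1 (assumed here to be port 2 on br-eth1)
$ ovs-ofctl add-flow br-eth1 "priority=4,in_port=2,dl_vlan=1,actions=mod_vlan_vid:101,NORMAL"
# ... and the reverse translation as frames enter br-int through
# int-br-eth1 (assumed here to be port 1 on br-int)
$ ovs-ofctl add-flow br-int "priority=3,in_port=1,dl_vlan=101,actions=mod_vlan_vid:1,NORMAL"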
Scenario 1: Network host config
The network host runs the neutron-openvswitch-plugin-agent, the neutron-dhcp-agent, neutron-l3-agent, and neutron-metadata-agent services.

On the network host, assume that eth0 is connected to the external network, and eth1 is connected to the data network, which leads to the following configuration in the ovs_neutron_plugin.ini file:

[ovs]
tenant_network_type = vlan
network_vlan_ranges = physnet2:101:110
integration_bridge = br-int
bridge_mappings = physnet1:br-ex,physnet2:br-eth1

The following figure shows the network devices on the network host:

As on the compute host, there is an Open vSwitch integration bridge (br-int) and an Open vSwitch bridge connected to the data network (br-eth1), and the two are connected by a veth pair. The neutron-openvswitch-plugin-agent configures the ports on both switches to do VLAN translation.

An additional Open vSwitch bridge, br-ex, connects to the physical interface that is connected to the external network. In this example, that physical interface is eth0. While the integration bridge and the external bridge are connected by a veth pair (int-br-ex, phy-br-ex), this example uses layer 3 connectivity to route packets from the internal networks to the public network: no packets traverse that veth pair in this example.

Open vSwitch internal ports
The network host uses Open vSwitch internal ports. Internal ports enable you to assign one or more IP addresses to an Open vSwitch bridge. In the previous example, the br-int bridge has four internal ports: tapXXX, qr-YYY, qr-ZZZ, and tapWWW. Each internal port has a separate IP address associated with it. An internal port, qg-VVV, is on the br-ex bridge.

DHCP agent
By default, the Networking DHCP agent uses a process called dnsmasq to provide DHCP services to guests. Networking must create an internal port for each network that requires DHCP services and attach a dnsmasq process to that port. In the previous example, the tapXXX interface is on net01_subnet01, and the tapWWW interface is on net02_subnet01.

L3 agent (routing)
The Networking L3 agent uses Open vSwitch internal ports to implement routing and relies on the network host to route the packets across the interfaces. In this example, the qr-YYY interface is on net01_subnet01 and has the IP address 192.168.101.1/24. The qr-ZZZ interface is on net02_subnet01 and has the IP address 192.168.102.1/24. The qg-VVV interface has the IP address 10.64.201.254/24. Because each of these interfaces is visible to the network host operating system, the network host routes the packets across the interfaces, as long as an administrator has enabled IP forwarding (see the check at the end of this section). The L3 agent uses iptables to implement floating IPs to do the network address translation (NAT).

Overlapping subnets and network namespaces
One problem with using the host to implement routing is that one of the Networking subnets might overlap with one of the physical networks that the host uses. For example, if the management network is implemented on eth2 and also happens to be on the 192.168.101.0/24 subnet, routing problems will occur because the host can't determine whether to send a packet on this subnet to qr-YYY or eth2. If end users are permitted to create their own logical networks and subnets, you must design the system so that such collisions do not occur. Networking uses Linux network namespaces to prevent collisions between the physical networks on the network host and the logical networks used by the virtual machines.
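As referenced in the L3 agent discussion above, a quick check that IP forwarding is enabled on the network host (a sketch; on a real deployment, make the setting persistent in /etc/sysctl.conf):

$ sysctl net.ipv4.ip_forward
net.ipv4.ip_forward = 1
# If the value is 0, enable forwarding:
$ sysctl -w net.ipv4.ip_forward=1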
Network namespaces also prevent collisions between different logical networks that are not routed to each other, as the following scenario shows.

A network namespace is an isolated environment with its own networking stack. A network namespace has its own network interfaces, routes, and iptables rules. Think of it as a chroot jail, but for networking instead of the file system. LXC (Linux containers) uses network namespaces to implement networking virtualization.

Networking creates network namespaces on the network host to avoid subnet collisions. In this example, there are three network namespaces, as shown in the figure above:

qdhcp-aaa: contains the tapXXX interface and the dnsmasq process that listens on that interface to provide DHCP services for net01_subnet01. This allows overlapping IPs between net01_subnet01 and any other subnets on the network host.

qrouter-bbbb: contains the qr-YYY, qr-ZZZ, and qg-VVV interfaces, and the corresponding routes. This namespace implements router01 in our example.

qdhcp-ccc: contains the tapWWW interface and the dnsmasq process that listens on that interface to provide DHCP services for net02_subnet01. This allows overlapping IPs between net02_subnet01 and any other subnets on the network host.
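You can inspect these namespaces from the network host with the ip netns command. The names below follow the placeholders used in this example; on a real deployment, the suffixes are network and router UUIDs:

$ ip netns
qdhcp-aaa
qrouter-bbbb
qdhcp-ccc
# Show the interfaces and routes that exist only inside the router namespace
$ ip netns exec qrouter-bbbb ip addr
$ ip netns exec qrouter-bbbb ip route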
Scenario 2: two tenants, two networks, two routers
In this scenario, tenant A and tenant B each have a network with one subnet and one router that connects the tenants to the public Internet.

Under the service tenant, define the public network:

$ tenant=$(keystone tenant-list | awk '/service/ {print $2}')
$ neutron net-create --tenant-id $tenant public01 \
  --provider:network_type flat \
  --provider:physical_network physnet1 \
  --router:external=True
$ neutron subnet-create --tenant-id $tenant --name public01_subnet01 \
  --gateway 10.64.201.254 public01 10.64.201.0/24 --disable-dhcp

Under the tenantA user tenant, create the tenant router and set its gateway for the public network:

$ tenant=$(keystone tenant-list | awk '/tenantA/ {print $2}')
$ neutron router-create --tenant-id $tenant router01
$ neutron router-gateway-set router01 public01

Then, define private network net01 using VLAN ID 101 on the physical switch, along with its subnet, and connect it to the router:

$ neutron net-create --tenant-id $tenant net01 \
  --provider:network_type vlan \
  --provider:physical_network physnet2 \
  --provider:segmentation_id 101
$ neutron subnet-create --tenant-id $tenant --name net01_subnet01 net01 192.168.101.0/24
$ neutron router-interface-add router01 net01_subnet01

Similarly, for tenantB, create a router and another network, using VLAN ID 102 on the physical switch:

$ tenant=$(keystone tenant-list | awk '/tenantB/ {print $2}')
$ neutron router-create --tenant-id $tenant router02
$ neutron router-gateway-set router02 public01
$ neutron net-create --tenant-id $tenant net02 \
  --provider:network_type vlan \
  --provider:physical_network physnet2 \
  --provider:segmentation_id 102
$ neutron subnet-create --tenant-id $tenant --name net02_subnet01 net02 192.168.102.0/24
$ neutron router-interface-add router02 net02_subnet01
Scenario 2: Compute host config
The following figure shows how to configure Linux networking devices on the Compute host:

The Compute host configuration resembles the configuration in scenario 1. However, in scenario 1 a guest connects to two subnets, while in this scenario the subnets belong to different tenants.
Scenario 2: Network host config
The following figure shows the network devices on the network host for the second scenario.

In this configuration, the network namespaces are organized to isolate the two subnets from each other, as shown in the following figure. In this scenario, there are four network namespaces (qdhcp-aaa, qrouter-bbbb, qrouter-cccc, and qdhcp-dddd), instead of three. Because there is no connectivity between the two networks, each router is implemented by a separate namespace.
Linux Bridge
This section describes how the Linux Bridge plug-in implements the Networking abstractions. For information about DHCP and L3 agents, see the scenario descriptions for the Open vSwitch plug-in above.
Configuration
This example uses VLAN isolation on the switches to isolate tenant networks. This configuration labels the physical network associated with the public network as physnet1, and the physical network associated with the data network as physnet2, which leads to the following configuration options in linuxbridge_conf.ini:

[vlans]
tenant_network_type = vlan
network_vlan_ranges = physnet2:100:110

[linux_bridge]
physical_interface_mappings = physnet2:eth1
Scenario 1: one tenant, two networks, one router
The first scenario has two private networks (net01 and net02), each with one subnet (net01_subnet01: 192.168.101.0/24, net02_subnet01: 192.168.102.0/24). Both private networks are attached to a router that connects them to the public network (10.64.201.0/24).

Under the service tenant, create the shared router, define the public network, and set it as the default gateway of the router:

$ tenant=$(keystone tenant-list | awk '/service/ {print $2}')
$ neutron router-create router01
$ neutron net-create --tenant-id $tenant public01 \
  --provider:network_type flat \
  --provider:physical_network physnet1 \
  --router:external=True
$ neutron subnet-create --tenant-id $tenant --name public01_subnet01 \
  --gateway 10.64.201.254 public01 10.64.201.0/24 --disable-dhcp
$ neutron router-gateway-set router01 public01

Under the demo user tenant, create the private network net01 and corresponding subnet, and connect it to the router01 router. Configure it to use VLAN ID 101 on the physical switch:

$ tenant=$(keystone tenant-list | awk '/demo/ {print $2}')
$ neutron net-create --tenant-id $tenant net01 \
  --provider:network_type vlan \
  --provider:physical_network physnet2 \
  --provider:segmentation_id 101
$ neutron subnet-create --tenant-id $tenant --name net01_subnet01 net01 192.168.101.0/24
$ neutron router-interface-add router01 net01_subnet01

Similarly, for net02, using VLAN ID 102 on the physical switch:

$ neutron net-create --tenant-id $tenant net02 \
  --provider:network_type vlan \
  --provider:physical_network physnet2 \
  --provider:segmentation_id 102
$ neutron subnet-create --tenant-id $tenant --name net02_subnet01 net02 192.168.102.0/24
$ neutron router-interface-add router01 net02_subnet01
Scenario 1: Compute host config
The following figure shows how to configure the various Linux networking devices on the compute host.

Types of network devices
There are three distinct types of virtual networking devices: TAP devices, VLAN devices, and Linux bridges. For an ethernet frame to travel from eth0 of virtual machine vm01 to the physical network, it must pass through four devices inside of the host: TAP vnet0, Linux bridge brqXXX, VLAN device eth1.101, and, finally, the physical network interface card eth1.

A TAP device, such as vnet0, is how hypervisors such as KVM and Xen implement a virtual network interface card (typically called a VIF or vNIC). An ethernet frame sent to a TAP device is received by the guest operating system.

A VLAN device is a virtual interface that is attached to an existing interface device and is associated with a VLAN tag; it adds or removes VLAN tags on traffic passing through it. In the preceding example, VLAN device eth1.101 is associated with VLAN ID 101 and is attached to interface eth1. Packets received from the outside by eth1 with VLAN tag 101 are passed to device eth1.101, which strips the tag. In the other direction, any ethernet frame sent directly to eth1.101 has VLAN tag 101 added and is forwarded to eth1 for sending out to the network (a manual equivalent is sketched at the end of this section).

A Linux bridge behaves like a hub: you can connect multiple (physical or virtual) network interface devices to a Linux bridge. Any ethernet frame that comes in from one interface attached to the bridge is transmitted to all of the other devices.
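As a sketch of what the agent sets up on the compute host, the following commands create the same kind of VLAN device and bridge by hand. The bridge name brqXXX is this example's placeholder; on a real host, the agent derives the name from the network UUID:

# Create a VLAN device for VLAN 101 on top of eth1 and bring it up
$ ip link add link eth1 name eth1.101 type vlan id 101
$ ip link set eth1.101 up
# Create the per-network Linux bridge and attach the VLAN device to it
$ brctl addbr brqXXX
$ brctl addif brqXXX eth1.101
$ ip link set brqXXX up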
Scenario 1: Network host config
The following figure shows the network devices on the network host.

The following figure shows how the Linux Bridge plug-in uses network namespaces to provide isolation. veth pairs form connections between the Linux bridges and the network namespaces.
Scenario 2: two tenants, two networks, two routers
The second scenario has two tenants (A, B). Each tenant has a network with one subnet, and each one has a router that connects them to the public Internet.

Under the service tenant, define the public network:

$ tenant=$(keystone tenant-list | awk '/service/ {print $2}')
$ neutron net-create --tenant-id $tenant public01 \
  --provider:network_type flat \
  --provider:physical_network physnet1 \
  --router:external=True
$ neutron subnet-create --tenant-id $tenant --name public01_subnet01 \
  --gateway 10.64.201.254 public01 10.64.201.0/24 --disable-dhcp

Under the tenantA user tenant, create the tenant router and set its gateway for the public network:

$ tenant=$(keystone tenant-list | awk '/tenantA/ {print $2}')
$ neutron router-create --tenant-id $tenant router01
$ neutron router-gateway-set router01 public01

Then, define private network net01 using VLAN ID 101 on the physical switch, along with its subnet, and connect it to the router:

$ neutron net-create --tenant-id $tenant net01 \
  --provider:network_type vlan \
  --provider:physical_network physnet2 \
  --provider:segmentation_id 101
$ neutron subnet-create --tenant-id $tenant --name net01_subnet01 net01 192.168.101.0/24
$ neutron router-interface-add router01 net01_subnet01

Similarly, for tenantB, create a router and another network, using VLAN ID 102 on the physical switch:

$ tenant=$(keystone tenant-list | awk '/tenantB/ {print $2}')
$ neutron router-create --tenant-id $tenant router02
$ neutron router-gateway-set router02 public01
$ neutron net-create --tenant-id $tenant net02 \
  --provider:network_type vlan \
  --provider:physical_network physnet2 \
  --provider:segmentation_id 102
$ neutron subnet-create --tenant-id $tenant --name net02_subnet01 net02 192.168.102.0/24
$ neutron router-interface-add router02 net02_subnet01
Scenario 2: Compute host config
The following figure shows how the various Linux networking devices would be configured on the Compute host under this scenario.

The configuration on the Compute host is very similar to the configuration in scenario 1. The only real difference is that in scenario 1 a guest connects to two subnets, while in this scenario the subnets belong to different tenants.
Scenario 2: Network host config
The following figure shows the network devices on the network host for the second scenario.

The main difference between the configuration in this scenario and the previous one is the organization of the network namespaces, which provides isolation across the two subnets, as shown in the following figure. In this scenario, there are four network namespaces (qdhcp-aaa, qrouter-bbbb, qrouter-cccc, and qdhcp-dddd), instead of three. Each router is implemented by a separate namespace because there is no connectivity between the two networks.
ML2
The Modular Layer 2 (ML2) plug-in allows OpenStack Networking to simultaneously use the variety of layer 2 networking technologies found in complex real-world data centers. It currently includes drivers for the local, flat, VLAN, GRE, and VXLAN network types and works with the existing Open vSwitch, Linux Bridge, and Hyper-V L2 agents. The ML2 plug-in can be extended through mechanism drivers, allowing multiple mechanisms to be used simultaneously. This section describes different ML2 plug-in and agent configurations with different type drivers and mechanism drivers.
ML2 with L2 population mechanism driver
The current Open vSwitch and Linux Bridge tunneling implementations broadcast to every agent, even if the agent does not host the corresponding network, as illustrated below. Because broadcast emulation on an overlay is costly, it is better to avoid using it for MAC learning and ARP resolution. This requires using proxy ARP on the agent to answer VM requests and prepopulating the forwarding tables. Currently, only the Linux Bridge agent implements an ARP proxy. The prepopulation limits L2 broadcasts in the overlay; however, it may still be necessary to provide broadcast emulation. This is achieved by sending broadcast packets via unicast only to the relevant agents, as illustrated below. This partial mesh is available with the Open vSwitch and Linux Bridge agents.

The following scenarios use the L2 population mechanism driver with an Open vSwitch agent and a Linux Bridge agent. Enable the l2population driver by adding it to the list of mechanism drivers. In addition, a tunneling type driver must be selected; supported options are GRE, VXLAN, or a combination of both. Configuration settings are enabled in ml2_conf.ini:

[ml2]
type_drivers = local,flat,vlan,gre,vxlan
mechanism_drivers = openvswitch,linuxbridge,l2population
Scenario 1: L2 population with Open vSwitch agent
Enable the l2 population extension in the Open vSwitch agent, and configure the local_ip and tunnel_types parameters in the ml2_conf.ini file:

[ovs]
local_ip = 192.168.1.10

[agent]
tunnel_types = gre,vxlan
l2_population = True
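One way to see the effect of the driver is to check which tunnel ports the agent creates: with l2_population enabled, tunnels are set up only toward agents that host ports on the same networks. This check assumes the agent's default tunnel bridge name, br-tun:

$ ovs-vsctl list-ports br-tun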
Scenario 2: L2 population with Linux Bridge agent
Enable the l2 population extension on the Linux Bridge agent. Enable VXLAN and configure the local_ip parameter in ml2_conf.ini:

[vxlan]
enable_vxlan = True
local_ip = 192.168.1.10
l2_population = True
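To observe the forwarding entries that the driver prepopulates on a Linux Bridge node, you can dump the VXLAN device's forwarding database. The device name here is an assumption; the agent derives it from the VXLAN network identifier (for example, vxlan-100 for VNI 100):

$ bridge fdb show dev vxlan-100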
Enable security group API
Because the ML2 plug-in can concurrently support different L2 agents (or other mechanisms) with different configuration files, the actual firewall_driver value in the ml2_conf.ini file does not matter to the server, but firewall_driver must be set to a non-default value in the ML2 configuration to enable the securitygroup extension. To enable the securitygroup API, edit the ml2_conf.ini file:

[securitygroup]
firewall_driver = dummy

Each L2 agent configuration file (such as ovs_neutron_plugin.ini or linuxbridge_conf.ini) should contain the appropriate firewall_driver value for that agent. To disable the securitygroup API, edit the ml2_conf.ini file:

[securitygroup]
firewall_driver = neutron.agent.firewall.NoopFirewallDriver

Also, each L2 agent configuration file (such as ovs_neutron_plugin.ini or linuxbridge_conf.ini) should set firewall_driver to the same NoopFirewallDriver value for that agent.
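For example, when security groups are enabled, the Open vSwitch agent typically uses the iptables-based hybrid driver. A representative ovs_neutron_plugin.ini snippet follows; the driver path is the one shipped with Neutron at the time of writing, so verify it against your release:

[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver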