<?xml version="1.0" encoding="UTF-8"?>
<section xml:id="section_networking-scenarios"
xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0">
<title>Networking scenarios</title>
<para>This section describes two networking scenarios and how the
Open vSwitch plug-in and the Linux Bridge plug-in implement these
scenarios.</para>
<section xml:id="under_the_hood_openvswitch">
<?dbhtml stop-chunking?>
<title>Open vSwitch</title>
<para>This section describes how the Open vSwitch plug-in
implements the Networking abstractions.</para>
<section xml:id="under_the_hood_openvswitch_configuration">
<title>Configuration</title>
<para>This example uses VLAN segmentation on the switches
to isolate tenant networks. This configuration labels the
physical network associated with the public network as
<literal>physnet1</literal>, and the physical network
associated with the data network as
<literal>physnet2</literal>, which leads to the following
configuration options in <filename>ovs_neutron_plugin.ini</filename>:
<programlisting language="ini">[ovs]
tenant_network_type = vlan
network_vlan_ranges = physnet2:100:110
integration_bridge = br-int
bridge_mappings = physnet2:br-eth1</programlisting></para>
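<para>The Open vSwitch bridge named in <option>bridge_mappings</option>
(<literal>br-eth1</literal> in this example) is not created by the plug-in
itself. As an illustration only, the following commands show one way an
administrator might create that bridge and attach the data network
interface to it, assuming <literal>eth1</literal> is the NIC connected to
the data network:
<screen><prompt>#</prompt> <userinput>ovs-vsctl add-br br-eth1</userinput>
<prompt>#</prompt> <userinput>ovs-vsctl add-port br-eth1 eth1</userinput></screen></para>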
</section>
<section xml:id="under_the_hood_openvswitch_scenario1">
<title>Scenario 1: one tenant, two networks, one router</title>
<para>The first scenario has two private networks (<literal>net01</literal>, and
<literal>net02</literal>), each with one subnet
(<literal>net01_subnet01</literal>: 192.168.101.0/24,
<literal>net02_subnet01</literal>: 192.168.102.0/24). Both private networks are
attached to a router that connects them to the public network (10.64.201.0/24).</para>
<mediaobject>
<imageobject>
<imagedata fileref="../common/figures/under-the-hood-scenario-1.png" contentwidth="6in"/>
</imageobject>
</mediaobject>
<para>Under the <literal>service</literal> tenant, create the shared router, define the
public network, and set it as the default gateway of the
router:<screen><prompt>$</prompt> <userinput>tenant=$(keystone tenant-list | awk '/service/ {print $2}')</userinput>
<prompt>$</prompt> <userinput>neutron router-create router01</userinput>
<prompt>$</prompt> <userinput>neutron net-create --tenant-id $tenant public01 \
--provider:network_type flat \
--provider:physical_network physnet1 \
--router:external=True</userinput>
<prompt>$</prompt> <userinput>neutron subnet-create --tenant-id $tenant --name public01_subnet01 \
--gateway 10.64.201.254 public01 10.64.201.0/24 --disable-dhcp</userinput>
<prompt>$</prompt> <userinput>neutron router-gateway-set router01 public01</userinput></screen></para>
<para>Under the <literal>demo</literal> user tenant, create the private network
<literal>net01</literal> and corresponding subnet, and connect it to the
<literal>router01</literal> router. Configure it to use VLAN ID 101 on the
physical
switch.<screen><prompt>$</prompt> <userinput>tenant=$(keystone tenant-list|awk '/demo/ {print $2}')</userinput>
<prompt>$</prompt> <userinput>neutron net-create --tenant-id $tenant net01 \
--provider:network_type vlan \
--provider:physical_network physnet2 \
--provider:segmentation_id 101</userinput>
<prompt>$</prompt> <userinput>neutron subnet-create --tenant-id $tenant --name net01_subnet01 net01 192.168.101.0/24</userinput>
<prompt>$</prompt> <userinput>neutron router-interface-add router01 net01_subnet01</userinput></screen></para>
<para>Similarly, for <literal>net02</literal>, using VLAN ID 102 on the physical
switch:<screen><prompt>$</prompt> <userinput>neutron net-create --tenant-id $tenant net02 \
--provider:network_type vlan \
--provider:physical_network physnet2 \
--provider:segmentation_id 102</userinput>
<prompt>$</prompt> <userinput>neutron subnet-create --tenant-id $tenant --name net02_subnet01 net02 192.168.102.0/24</userinput>
<prompt>$</prompt> <userinput>neutron router-interface-add router01 net02_subnet01</userinput></screen></para>
<section xml:id="under_the_hood_openvswitch_scenario1_compute">
<title>Scenario 1: Compute host config</title>
<para>The following figure shows how to configure various Linux networking devices on the Compute host:</para>
<mediaobject>
<imageobject>
<imagedata fileref="../common/figures/under-the-hood-scenario-1-ovs-compute.png" contentwidth="6in"/>
</imageobject>
</mediaobject>
<simplesect>
<title>Types of network devices</title>
<note><para>There are four distinct types of virtual networking devices: TAP devices,
veth pairs, Linux bridges, and Open vSwitch bridges. For an ethernet frame to travel
from <literal>eth0</literal> of virtual machine <literal>vm01</literal> to the
physical network, it must pass through nine devices inside of the host: TAP
<literal>vnet0</literal>, Linux bridge
<literal>qbr<replaceable>nnn</replaceable></literal>, veth pair
<literal>(qvb<replaceable>nnn</replaceable>,
qvo<replaceable>nnn</replaceable>)</literal>, Open vSwitch bridge
<literal>br-int</literal>, veth pair <literal>(int-br-eth1,
phy-br-eth1)</literal>, Open vSwitch bridge <literal>br-eth1</literal>,
and, finally, the physical network interface card
<literal>eth1</literal>.</para></note>
<para>A <emphasis role="italic">TAP device</emphasis>, such as <literal>vnet0</literal>
is how hypervisors such as KVM and Xen implement a virtual network interface card
(typically called a VIF or vNIC). An ethernet frame sent to a TAP device is received
by the guest operating system.</para>
<para>A <emphasis role="italic">veth pair</emphasis> is a pair of directly connected
virtual network interfaces. An ethernet frame sent to one end of a veth pair
is received by the other end of a veth pair. Networking uses veth pairs as
virtual patch cables to make connections between virtual bridges.</para>
<para>A <emphasis role="italic">Linux bridge</emphasis> behaves like a hub: you can
connect multiple (physical or virtual) network interfaces devices to a Linux bridge.
Any ethernet frames that come in from one interface attached to the bridge is
transmitted to all of the other devices.</para>
<para>An <emphasis role="italic">Open vSwitch bridge</emphasis> behaves like a virtual
switch: network interface devices connect to Open vSwitch bridge's ports, and the
ports can be configured much like a physical switch's ports, including VLAN
configurations.</para>
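<para>To get a feel for these building blocks, you can create disposable
instances of them by hand. The device names below are throwaway examples;
Networking creates and names the equivalent devices automatically. The last
command assumes an integration bridge named <literal>br-int</literal>
already exists:
<screen><prompt>#</prompt> <userinput>ip link add qvb-test type veth peer name qvo-test</userinput>
<prompt>#</prompt> <userinput>brctl addbr qbr-test</userinput>
<prompt>#</prompt> <userinput>brctl addif qbr-test qvb-test</userinput>
<prompt>#</prompt> <userinput>ovs-vsctl add-port br-int qvo-test</userinput></screen></para>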
</simplesect>
<simplesect>
<title>Integration bridge</title>
<para>The <literal>br-int</literal> Open vSwitch bridge is the integration bridge: all
guests running on the compute host connect to this bridge. Networking
implements isolation across these guests by configuring the
<literal>br-int</literal> ports.</para>
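<para>For example, to see which ports are attached to the integration
bridge on a compute host, and which VLAN tag each port carries, you can
run:
<screen><prompt>#</prompt> <userinput>ovs-vsctl list-ports br-int</userinput>
<prompt>#</prompt> <userinput>ovs-vsctl show</userinput></screen></para>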
</simplesect>
<simplesect>
<title>Physical connectivity bridge</title>
<para>The <literal>br-eth1</literal> bridge provides connectivity to the physical
network interface card, <literal>eth1</literal>. It connects to the integration
bridge by a veth pair: <literal>(int-br-eth1, phy-br-eth1)</literal>.</para>
</simplesect>
<simplesect>
<title>VLAN translation</title>
<para>In this example, <literal>net01</literal> and <literal>net02</literal> have VLAN IDs of 1 and 2, respectively. However,
the physical network in our example only supports VLAN IDs in the range 101 through 110. The
Open vSwitch agent is responsible for configuring flow rules on
<literal>br-int</literal> and <literal>br-eth1</literal> to do VLAN translation.
When <literal>br-eth1</literal> receives a frame marked with VLAN ID 1 on the port
associated with <literal>phy-br-eth1</literal>, it modifies the VLAN ID in the frame
to 101. Similarly, when <literal>br-int</literal> receives a frame marked with VLAN ID 101 on the port
associated with <literal>int-br-eth1</literal>, it modifies the VLAN ID in the frame
to 1.</para>
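<para>You can inspect the flow rules that perform this translation with
<literal>ovs-ofctl</literal>; the rules installed by the agent rewrite the
VLAN ID with actions such as <literal>mod_vlan_vid</literal>. For
example:
<screen><prompt>#</prompt> <userinput>ovs-ofctl dump-flows br-int</userinput>
<prompt>#</prompt> <userinput>ovs-ofctl dump-flows br-eth1</userinput></screen></para>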
</simplesect>
<simplesect>
<title>Security groups: iptables and Linux bridges</title>
<para>Ideally, the TAP device <literal>vnet0</literal> would be connected directly to
the integration bridge, <literal>br-int</literal>. Unfortunately, this isn't
possible because of how OpenStack security groups are currently implemented.
OpenStack uses iptables rules on the TAP devices such as
<literal>vnet0</literal> to implement security groups, and Open vSwitch
is not compatible with iptables rules that are applied directly on TAP
devices that are connected to an Open vSwitch port.</para>
<para>Networking uses an extra Linux bridge and a veth pair as a workaround for this
issue. Instead of connecting <literal>vnet0</literal> to an Open vSwitch
bridge, it is connected to a Linux bridge,
<literal>qbr<replaceable>XXX</replaceable></literal>. This bridge is
connected to the integration bridge, <literal>br-int</literal>, through the
<literal>(qvb<replaceable>XXX</replaceable>,
qvo<replaceable>XXX</replaceable>)</literal> veth pair.</para>
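<para>On a running compute host you can confirm this wiring with
<literal>brctl</literal>: each <literal>qbr<replaceable>XXX</replaceable></literal>
bridge should list the guest TAP device and the
<literal>qvb<replaceable>XXX</replaceable></literal> end of the veth pair
among its interfaces, and the iptables rules that implement the security
groups reference the TAP device. The device name used below follows the
example above:
<screen><prompt>#</prompt> <userinput>brctl show</userinput>
<prompt>#</prompt> <userinput>iptables-save | grep vnet0</userinput></screen></para>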
</simplesect>
</section>
<section xml:id="under_the_hood_openvswitch_scenario1_network">
<title>Scenario 1: Network host config</title>
<para>The network host runs the neutron-openvswitch-plugin-agent, the
neutron-dhcp-agent, neutron-l3-agent, and neutron-metadata-agent services.</para>
<para>On the network host, assume that eth0 is connected to the external network, and
eth1 is connected to the data network, which leads to the following configuration
in the
<filename>ovs_neutron_plugin.ini</filename> file:
<programlisting language="bash">[ovs]
tenant_network_type = vlan
network_vlan_ranges = physnet2:101:110
integration_bridge = br-int
bridge_mappings = physnet1:br-ex,physnet2:br-eth1</programlisting>
The following figure shows the network devices on the network host:</para>
<mediaobject>
<imageobject>
<imagedata fileref="../common/figures/under-the-hood-scenario-1-ovs-network.png" contentwidth="6in"/>
</imageobject>
</mediaobject>
<para>As on the compute host, there is an Open vSwitch integration bridge
(<literal>br-int</literal>) and an Open vSwitch bridge connected to the data
network (<literal>br-eth1</literal>), connected to each other by a veth pair.
The neutron-openvswitch-plugin-agent configures the ports on both switches to do
VLAN translation.</para>
<para>An additional Open vSwitch bridge, <literal>br-ex</literal>,
connects to the physical interface that is connected to the external network. In
this example, that physical interface is <literal>eth0</literal>.</para>
<note><para>While the integration bridge and the external bridge are connected by
a veth pair <literal>(int-br-ex, phy-br-ex)</literal>, this example uses layer 3
connectivity to route packets from the internal networks to the public network: no
packets traverse that veth pair in this example.</para></note>
<simplesect><title>Open vSwitch internal ports</title>
<para>The network host uses Open vSwitch <emphasis role="italic">internal
ports</emphasis>. Internal ports enable you to assign one or more IP
addresses to an Open vSwitch bridge. In the previous example, the
<literal>br-int</literal> bridge has four internal ports:
<literal>tap<replaceable>XXX</replaceable></literal>,
<literal>qr-<replaceable>YYY</replaceable></literal>,
<literal>qr-<replaceable>ZZZ</replaceable></literal>, and
<literal>tap<replaceable>WWW</replaceable></literal>. Each internal
port has a separate IP address associated with it. An internal port,
<literal>qg-<replaceable>VVV</replaceable></literal>, is on the <literal>br-ex</literal>
bridge.</para>
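<para>As an illustration of what an internal port is, the following
commands create one by hand on a throwaway bridge and assign it an IP
address; the bridge name, port name, and address are arbitrary examples:
<screen><prompt>#</prompt> <userinput>ovs-vsctl add-br br-test</userinput>
<prompt>#</prompt> <userinput>ovs-vsctl add-port br-test int-test -- set Interface int-test type=internal</userinput>
<prompt>#</prompt> <userinput>ip addr add 192.0.2.1/24 dev int-test</userinput>
<prompt>#</prompt> <userinput>ip link set int-test up</userinput></screen></para>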
</simplesect>
<simplesect><title>DHCP agent</title>
<para>By default, the Networking DHCP agent uses a process called dnsmasq to provide
DHCP services to guests. Networking must create an internal port for each
network that requires DHCP services and attach a dnsmasq process to that
port. In the previous example, the
<literal>tap<replaceable>XXX</replaceable></literal> interface is on
<literal>net01_subnet01</literal>, and the
<literal>tap<replaceable>WWW</replaceable></literal> interface is on
<literal>net02_subnet01</literal>.</para>
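<para>To check which dnsmasq process serves which network, you can list the
dnsmasq processes on the network host; each process's command line names
the internal port it listens on:
<screen><prompt>#</prompt> <userinput>ps -ef | grep dnsmasq</userinput></screen></para>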
</simplesect>
<simplesect>
<title>L3 agent (routing)</title>
<para>The Networking L3 agent uses Open vSwitch internal ports to implement routing and
relies on the network host to route the packets across the interfaces. In
this example, the <literal>qr-<replaceable>YYY</replaceable></literal> interface is on
<literal>net01_subnet01</literal> and has the IP address
<literal>192.168.101.1/24</literal>. The <literal>qr-<replaceable>ZZZ</replaceable></literal>
interface is on <literal>net02_subnet01</literal> and has the IP address
<literal>192.168.102.1/24</literal>. The
<literal>qg-<replaceable>VVV</replaceable></literal> interface has
the IP address <literal>10.64.201.254/24</literal>. Because each of these
interfaces is visible to the network host operating system, the network host
routes the packets across the interfaces, as long as an administrator has
enabled IP forwarding.</para>
<para>The L3 agent uses iptables to implement floating IPs by performing network address
translation (NAT).</para>
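<para>For example, an administrator could enable IP forwarding and inspect
the routes and NAT rules described above with the following commands on the
network host. When the L3 agent runs the router inside a network namespace,
as described in the next section, prefix these commands with
<literal>ip netns exec</literal> and the namespace name:
<screen><prompt>#</prompt> <userinput>sysctl -w net.ipv4.ip_forward=1</userinput>
<prompt>#</prompt> <userinput>ip route</userinput>
<prompt>#</prompt> <userinput>iptables -t nat -S</userinput></screen></para>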
</simplesect>
<simplesect>
<title>Overlapping subnets and network namespaces</title>
<para>One problem with using the host to implement routing is that one of the
Networking subnets might overlap with one of the physical networks that the
host uses. For example, if the management network is implemented on
<literal>eth2</literal> and also happens to be on the
<literal>192.168.101.0/24</literal> subnet, routing problems will occur
because the host can't determine whether to send a packet on this subnet to
<literal>qr-<replaceable>YYY</replaceable></literal> or <literal>eth2</literal>. If end users are
permitted to create their own logical networks and subnets, you must design
the system so that such collisions do not occur.</para>
<para>Networking uses Linux <emphasis role="italic">network namespaces </emphasis>to
prevent collisions between the physical networks on the network host, and
the logical networks used by the virtual machines. It also prevents
collisions across different logical networks that are not routed to each
other, as the following scenario shows.</para>
<para>A network namespace is an isolated environment with its own networking stack. A
network namespace has its own network interfaces, routes, and iptables
rules. Consider it a chroot jail, except for networking instead of for a
file system. LXC (Linux containers) use network namespaces to implement
networking virtualization.</para>
<para>Networking creates network namespaces on the network host to avoid subnet
collisions.</para>
<mediaobject>
<imageobject>
<imagedata fileref="../common/figures/under-the-hood-scenario-1-ovs-netns.png" contentwidth="6in"/>
</imageobject>
</mediaobject>
<para>In this example, there are three network namespaces, as shown in the figure above:<itemizedlist>
<listitem>
<para><literal>qdhcp-<replaceable>aaa</replaceable></literal>:
contains the
<literal>tap<replaceable>XXX</replaceable></literal>
interface and the dnsmasq process that listens on that interface
to provide DHCP services for <literal>net01_subnet01</literal>.
This allows overlapping IPs between
<literal>net01_subnet01</literal> and any other subnets on
the network host.</para>
</listitem>
<listitem>
<para><literal>qrouter-<replaceable>bbbb</replaceable></literal>:
contains the
<literal>qr-<replaceable>YYY</replaceable></literal>,
<literal>qr-<replaceable>ZZZ</replaceable></literal>,
and <literal>qg-<replaceable>VVV</replaceable></literal>
interfaces, and the corresponding routes. This namespace
implements <literal>router01</literal> in our example.</para>
</listitem>
<listitem>
<para><literal>qdhcp-<replaceable>ccc</replaceable></literal>:
contains the
<literal>tap<replaceable>WWW</replaceable></literal>
interface and the dnsmasq process that listens on that
interface, to provide DHCP services for
<literal>net02_subnet01</literal>. This allows overlapping
IPs between <literal>net02_subnet01</literal> and any other
subnets on the network host.</para>
</listitem>
</itemizedlist></para>
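<para>You can list these namespaces on the network host and run commands
inside them. The namespace names below are placeholders; the real names are
derived from the network and router UUIDs:
<screen><prompt>#</prompt> <userinput>ip netns list</userinput>
<prompt>#</prompt> <userinput>ip netns exec qrouter-<replaceable>bbbb</replaceable> ip addr</userinput>
<prompt>#</prompt> <userinput>ip netns exec qdhcp-<replaceable>aaa</replaceable> ip addr</userinput></screen></para>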
</simplesect>
</section>
</section>
<section xml:id="under_the_hood_openvswitch_scenario2">
<title>Scenario 2: two tenants, two networks, two routers</title>
<para>In this scenario, tenant A and tenant B each have a
network with one subnet and a router that connects that
network to the public Internet.
</para>
<mediaobject>
<imageobject>
<imagedata fileref="../common/figures/under-the-hood-scenario-2.png" contentwidth="6in"/>
</imageobject>
</mediaobject>
<para>Under the <literal>service</literal> tenant, define the public
network:<screen><prompt>$</prompt> <userinput>tenant=$(keystone tenant-list | awk '/service/ {print $2}')</userinput>
<prompt>$</prompt> <userinput>neutron net-create --tenant-id $tenant public01 \
--provider:network_type flat \
--provider:physical_network physnet1 \
--router:external=True</userinput>
<prompt>$</prompt> <userinput>neutron subnet-create --tenant-id $tenant --name public01_subnet01 \
--gateway 10.64.201.254 public01 10.64.201.0/24 --disable-dhcp</userinput></screen></para>
<para>Under the <literal>tenantA</literal> user tenant, create the tenant router and set
its gateway for the public
network.<screen><prompt>$</prompt> <userinput>tenant=$(keystone tenant-list|awk '/tenantA/ {print $2}')</userinput>
<prompt>$</prompt> <userinput>neutron router-create --tenant-id $tenant router01</userinput>
<prompt>$</prompt> <userinput>neutron router-gateway-set router01 public01</userinput></screen>
Then, define private network <literal>net01</literal> using VLAN ID 101 on the
physical switch, along with its subnet, and connect it to the router.
<screen><prompt>$</prompt> <userinput>neutron net-create --tenant-id $tenant net01 \
--provider:network_type vlan \
--provider:physical_network physnet2 \
--provider:segmentation_id 101</userinput>
<prompt>$</prompt> <userinput>neutron subnet-create --tenant-id $tenant --name net01_subnet01 net01 192.168.101.0/24</userinput>
<prompt>$</prompt> <userinput>neutron router-interface-add router01 net01_subnet01</userinput></screen></para>
<para>Similarly, for <literal>tenantB</literal>, create a router and another network,
using VLAN ID 102 on the physical
switch:<screen><prompt>$</prompt> <userinput>tenant=$(keystone tenant-list|awk '/tenantB/ {print $2}')</userinput>
<prompt>$</prompt> <userinput>neutron router-create --tenant-id $tenant router02</userinput>
<prompt>$</prompt> <userinput>neutron router-gateway-set router02 public01</userinput>
<prompt>$</prompt> <userinput>neutron net-create --tenant-id $tenant net02 \
--provider:network_type vlan \
--provider:physical_network physnet2 \
--provider:segmentation_id 102</userinput>
<prompt>$</prompt> <userinput>neutron subnet-create --tenant-id $tenant --name net02_subnet01 net02 192.168.102.0/24</userinput>
<prompt>$</prompt> <userinput>neutron router-interface-add router02 net02_subnet01</userinput></screen></para>
<section xml:id="under_the_hood_openvswitch_scenario2_compute">
<title>Scenario 2: Compute host config</title>
<para>The following figure shows how to configure Linux networking devices on the Compute host:
</para>
<mediaobject>
<imageobject>
<imagedata fileref="../common/figures/under-the-hood-scenario-2-ovs-compute.png" contentwidth="6in"/>
</imageobject>
</mediaobject>
<note><para>The Compute host configuration resembles the
configuration in scenario 1. However, in scenario 1 a
guest connects to two subnets, while in this scenario the
subnets belong to different tenants.
</para></note>
</section>
<section xml:id="under_the_hood_openvswitch_scenario2_network">
<title>Scenario 2: Network host config</title>
<para>The following figure shows the network devices on the network host for the second
scenario.</para>
<mediaobject>
<imageobject>
<imagedata fileref="../common/figures/under-the-hood-scenario-2-ovs-network.png" contentwidth="6in"/>
</imageobject>
</mediaobject>
<para>In this configuration, the network namespaces are
organized to isolate the two subnets from each other as
shown in the following figure.
</para>
<mediaobject>
<imageobject>
<imagedata fileref="../common/figures/under-the-hood-scenario-2-ovs-netns.png" contentwidth="6in"/>
</imageobject>
</mediaobject>
<para>In this scenario, there are four network namespaces
(<literal>qdhcp-<replaceable>aaa</replaceable></literal>,
<literal>qrouter-<replaceable>bbbb</replaceable></literal>,
<literal>qrouter-<replaceable>cccc</replaceable></literal>, and
<literal>qdhcp-<replaceable>dddd</replaceable></literal>), instead of three.
Because there is no connectivity between the two networks, each router is
implemented by a separate namespace.</para>
</section>
</section>
</section>
<section xml:id="under_the_hood_linuxbridge">
<title>Linux Bridge</title>
<para>This section describes how the Linux Bridge plug-in
implements the Networking abstractions. For information about
DHCP and L3 agents, see <xref
linkend="under_the_hood_openvswitch_scenario1"/>.</para>
<section xml:id="under_the_hood_linuxbridge_configuration">
<title>Configuration</title>
<para>This example uses VLAN isolation on the switches to
isolate tenant networks. This configuration labels the
physical network associated with the public network as
<literal>physnet1</literal>, and the physical network
associated with the data network as <literal>physnet2</literal>,
which leads to the following configuration options in
<filename>linuxbridge_conf.ini</filename>:
<programlisting language="ini">[vlans]
tenant_network_type = vlan
network_vlan_ranges = physnet2:100:110
[linux_bridge]
physical_interface_mappings = physnet2:eth1</programlisting></para>
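<para>After the agent has wired up a network on a host, you can verify the
devices it created. The VLAN device name below assumes a network that uses
VLAN ID 101 on <literal>eth1</literal>, as in the scenario that
follows:
<screen><prompt>#</prompt> <userinput>brctl show</userinput>
<prompt>#</prompt> <userinput>ip -d link show eth1.101</userinput></screen></para>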
</section>
<section xml:id="under_the_hood_linuxbridge_scenario1">
<title>Scenario 1: one tenant, two networks, one router</title>
<para>The first scenario has two private networks (<literal>net01</literal>, and
<literal>net02</literal>), each with one subnet
(<literal>net01_subnet01</literal>: 192.168.101.0/24,
<literal>net02_subnet01</literal>: 192.168.102.0/24).
Both private networks are attached to a router that
connects them to the public network (10.64.201.0/24).</para>
<mediaobject>
<imageobject>
<imagedata fileref="../common/figures/under-the-hood-scenario-1.png" contentwidth="6in"/>
</imageobject>
</mediaobject>
<para>Under the <literal>service</literal> tenant, create the shared router, define the
public network, and set it as the default gateway of the
router:<screen><prompt>$</prompt> <userinput>tenant=$(keystone tenant-list | awk '/service/ {print $2}')</userinput>
<prompt>$</prompt> <userinput>neutron router-create router01</userinput>
<prompt>$</prompt> <userinput>neutron net-create --tenant-id $tenant public01 \
--provider:network_type flat \
--provider:physical_network physnet1 \
--router:external=True</userinput>
<prompt>$</prompt> <userinput>neutron subnet-create --tenant-id $tenant --name public01_subnet01 \
--gateway 10.64.201.254 public01 10.64.201.0/24 --disable-dhcp</userinput>
<prompt>$</prompt> <userinput>neutron router-gateway-set router01 public01</userinput></screen></para>
<para>Under the <literal>demo</literal> user tenant, create the private network
<literal>net01</literal> and corresponding subnet, and connect it to the
<literal>router01</literal> router. Configure it to use VLAN ID 101 on the
physical
switch.<screen><prompt>$</prompt> <userinput>tenant=$(keystone tenant-list|awk '/demo/ {print $2}')</userinput>
<prompt>$</prompt> <userinput>neutron net-create --tenant-id $tenant net01 \
--provider:network_type vlan \
--provider:physical_network physnet2 \
--provider:segmentation_id 101</userinput>
<prompt>$</prompt> <userinput>neutron subnet-create --tenant-id $tenant --name net01_subnet01 net01 192.168.101.0/24</userinput>
<prompt>$</prompt> <userinput>neutron router-interface-add router01 net01_subnet01</userinput></screen></para>
<para>Similarly, for <literal>net02</literal>, using VLAN ID 102 on the physical
switch:<screen><prompt>$</prompt> <userinput>neutron net-create --tenant-id $tenant net02 \
--provider:network_type vlan \
--provider:physical_network physnet2 \
--provider:segmentation_id 102</userinput>
<prompt>$</prompt> <userinput>neutron subnet-create --tenant-id $tenant --name net02_subnet01 net02 192.168.102.0/24</userinput>
<prompt>$</prompt> <userinput>neutron router-interface-add router01 net02_subnet01</userinput></screen></para>
<section xml:id="under_the_hood_linuxbridge_scenario1_compute">
<title>Scenario 1: Compute host config</title>
<para>The following figure shows how to configure the various Linux networking devices on the
compute host.
</para>
<mediaobject>
<imageobject>
<imagedata fileref="../common/figures/under-the-hood-scenario-1-linuxbridge-compute.png" contentwidth="6in"/>
</imageobject>
</mediaobject>
<simplesect>
<title>Types of network devices</title>
<note><para>There are three distinct types of virtual networking devices: TAP devices,
VLAN devices, and Linux bridges. For an ethernet frame to travel from
<literal>eth0</literal> of virtual machine <literal>vm01</literal> to the
physical network, it must pass through four devices inside of the host: TAP
<literal>vnet0</literal>, Linux bridge
<literal>brq<replaceable>XXX</replaceable></literal>, VLAN device
<literal>eth1.101</literal>, and, finally, the physical network interface card
<literal>eth1</literal>.</para></note>
<para>A <emphasis role="italic">TAP device</emphasis>, such as <literal>vnet0</literal>
is how hypervisors such as KVM and Xen implement a virtual network interface card
(typically called a VIF or vNIC). An ethernet frame sent to a TAP device is received
by the guest operating system.</para>
<para>A <emphasis role="italic">VLAN device</emphasis> is associated with a VLAN tag
attaches to an existing interface device and adds or removes VLAN tags. In the
preceding example, VLAN device <literal>eth1.101</literal> is associated with VLAN ID
101 and is attached to interface <literal>eth1</literal>. Packets received from the
outside by <literal>eth1</literal> with VLAN tag 101 will be passed to device
<literal>eth1.101</literal>, which will then strip the tag. In the other
direction, any ethernet frame sent directly to eth1.101 will have VLAN tag 101 added
and will be forward to <literal>eth1</literal> for sending out to the
network.</para>
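<para>The Linux Bridge agent creates these VLAN devices automatically. For
illustration only, an equivalent device could be created by hand with
iproute2:
<screen><prompt>#</prompt> <userinput>ip link add link eth1 name eth1.101 type vlan id 101</userinput>
<prompt>#</prompt> <userinput>ip link set eth1.101 up</userinput></screen></para>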
<para>A <emphasis role="italic">Linux bridge</emphasis> behaves like a hub: you can
connect multiple (physical or virtual) network interfaces devices to a Linux bridge.
Any ethernet frames that come in from one interface attached to the bridge is
transmitted to all of the other devices.</para>
</simplesect>
</section>
<section xml:id="under_the_hood_linuxbridge_scenario1_network">
<title>Scenario 1: Network host config</title>
<para>The following figure shows the network devices on the network host.</para>
<mediaobject>
<imageobject>
<imagedata fileref="../common/figures/under-the-hood-scenario-1-linuxbridge-network.png" contentwidth="6in"/>
</imageobject>
</mediaobject>
<para>The following figure shows how the Linux Bridge plug-in uses network namespaces to
provide isolation.</para><note><para>veth pairs form connections between the
Linux bridges and the network namespaces.</para></note><mediaobject>
<imageobject>
<imagedata fileref="../common/figures/under-the-hood-scenario-1-linuxbridge-netns.png" contentwidth="6in"/>
</imageobject>
</mediaobject>
</section>
</section>
<section xml:id="under_the_hood_linuxbridge_scenario2">
<title>Scenario 2: two tenants, two networks, two routers</title>
<para>The second scenario has two tenants (A, B). Each tenant has a network with
one subnet and a router that connects that network to the public
Internet.</para>
<mediaobject>
<imageobject>
<imagedata fileref="../common/figures/under-the-hood-scenario-2.png" contentwidth="6in"/>
</imageobject>
</mediaobject>
<para>Under the <literal>service</literal> tenant, define
the public network:<screen><prompt>$</prompt> <userinput>tenant=$(keystone tenant-list | awk '/service/ {print $2}')</userinput>
<prompt>$</prompt> <userinput>neutron net-create --tenant-id $tenant public01 \
--provider:network_type flat \
--provider:physical_network physnet1 \
--router:external=True</userinput>
<prompt>$</prompt> <userinput>neutron subnet-create --tenant-id $tenant --name public01_subnet01 \
--gateway 10.64.201.254 public01 10.64.201.0/24 --disable-dhcp</userinput></screen></para>
<para>Under the <literal>tenantA</literal> user tenant, create the tenant router and set
its gateway for the public
network.<screen><prompt>$</prompt> <userinput>tenant=$(keystone tenant-list|awk '/tenantA/ {print $2}')</userinput>
<prompt>$</prompt> <userinput>neutron router-create --tenant-id $tenant router01</userinput>
<prompt>$</prompt> <userinput>neutron router-gateway-set router01 public01</userinput></screen>
Then, define private network <literal>net01</literal> using VLAN ID 101 on the
physical switch, along with its subnet, and connect it to the router.
<screen><prompt>$</prompt> <userinput>neutron net-create --tenant-id $tenant net01 \
--provider:network_type vlan \
--provider:physical_network physnet2 \
--provider:segmentation_id 101</userinput>
<prompt>$</prompt> <userinput>neutron subnet-create --tenant-id $tenant --name net01_subnet01 net01 192.168.101.0/24</userinput>
<prompt>$</prompt> <userinput>neutron router-interface-add router01 net01_subnet01</userinput></screen></para>
<para>Similarly, for <literal>tenantB</literal>, create a router and another network,
using VLAN ID 102 on the physical
switch:<screen><prompt>$</prompt> <userinput>tenant=$(keystone tenant-list|awk '/tenantB/ {print $2}')</userinput>
<prompt>$</prompt> <userinput>neutron router-create --tenant-id $tenant router02</userinput>
<prompt>$</prompt> <userinput>neutron router-gateway-set router02 public01</userinput>
<prompt>$</prompt> <userinput>neutron net-create --tenant-id $tenant net02 \
--provider:network_type vlan \
--provider:physical_network physnet2 \
--provider:segmentation_id 102</userinput>
<prompt>$</prompt> <userinput>neutron subnet-create --tenant-id $tenant --name net02_subnet01 net02 192.168.102.0/24</userinput>
<prompt>$</prompt> <userinput>neutron router-interface-add router02 net02_subnet01</userinput></screen></para>
<section xml:id="under_the_hood_linuxbridge_scenario2_compute">
<title>Scenario 2: Compute host config</title>
<para>The following figure shows how the various Linux
networking devices would be configured on the Compute host
under this scenario.</para>
<mediaobject>
<imageobject>
<imagedata fileref="../common/figures/under-the-hood-scenario-2-linuxbridge-compute.png" contentwidth="6in"/>
</imageobject>
</mediaobject>
<note><para>The configuration on the Compute host is very
similar to the configuration in scenario 1. The only real
difference is that scenario 1 had a guest connected to two
subnets, and in this scenario the subnets belong to
different tenants.</para></note>
</section>
<section xml:id="under_the_hood_linuxbridge_scenario2_network">
<title>Scenario 2: Network host config</title>
<para>The following figure shows the network devices on the
network host for the second scenario.</para>
<mediaobject>
<imageobject>
<imagedata fileref="../common/figures/under-the-hood-scenario-2-linuxbridge-network.png" contentwidth="6in"/>
</imageobject>
</mediaobject>
<para>The main difference between the configuration in this scenario and the previous one
is the organization of the network namespaces, in order to provide isolation
across the two subnets, as shown in the following figure.</para>
<mediaobject>
<imageobject>
<imagedata fileref="../common/figures/under-the-hood-scenario-2-linuxbridge-netns.png" contentwidth="6in"/>
</imageobject>
</mediaobject>
<para>In this scenario, there are four network namespaces
(<literal>qdhcp-<replaceable>aaa</replaceable></literal>,
<literal>qrouter-<replaceable>bbbb</replaceable></literal>,
<literal>qrouter-<replaceable>cccc</replaceable></literal>, and
<literal>qdhcp-<replaceable>dddd</replaceable></literal>), instead of three.
Each router is implemented by a separate namespace,
since there is no connectivity between the two networks.</para>
</section>
</section>
</section>
<section xml:id="ml2_scenarios">
<title>ML2</title>
<para>The Modular Layer 2 (ML2) plug-in allows OpenStack Networking
to simultaneously utilize the variety of layer 2 networking
technologies found in complex real-world data centers.
It currently includes drivers for the local, flat, VLAN,
GRE, and VXLAN network types and works with the existing
<emphasis>Open vSwitch</emphasis>, <emphasis>Linux Bridge</emphasis>,
and <emphasis>Hyper-V</emphasis> L2 agents. The
<emphasis>ML2</emphasis> plug-in can be extended through
mechanism drivers, allowing multiple mechanisms to be used
simultaneously. This section describes different
<emphasis>ML2</emphasis> plug-in and agent configurations with
different type drivers and mechanism drivers.</para>
<section xml:id="ml2_l2pop_scenarios">
<title>ML2 with L2 population mechanism driver</title>
<para>The current <emphasis>Open vSwitch</emphasis> and
<emphasis>Linux Bridge</emphasis> tunneling implementations
broadcast to every agent, even if they do not host the
corresponding network, as illustrated below.</para>
<mediaobject>
<imageobject>
<imagedata fileref="../common/figures/ml2_without_l2pop_full_mesh.png"
contentwidth="6in"/>
</imageobject>
</mediaobject>
<para>Because broadcast emulation on an overlay network is costly, it may
be better to avoid its use for MAC learning and ARP
resolution. This requires the use of proxy ARP on the agent
to answer VM requests and to prepopulate the forwarding tables.
Currently, only the <emphasis>Linux Bridge</emphasis> agent
implements an ARP proxy. The prepopulation limits L2
broadcasts in the overlay; however, it may still be necessary
to provide broadcast emulation. This is achieved by sending
broadcast packets via unicast only to the relevant agents,
as illustrated below.<mediaobject>
<imageobject>
<imagedata fileref="../common/figures/ml2_without_l2pop_partial_mesh.png"
contentwidth="6in"/>
</imageobject>
</mediaobject>The partial-mesh is available with the
<emphasis>Open vSwitch</emphasis> and <emphasis>Linux
Bridge</emphasis> agents. The following scenarios will
use the L2 population mechanism driver with an
<emphasis>Open vSwitch</emphasis> agent and a
<emphasis>Linux Bridge</emphasis> agent. Enable the
l2 population driver by adding it to the list of
mechanism drivers. In addition, a tunneling driver must
be selected. Supported options are GRE, VXLAN, or a
combination of both. Configuration settings are enabled in
<filename>ml2_conf.ini</filename>:<programlisting language="ini">[ml2]
type_drivers = local,flat,vlan,gre,vxlan
mechanism_drivers = openvswitch,linuxbridge,l2population</programlisting></para>
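<para>For example, a sketch of an <filename>ml2_conf.ini</filename> that
enables GRE and VXLAN tenant networks together with the l2 population
driver might look like the following; the tunnel ID and VNI ranges shown
are arbitrary examples:
<programlisting language="ini">[ml2]
type_drivers = local,flat,vlan,gre,vxlan
tenant_network_types = gre,vxlan
mechanism_drivers = openvswitch,linuxbridge,l2population
[ml2_type_gre]
tunnel_id_ranges = 1:1000
[ml2_type_vxlan]
vni_ranges = 1001:2000</programlisting></para>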
</section>
<section xml:id="ml2_l2pop_ovs_scenarios">
<title>Scenario 1: L2 population with Open vSwitch agent</title>
<para>Enable the l2 population extension in the
<emphasis>Open vSwitch</emphasis> agent, and configure the
<option>local_ip</option> and <option>tunnel_types</option>
parameters in the <filename>ml2_conf.ini</filename> file:
<programlisting language="ini">[ovs]
local_ip = <replaceable>192.168.1.10</replaceable>
[agent]
tunnel_types = <replaceable>gre</replaceable>,<replaceable>vxlan</replaceable>
l2_population = True</programlisting></para>
</section>
<section xml:id="ml2_l2pop_lb_scenarios">
<title>Scenario 2: L2 population with <emphasis>Linux Bridge</emphasis> agent</title>
<para>Enable the l2 population extension on the
<emphasis>Linux Bridge</emphasis> agent. Enable VXLAN and
configure the <option>local_ip</option> parameter in the
<filename>ml2_conf.ini</filename> file:
<programlisting language="ini">[vxlan]
enable_vxlan = True
local_ip = <replaceable>192.168.1.10</replaceable>
l2_population = True</programlisting></para>
</section>
<section xml:id="ml2_l2_security_group">
<title>Enable security group API</title>
<para>Since the ML2 plug-in can concurrently support
different L2 agents (or other mechanisms) with different
configuration files, the actual <option>firewall_driver</option>
value in the <filename>ml2_conf.ini</filename>
file does not matter to the server, but
<option>firewall_driver</option> must be set to a
non-default value in the ML2 configuration to enable the
securitygroup extension. To enable the securitygroup API, edit
the <filename>ml2_conf.ini</filename>
file:<programlisting language="ini">[securitygroup]
firewall_driver = dummy</programlisting>
Each L2 agent configuration file (such as <filename>ovs_neutron_plugin.ini</filename> or
<filename>linuxbridge_conf.ini</filename>) should contain the appropriate
<option>firewall_driver</option> value for that agent. To disable the
securitygroup API, edit the <filename>ml2_conf.ini</filename>
file:<programlisting language="ini">[securitygroup]
firewall_driver = neutron.agent.firewall.NoopFirewallDriver</programlisting>
In this case, each L2 agent configuration file (such as <filename>ovs_neutron_plugin.ini</filename> or
<filename>linuxbridge_conf.ini</filename>) should also set
<option>firewall_driver</option> to this value for that
agent.</para>
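<para>For example, the values commonly used at the time of writing are the
hybrid iptables driver for the <emphasis>Open vSwitch</emphasis> agent and
the plain iptables driver for the <emphasis>Linux Bridge</emphasis> agent;
check the documentation for your release for the exact class paths:
<programlisting language="ini"># ovs_neutron_plugin.ini
[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver</programlisting>
<programlisting language="ini"># linuxbridge_conf.ini
[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver</programlisting></para>
</section>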
</section>
</section>