<?xml version="1.0" encoding="UTF-8"?>
<chapter xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
xml:id="ch_networking">
<title>Networking</title>
<para>By understanding the available networking configuration options you can design the best
configuration for your OpenStack Compute instances.</para>
<section xml:id="networking-options">
<title>Networking Options</title>
<para>This section offers a brief overview of each concept in networking for Compute. </para>
<para>In Compute, users organize their cloud resources in projects. A Compute project
consists of a number of VM instances created by a user. Compute assigns each VM
instance a private IP address. (Currently, Nova only supports Linux bridge
networking, which allows the virtual interfaces to connect to the outside network through
the physical interface.)</para>
<para>The Network Controller provides virtual networks to enable compute servers to interact
with each other and with the public network.</para>
<para>Currently, Nova supports three kinds of networks, implemented as three “Network
Manager” types: Flat Network Manager, Flat DHCP Network Manager, and VLAN
Network Manager. The three kinds of networks can co-exist in a cloud system. However,
because you cannot yet select the type of network for a given project, you cannot configure
more than one type of network in a given Compute installation.</para>
<para>Nova has a concept of Fixed IPs and Floating IPs. Fixed IPs are assigned to an
instance on creation and stay the same until the instance is explicitly terminated.
Floating IPs are IP addresses that can be dynamically associated with an instance. This
address can be disassociated and associated with another instance at any time. A user
can reserve a floating IP for their project. </para>
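<para>For example, a user might reserve a floating IP for the project and attach it to a
running instance with the nova client; the full workflow is covered later in this
chapter:</para>
<literallayout class="monospaced">nova floating-ip-create
nova add-floating-ip &lt;server&gt; &lt;floating_ip&gt;</literallayout>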
<para>In Flat Mode, a network administrator specifies a subnet. The IP addresses for VM
instances are allocated from the subnet and then injected into the image on launch. Each
instance receives a fixed IP address from the pool of available addresses. A network
administrator must configure the Linux networking bridge (named br100) both on the
network controller hosting the network and on the compute nodes hosting the
instances. All instances in the system are attached to the same bridge, which is configured
manually by the network administrator.</para>
<para>
<note>
<para>The configuration injection currently only works on Linux-style systems that
keep networking configuration in /etc/network/interfaces.</para>
</note>
</para>
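<para>As a rough illustration of what the injected configuration might look like (the exact
addresses come from the subnet the administrator defined; the values below are made up
for this example):</para>
<programlisting>
# /etc/network/interfaces inside the instance (illustrative)
auto eth0
iface eth0 inet static
    address 192.168.0.12
    netmask 255.255.255.0
    broadcast 192.168.0.255
    gateway 192.168.0.1
</programlisting>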
<para>In Flat DHCP Mode, you start a DHCP server to hand out IP addresses to VM instances
from the specified subnet, in addition to manually configuring the networking bridge. IP
addresses for VM instances are allocated from a subnet specified by the network
administrator. </para>
<para>As in Flat Mode, all instances are attached to a single bridge on the compute node. In
addition, a DHCP server configures the instances. In this mode, Compute does a
bit more configuration in that it attempts to bridge into an ethernet device (eth0 by
default). It also runs dnsmasq as a DHCP server listening on this bridge. Instances
receive their fixed IPs by doing a DHCPDISCOVER. </para>
<para>In both flat modes, the network nodes do not act as a default gateway. Instances are
given public IP addresses. Compute nodes have iptables/ebtables entries created per
project and instance to protect against IP/MAC address spoofing and ARP poisoning. </para>
<para>VLAN Network Mode is the default mode for OpenStack Compute. In this mode, Compute
creates a VLAN and bridge for each project. For multiple machine installation, the VLAN
Network Mode requires a switch that supports VLAN tagging (IEEE 802.1Q). The project
gets a range of private IPs that are only accessible from inside the VLAN. In order for
a user to access the instances in their project, a special VPN instance (code named
cloudpipe) needs to be created. Compute generates a certificate and key for the user to
access the VPN and starts the VPN automatically. It provides a private network segment
for each project's instances that can be accessed via a dedicated VPN connection from
the Internet. In this mode, each project gets its own VLAN, Linux networking bridge, and
subnet. </para>
<para>The subnets are specified by the network administrator, and are assigned dynamically
to a project when required. A DHCP Server is started for each VLAN to pass out IP
addresses to VM instances from the subnet assigned to the project. All instances
belonging to one project are bridged into the same VLAN for that project. OpenStack
Compute creates the Linux networking bridges and VLANs when required.</para></section>
<section xml:id="configuring-networking-on-the-compute-node">
<title>Configuring Networking on the Compute Node</title>
<para>To configure the Compute node's networking for the VM images, the overall steps are:</para>
<orderedlist>
<listitem>
<para>Set the "--network-manager" flag in nova.conf.</para>
</listitem>
<listitem>
<para>Use the <code>nova-manage network create label CIDR n n</code> command
to create the subnet that the VMs reside on.</para>
</listitem>
<listitem>
<para>Integrate the bridge with your network. </para>
</listitem>
</orderedlist>
<para>By default, Compute uses the VLAN Network Mode. You choose the networking mode for your
virtual instances in the nova.conf file. Here are the three possible options: </para>
<itemizedlist>
<listitem>
<para>--network_manager=nova.network.manager.FlatManager</para>
<para>Simple, non-VLAN networking</para>
</listitem>
<listitem>
<para>--network_manager=nova.network.manager.FlatDHCPManager</para>
<para>Flat networking with DHCP; you must set a bridge using the
--flat_network_bridge flag</para>
</listitem>
<listitem>
<para>--network_manager=nova.network.manager.VlanManager</para>
<para>VLAN networking with DHCP. This is the default if no network manager is
defined in nova.conf. </para>
</listitem>
</itemizedlist>
<para>Also, when you issue the nova-manage network create command, it uses the settings from
the nova.conf flag file. Use the following command to create the subnet that your VMs
will run on:
<literallayout class="monospaced">nova-manage network create private 192.168.0.0/24 1 256 </literallayout>
</para>
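<para>In this example, which follows the <code>nova-manage network create label CIDR n n</code>
form shown above, the arguments are, in order: a label for the network (private), the fixed
IP range in CIDR notation (192.168.0.0/24), the number of networks to create (1), and the
number of IP addresses per network (256). The sketch below restates the same command with
those assumptions called out; adjust the values for your own environment:</para>
<literallayout class="monospaced"># label  CIDR            networks  IPs-per-network
nova-manage network create private 192.168.0.0/24 1 256</literallayout>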
<section xml:id="configuring-flat-networking"><title>Configuring Flat Networking</title>
<para>FlatNetworking uses ethernet adapters configured as bridges to allow network
traffic to transit between all the various nodes. This setup can be done with a
single adapter on the physical host, or with multiple adapters. This option does not require a
switch that does VLAN tagging as VLAN networking does, and is a common choice for a
development installation or proof of concept. When you choose Flat networking, Nova does
not manage networking at all. Instead, IP addresses are injected into the instance
via the file system (or passed in via a guest agent). Metadata forwarding must be
configured manually on the gateway if it is required within your network. </para>
<para>To configure flat networking, ensure that your nova.conf file contains the
line:</para>
<para>
<literallayout>--network_manager=nova.network.manager.FlatManager</literallayout>
</para>
<para>Compute defaults to a bridge device named br100. The bridge name is stored in the Nova
database, so you can change it by modifying the entry in
the database. Consult the diagrams for additional configuration options.</para>
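<para>For example, a minimal sketch of such an update, assuming the network you want to
change has id 1 in the networks table (check with <code>select id,bridge from networks;</code>
first):</para>
<literallayout class="monospaced">update networks set bridge = 'br200' where id = 1;</literallayout>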
<para>In any setup with FlatNetworking (either Flat or FlatDHCP), the host running
nova-network is responsible for forwarding traffic from the private network
configured with the --fixed_range= directive in nova.conf and the
--flat_network_bridge setting. This host needs to have br100 configured and talking
to any other nodes that are hosting VMs. With either of the Flat Networking options,
the default gateway for the virtual machines is set to the host which is running
nova-network. </para>
<para>Set the compute node's external IP address to be on the bridge and add eth0 to
that bridge. To do this, edit your network interfaces configuration to look like the
following example: </para>
<para>
<programlisting>
# The loopback network interface
auto lo
iface lo inet loopback

# Networking for OpenStack Compute
auto br100
iface br100 inet dhcp
    bridge_ports eth0
    bridge_stp off
    bridge_maxwait 0
    bridge_fd 0
</programlisting>
</para>
<para>Next, restart networking to apply the changes: <code>sudo /etc/init.d/networking
restart</code></para>
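<para>To confirm that the bridge came up with eth0 attached, you can, for example, list the
bridges (this assumes the bridge-utils package, which provides brctl, is installed):</para>
<literallayout class="monospaced">brctl show</literallayout>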
<para>For an all-in-one development setup, this diagram represents the network
setup.</para>
<para><figure><title>Flat network, all-in-one server installation </title><mediaobject>
<imageobject>
<imagedata scale="80" fileref="figures/FlatNetworkSingleInterfaceAllInOne.png"/>
</imageobject>
</mediaobject></figure></para>
<para>For multiple compute nodes with a single network adapter, which you can use for
smoke testing or a proof of concept, this diagram represents the network
setup.</para>
<figure>
<title>Flat network, single interface, multiple servers</title>
<mediaobject>
<imageobject>
<imagedata scale="80" fileref="figures/FlatNetworkSingleInterface.png"/>
</imageobject>
</mediaobject>
</figure>
<para>For multiple compute nodes with multiple network adapters, this diagram
represents the network setup. You may want to use this setup for separate admin and
data traffic.</para>
<figure>
<title>Flat network, multiple interfaces, multiple servers</title>
<mediaobject>
<imageobject>
<imagedata scale="80" fileref="figures/FlatNetworkMultInterface.png"/>
</imageobject>
</mediaobject>
</figure>
</section>
<section xml:id="configuring-flat-dhcp-networking">
<title>Configuring Flat DHCP Networking</title><para>With Flat DHCP, the host running nova-network acts as the gateway to the virtual nodes. You
can run one nova-network per cluster. Set the "--network_host" flag in the nova.conf
stored on each nova-compute node so that it knows which host nova-network is running
on and can communicate with it. You must also set the
"--flat_network_bridge" setting to the name of the bridge (no default is set for
it). The nova-network service tracks leases and releases in the database, so it
knows if a VM instance has stopped properly configuring via DHCP. Lastly, it sets up
iptables rules to allow the VMs to communicate with the outside world and to contact a
special metadata server to retrieve information from the cloud.</para>
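<para>As a minimal sketch, the relevant lines on a nova-compute node might look like the
following (the host address shown is an example; substitute the address of the host that
actually runs nova-network):</para>
<literallayout class="monospaced">--network_manager=nova.network.manager.FlatDHCPManager
--network_host=192.168.0.10
--flat_network_bridge=br100</literallayout>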
<para>Compute hosts in the FlatDHCP model are responsible for bringing up a matching
bridge and bridging the VM tap devices into the same ethernet device that the
network host is on. The compute hosts do not need an IP address on the VM network,
because the bridging puts the VMs and the network host on the same logical network.
When a VM boots, it sends out DHCP packets, and the DHCP server on the network
host responds with the instance's assigned IP address.</para>
<para>Visually, the setup looks like the diagram below:</para>
<figure>
<title>Flat DHCP network, multiple interfaces, multiple servers</title>
<mediaobject>
<imageobject>
<imagedata scale="50" fileref="figures/flatdchp-net.jpg"/>
</imageobject>
</mediaobject>
</figure>
<para>FlatDHCP doesn't create VLANs; it creates a bridge. This bridge works just fine on
a single host, but when there are multiple hosts, traffic needs a way to get out of
the bridge onto a physical interface. </para>
<para>Be careful when setting "--flat_interface": if you specify an interface that
already has an IP address, it will break, and if this is the interface you are connecting
through with SSH, you cannot fix it unless you have IPMI or console access. In FlatDHCP
mode, the setting for "--network_size" should be the number of IPs in the entire fixed
range. If you are using a /12 in CIDR notation, that is 2^20, or
1,048,576 IP addresses. Be aware that it will take a very long time to create
your initial network, because an entry for each IP is created in the database. </para>
<para>If you have an unused interface on your hosts that has connectivity but no IP
address, you can simply tell FlatDHCP to bridge into the interface by specifying
"--flat_interface=&lt;interface>" in your flagfile. The network host will
automatically add the gateway IP to this bridge. You can also add the interface to
br100 manually and not set flat_interface. If this is the case for you, edit your
nova.conf file to contain the following lines: </para>
<para>
<programlisting>
--dhcpbridge_flagfile=/etc/nova/nova.conf
--dhcpbridge=/usr/bin/nova-dhcpbridge
--network_manager=nova.network.manager.FlatDHCPManager
--flat_network_dhcp_start=10.0.0.2
--flat_network_bridge=br100
--flat_interface=eth2
--flat_injected=False
--public_interface=eth0
</programlisting>
</para>
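<para>If you take the manual-bridging route instead of setting --flat_interface, the
network interfaces configuration on the host might look roughly like the following
sketch. It assumes eth0 carries the public address and eth2 is the unused, unaddressed
interface that is enslaved to br100; adapt the device names to your hardware:</para>
<programlisting>
# Public interface
auto eth0
iface eth0 inet dhcp

# Bridge for the VM network, enslaving the unused interface eth2
auto br100
iface br100 inet manual
    bridge_ports eth2
    bridge_stp off
    bridge_fd 0
</programlisting>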
<para>Integrate your network interfaces to match this configuration.</para></section>
<section xml:id="outbound-traffic-flow-with-any-flat-networking"><title>Outbound Traffic Flow with Any Flat Networking</title>
<para>
In any setup with FlatNetworking, the host running nova-network is responsible for forwarding traffic from the private network configured with the "--fixed_range=..." directive in nova.conf. This host needs to have br100 configured and talking to any other nodes that are hosting VMs. With either of the Flat Networking options, the default gateway for the virtual machines is set to the host which is running nova-network.
</para>
<para>
When a virtual machine sends traffic out to the public networks, it sends it first to its default gateway, which is where nova-network is configured.
</para>
<figure>
<title>Single adaptor hosts, first route</title>
<mediaobject>
<imageobject>
<imagedata scale="80" fileref="figures/SingleInterfaceOutbound_1.png"/>
</imageobject>
</mediaobject>
</figure>
<para>Next, the host on which nova-network is configured acts as a router and forwards the traffic out to the Internet.</para>
<figure>
<title>Single adaptor hosts, second route</title>
<mediaobject>
<imageobject>
<imagedata scale="80" fileref="figures/SingleInterfaceOutbound_2.png"/>
</imageobject>
</mediaobject>
</figure>
<warning><para>If you're using a single interface, then that interface (often eth0) needs to be set into promiscuous mode for the forwarding to happen correctly. This does not appear to be needed if you're running with physical hosts that have two interfaces in use.</para></warning>
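<para>For example, you can put the interface into promiscuous mode with a standard
iproute2 command (add it to your interface configuration if you need it to persist
across reboots):</para>
<literallayout class="monospaced">sudo ip link set eth0 promisc on</literallayout>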
</section>
<section xml:id="configuring-vlan-networking">
<title>Configuring VLAN Networking</title>
<para>In some networking environments, you may have a large IP space which is cut up
into smaller subnets. The smaller subnets are then trunked together at the switch
level (dividing layer 3 by layer 2) so that all machines in the larger IP space can
communicate. The purpose of this is generally to control the size of broadcast
domains.</para>
<para>Using projects as a way to logically separate each VLAN, we can set up our cloud
in this environment. Please note that you must have IP forwarding enabled for this
network mode to work.</para>
<para>Obtain the parameters for each network. You may need to ask a network administrator for this information, including netmask, broadcast, gateway, ethernet device, and VLAN ID.</para>
<para>You need to have networking hardware that supports VLAN tagging.</para>
<para>Please note that currently eth0 is hardcoded as the vlan_interface in the default flags. If you need to attach your bridges to a device other than eth0, you will need to add the following flag to /etc/nova/nova.conf:</para>
<literallayout>--vlan_interface=eth1</literallayout>
<para>In VLAN mode, the setting for --network_size is the number of IPs per project, as
opposed to FlatDHCP mode, where --network_size indicates the number of IPs in the
entire fixed range. For VLAN, the other setting in nova.conf that affects networking is
--fixed_range, the space that is divided up into subnets of
--network_size.</para>
<para>VLAN is the default networking mode for Compute, so if you have no
--network_manager entry in your nova.conf file, you are set up for VLAN. To set your nova.conf file to VLAN, use this flag in /etc/nova/nova.conf:</para>
<literallayout>--network_manager=nova.network.manager.VlanManager</literallayout>
<para>For the purposes of this example walk-through, we will use the following settings. These are intentionally complex in an attempt to cover most situations:</para>
<itemizedlist>
<listitem><para>VLANs: 171, 172, 173 and
174</para></listitem>
<listitem><para>IP Blocks: 10.1.171.0/24,
10.1.172.0/24, 10.1.173.0/24 and 10.1.174.0/24</para></listitem>
<listitem><para>Each VLAN maps to its corresponding /24 (171 = 10.1.171.0/24, etc)</para></listitem>
<listitem><para>Each VLAN will get its own
bridge device, which is in the format br_$VLANID</para></listitem>
<listitem><para>Each /24 has an upstream
default gateway on .1</para></listitem>
<listitem><para>The first 6 IPs in each /24
are reserved</para></listitem>
</itemizedlist>
<para>First, create the networks that Compute can pull from using nova-manage commands:</para>
<literallayout class="monospaced">nova-manage --flagfile=/etc/nova/nova.conf network create private 10.1.171.0/24 1 256
nova-manage --flagfile=/etc/nova/nova.conf network create private 10.1.172.0/24 1 256
nova-manage --flagfile=/etc/nova/nova.conf network create private 10.1.173.0/24 1 256
nova-manage --flagfile=/etc/nova/nova.conf network create private 10.1.174.0/24 1 256</literallayout>
<para>Log in to the nova database to determine the network ID assigned to each VLAN:</para>
<literallayout class="monospaced">select id,cidr from networks;</literallayout>
<para>Update the DB to match your network settings. The following script will generate SQL based on the predetermined settings for this example. <emphasis>You will need to modify this database update to fit your environment.</emphasis></para>
<programlisting>
#!/bin/bash
# Generate vlan.sql with the updates for one VLAN/network pair.
if [ -z "$1" ]; then
    echo "You need to specify the vlan to modify"
    exit 1
fi
if [ -z "$2" ]; then
    echo "You need to specify a network id number (check the DB for the network you want to update)"
    exit 1
fi
VLAN=$1
ID=$2
cat &gt; vlan.sql &lt;&lt; __EOF_
update networks set vlan = '$VLAN' where id = $ID;
update networks set bridge = 'br_$VLAN' where id = $ID;
update networks set gateway = '10.1.$VLAN.7' where id = $ID;
update networks set dhcp_start = '10.1.$VLAN.8' where id = $ID;
update fixed_ips set reserved = 1 where address in ('10.1.$VLAN.1','10.1.$VLAN.2','10.1.$VLAN.3','10.1.$VLAN.4','10.1.$VLAN.5','10.1.$VLAN.6','10.1.$VLAN.7');
__EOF_
</programlisting>
<para>After verifying that the above SQL will work for your environment, run it against the nova database, once for every VLAN you have in the environment.</para>
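<para>For example, assuming you saved the script above as gen_vlan_sql.sh (the file name is
arbitrary), you could generate and apply the SQL for VLAN 171 and network id 1 like this,
then repeat for the remaining VLANs:</para>
<literallayout class="monospaced">bash gen_vlan_sql.sh 171 1
mysql nova &lt; vlan.sql</literallayout>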
<para>Next, create an admin user to act as the project manager for the Compute project:</para>
<literallayout class="monospaced">nova-manage --flagfile=/etc/nova/nova.conf user admin $username</literallayout>
<para>Then create a project and assign that user as the admin user:</para>
<literallayout class="monospaced">nova-manage --flagfile=/etc/nova/nova.conf project create $projectname $username</literallayout>
<para>Finally, get the credentials for the user just created, which also assigns
one of the networks to this project:</para>
<literallayout class="monospaced">nova-manage --flagfile=/etc/nova/nova.conf project zipfile $projectname $username</literallayout>
<para>When you start nova-network, the bridge devices and associated VLAN tags will be created. When you create a new VM you must determine (either manually or programmatically) which VLAN it should be a part of, and start the VM in the corresponding project.</para>
<para>In certain cases, the network manager may not properly tear down bridges and VLANs when it is stopped. If you attempt to restart the network manager and it does not start, check the logs for errors indicating that a bridge device already exists. If this is the case, you will likely need to tear down the bridge and VLAN devices manually.</para>
<literallayout class="monospaced">vconfig rem vlanNNN
ifconfig br_NNN down
brctl delbr br_NNN</literallayout>
<para>Also, if users need to access the instances in their project across a VPN, a
special VPN instance (code-named cloudpipe) needs to be created, as described in the
Cloudpipe section below. The cloudpipe image is basically just a Linux instance with
openvpn installed. It needs a simple script to grab user data from the metadata
server, base64-decode it into a zip file, and run the autorun.sh script from inside the
zip. The autorun script should configure and run openvpn using the data from
Compute. </para>
<para>For certificate management, it is also useful to have a cron script that
periodically downloads the metadata and copies the new Certificate Revocation List
(CRL). This keeps revoked users from connecting and disconnects any users who
are connected with revoked certificates when their connection is re-negotiated
(every hour). Set the --use_project_ca flag in nova.conf for cloudpipe to work
securely, so that each project has its own Certificate Authority (CA).</para></section>
<section xml:id="cloudpipe-per-project-vpns">
<title>Cloudpipe: Per-Project VPNs</title>
<para> Cloudpipe is a method for connecting end users to their project instances in VLAN
networking mode. </para>
<para> The support code for cloudpipe implements admin commands (via nova-manage) to
automatically create a VM for a project that allows users to VPN into the private
network of their project. Access to this VPN is provided through a public port on
the network host for the project. This allows users to have free access to the
virtual machines in their project without exposing those machines to the public
internet. </para>
<para> The cloudpipe image is basically just a Linux instance with openvpn installed. It
needs a simple script to grab user data from the metadata server, base64-decode it into
a zip file, and run the autorun.sh script from inside the zip. The autorun script
configures and runs openvpn using the data from nova. </para>
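<para>As a rough sketch of what that boot-time script might look like when called from
/etc/rc.local (the file names and paths here are illustrative assumptions, not the
actual cloudpipe code):</para>
<programlisting>
#!/bin/sh
# Fetch the base64-encoded payload that nova placed in the instance user data.
wget -q -O /tmp/payload.b64 http://169.254.169.254/latest/user-data
# Decode it into a zip file and unpack it.
base64 -d /tmp/payload.b64 &gt; /tmp/payload.zip
unzip -o /tmp/payload.zip -d /tmp/payload
# Run the autorun script, which configures and starts openvpn.
cd /tmp/payload &amp;&amp; sh autorun.sh
</programlisting>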
<para> It is also useful to have a cron script that will periodically re-download the
metadata and copy the new CRL. This will keep revoked users from connecting and will
disconnect any users that are connected with revoked certificates when their
connection is renegotiated (every hour). </para>
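<para>A minimal sketch of such a cron job follows. It assumes the refresh logic lives in a
hypothetical /usr/local/bin/update-crl.sh script that re-fetches the payload (as in the
boot script above) and copies the CRL into the openvpn configuration directory; the
exact file names depend on how you build your image:</para>
<literallayout class="monospaced"># /etc/cron.d/cloudpipe-crl (illustrative)
*/15 * * * * root /usr/local/bin/update-crl.sh</literallayout>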
<section xml:id="creating-a-cloudpipe-image">
<title>Creating a Cloudpipe Image</title>
<para>
Making a cloudpipe image is relatively easy.
</para>
<itemizedlist><listitem><para>Install openvpn on a base Ubuntu image. </para></listitem>
<listitem><para>Set up a server.conf.template in /etc/openvpn/.</para></listitem>
<listitem><para>Add an up.sh script in /etc/openvpn/. </para></listitem>
<listitem><para>Add a down.sh script in /etc/openvpn/. </para></listitem>
<listitem><para>Download and run the payload on boot from /etc/rc.local.</para></listitem>
<listitem><para>Set up /etc/network/interfaces. </para></listitem>
<listitem><para>Register the image and set the image ID in your flagfile: </para>
<literallayout class="monospaced">
--vpn_image_id=ami-xxxxxxxx
</literallayout>
</listitem>
<listitem><para>Set a few other flags to make VPNs work properly: </para>
<literallayout class="monospaced">
--use_project_ca
--cnt_vpn_clients=5
</literallayout>
</listitem>
</itemizedlist>
<para> When you use nova-manage to launch a cloudpipe for a user, it goes through
the following process: </para>
<orderedlist>
<listitem>
<para> creates a keypair called &lt;project_id&gt;-vpn and saves it in the
keys directory </para>
</listitem>
<listitem>
<para> creates a security group &lt;project_id&gt;-vpn and opens up 1194 and
icmp </para>
</listitem>
<listitem>
<para> creates a cert and private key for the vpn instance and saves it in
the CA/projects/&lt;project_id&gt;/ directory </para>
</listitem>
<listitem>
<para> zips up the info and puts it b64 encoded as user data </para>
</listitem>
<listitem>
<para> launches an m1.tiny instance with the above settings using the
flag-specified vpn image </para>
</listitem>
</orderedlist>
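<para>For example, to launch the cloudpipe instance for a project (the same command is used
again later in this chapter when restarting a VPN):</para>
<literallayout class="monospaced">nova-manage vpn run &lt;project_id&gt;</literallayout>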
</section>
<section xml:id="vpn-access">
<title>VPN Access</title>
<para> In VLAN networking mode, the second IP in each private network is reserved
for the cloudpipe instance. This gives a consistent IP to the instance so that
nova-network can create forwarding rules for access from the outside world. The
network for each project is given a specific high-numbered port on the public IP
of the network host. This port is automatically forwarded to 1194 on the VPN
instance. </para>
<para> If specific high-numbered ports do not work for your users, you can always
allocate and associate a public IP to the instance, and then change the
vpn_public_ip and vpn_public_port in the database. (This will be turned into a
nova-manage command or a flag soon.) </para>
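<para>A hypothetical sketch of that database change follows. It assumes the values live in
the networks table and that the column names match the ones mentioned above; verify the
table and column names against your schema before running anything like this:</para>
<literallayout class="monospaced">update networks set vpn_public_ip = '68.99.26.170', vpn_public_port = 1194
where project_id = '&lt;project_id&gt;';</literallayout>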
</section>
<section xml:id="certificates-and-revocation">
<title>Certificates and Revocation</title>
<para>If the use_project_ca flag is set (required for cloudpipe to work
securely), then each project has its own CA. This CA is used to sign the
certificate for the VPN, and is also passed to the user for bundling images.
When a certificate is revoked using nova-manage, a new Certificate Revocation
List (CRL) is generated. As long as cloudpipe has an updated CRL, it will block
revoked users from connecting to the VPN. </para>
<para> The userdata for cloudpipe isn't currently updated when certs are revoked, so
it is necessary to restart the cloudpipe instance if a user's credentials are
revoked. </para>
</section>
<section xml:id="restarting-and-logging-into-cloudpipe-vpn">
<title>Restarting and Logging into the Cloudpipe VPN</title>
<para>You can reboot a cloudpipe VPN through the API if something goes wrong (using
"nova reboot" for example), but if you generate a new CRL, you will have to
terminate it and start it again using nova-manage vpn run. The cloudpipe
instance always gets the first IP in the subnet, and it can take up to 10 minutes
for the IP to be recovered. If you try to start the new VPN instance too soon,
the instance will fail to start because of a "NoMoreAddresses" error. If you
can't wait 10 minutes, you can manually update the IP with something like the
following (use the right IP for the project): </para>
<literallayout class="monospaced">
nova delete &lt;instance_id&gt;
mysql nova -e "update fixed_ips set allocated=0, leased=0, instance_id=NULL where fixed_ip='10.0.0.2'"
</literallayout>
<para>You will also need to kill the dnsmasq process running for that network (make sure
you use the right pid file):</para>
<literallayout class="monospaced">sudo kill `cat /var/lib/nova/br100.pid`</literallayout>
<para>Now you should be able to re-run the vpn:</para>
<literallayout class="monospaced">nova-manage vpn run &lt;project_id&gt;</literallayout>
<para>The keypair that was used to launch the cloudpipe instance should be in the
keys/&lt;project_id&gt; folder. You can use this key to log into the cloudpipe
instance for debugging purposes.</para>
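<para>For example, assuming the key was saved with the default naming used by nova-manage
(the exact file name may differ in your deployment), you could log in with something
like:</para>
<literallayout class="monospaced">ssh -i keys/&lt;project_id&gt;/&lt;project_id&gt;-vpn.pem &lt;user&gt;@&lt;vpn_public_ip&gt;</literallayout>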
</section>
</section></section>
<section xml:id="enabling-ping-and-ssh-on-vms">
<title>Enabling Ping and SSH on VMs</title>
<para>Be sure you enable access to your VMs by using the secgroup-add-rule command. Below,
you will find the commands to allow ping and ssh to your VMs: </para>
<para><literallayout>nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
nova secgroup-add-rule default tcp 22 22 0.0.0.0/0 </literallayout>If
you still cannot ping or SSH your instances after issuing the nova
secgroup-add-rule commands, look at the number of dnsmasq processes that are
running. If you have a running instance, check to see that TWO dnsmasq processes
are running. If not, perform the following: <code>killall dnsmasq; service
nova-network restart</code></para></section>
<section xml:id="associating-public-ip"><title>Associating a Public IP Address</title>
<para>OpenStack Compute uses NAT for public IPs. If you plan to use public IP
addresses for your virtual instances, you must configure --public_interface=vlan100
in the nova.conf file so that Nova knows where to bind public IP addresses. Restart
nova-network if you change nova.conf while the service is running. Also, ensure you
have opened port 22 for the nova network.</para>
<para>You must add the IP address or block of public IP addresses to the floating IP
list using the <code>nova-manage floating create</code> command. When you start a
new virtual instance, associate one of the public addresses to the new instance
using the <code>nova add-floating-ip</code> command (or euca-associate-address).</para>
<para>These are the basic overall steps and checkpoints. </para>
<para>First, set up the public address.</para>
<literallayout class="monospaced">nova-manage floating create 68.99.26.170/31
nova floating-ip-create 68.99.26.170
nova add-floating-ip 1 68.99.26.170</literallayout>
<para>Make sure the security groups are open.</para>
<literallayout class="monospaced">root@my-hostname:~# nova secgroup-list-rules default</literallayout>
<programlisting>
+-------------+-----------+---------+-----------+--------------+
| IP Protocol | From Port | To Port | IP Range | Source Group |
+-------------+-----------+---------+-----------+--------------+
| icmp | -1 | -1 | 0.0.0.0/0 | |
| tcp | 22 | 22 | 0.0.0.0/0 | |
+-------------+-----------+---------+-----------+--------------+
</programlisting>
<para>Ensure the NAT rules have been added to iptables.</para>
<literallayout class="monospaced">
iptables -L -nv
</literallayout>
<programlisting>
-A nova-network-OUTPUT -d 68.99.26.170/32 -j DNAT --to-destination 10.0.0.3
</programlisting>
<literallayout class="monospaced">
iptables -L -nv -t nat
</literallayout>
<programlisting>
-A nova-network-PREROUTING -d 68.99.26.170/32 -j DNAT --to-destination 10.0.0.3
-A nova-network-floating-snat -s 10.0.0.3/32 -j SNAT --to-source 68.99.26.170
</programlisting>
<para>Check that the public address, in this example "68.99.26.170", has been
added to your public interface. You should see the address in the listing when you
enter "ip addr" at the command prompt.</para>
<programlisting>
2: eth0: &lt;BROADCAST,MULTICAST,UP,LOWER_UP&gt; mtu 1500 qdisc mq state UP qlen 1000
link/ether xx:xx:xx:17:4b:c2 brd ff:ff:ff:ff:ff:ff
inet 13.22.194.80/24 brd 13.22.194.255 scope global eth0
inet 68.99.26.170/32 scope global eth0
inet6 fe80::82b:2bf:fe1:4b2/64 scope link
valid_lft forever preferred_lft forever
</programlisting>
<para>Note that you cannot SSH to an instance with a public IP from within the same
server, because the routing configuration won't allow it. </para>
</section>
<section xml:id="allocating-associating-ip-addresses"><title>Allocating and Associating IP Addresses with Instances</title><para>You can use Euca2ools commands to manage floating IP addresses used with Flat DHCP or VLAN
networking. </para>
<para>To assign a reserved IP address to your project, removing it from the pool of
available floating IP addresses, use <code>nova floating-ip-create </code>. It'll
return an IP address, assign it to the project you own, and remove it from the pool
of available floating IP addresses. </para>
<para>To associate the floating IP to your instance, use <code>nova add-floating-ip
$server-name/id $floating-ip</code>.</para>
<para>When you want to return the floating IP to the pool, first use
euca-disassociate-address [floating_ip] to disassociate the IP address from your
instance, then use euca-release-address [floating_ip] to return the IP to the
pool so that someone else can use it.</para>
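<para>For example, using the address allocated earlier in this chapter:</para>
<literallayout class="monospaced">euca-disassociate-address 68.99.26.170
euca-release-address 68.99.26.170</literallayout>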
<para>There are nova-manage commands that also help you manage the floating IPs.</para>
<itemizedlist>
<listitem>
<para>
nova-manage floating list - This command lists the floating IP addresses in the
pool.
</para>
</listitem>
<listitem>
<para>
nova-manage floating create [cidr] - This command creates specific
floating IPs for either a single address or a subnet.
</para>
</listitem>
<listitem>
<para>
nova-manage floating delete [cidr] - This command removes floating IP
addresses using the same parameters as the create command.
</para>
</listitem>
</itemizedlist>
</section>
<section xml:id="removing-network-from-project">
<title>Removing a Network from a Project</title><para>You cannot remove a network that has already been associated with a project simply by deleting it. Instead, disassociate the project from the network with the scrub command, passing the project name as the final parameter:
</para><literallayout class="monospaced">nova-manage project scrub projectname</literallayout></section>
<section xml:id="existing-ha-networking-options"><title>Existing High Availability Options for Networking</title>
<para>from <link xlink:href="http://unchainyourbrain.com/openstack/13-networking-in-nova">Vish Ishaya</link></para>
<para>As you can see from the Flat DHCP diagram titled "Flat DHCP network, multiple interfaces, multiple servers," traffic from the VM to the public internet has to go through the host running nova-network. DHCP is handled by nova-network as well, listening on the gateway address of the fixed_range network. The compute hosts can optionally have their own public IPs, or they can use the network host as their gateway. This mode is pretty simple and it works in the majority of situations, but it has one major drawback: the network host is a single point of failure! If the network host goes down for any reason, it is impossible to communicate with the VMs. Here are some options for avoiding the single point of failure.</para>
<simplesect><title>Option 1: Failover</title>
<para>The folks at NTT labs came up with a ha-linux configuration that allows for a 4 second failover to a hot backup of the network host. Details on their approach can be found in the following post to the openstack mailing list: <link xlink:href="https://lists.launchpad.net/openstack/msg02099.html">https://lists.launchpad.net/openstack/msg02099.html</link></para>
<para>This solution is definitely an option, although it requires a second host that essentially does nothing unless there is a failure. Also, four seconds can be too long for some real-time applications.</para>
</simplesect>
<simplesect> <title>Option 2: Multi-nic</title>
<para>Recently, nova gained support for multi-nic. This allows us to bridge a given VM into multiple networks. This gives us some more options for high availability. It is possible to set up two networks on separate vlans (or even separate ethernet devices on the host) and give the VMs a NIC and an IP on each network. Each of these networks could have its own network host acting as the gateway.</para>
<para>In this case, the VM has two possible routes out. If one of them fails, it has the option of using the other one. The disadvantage of this approach is that it offloads management of failure scenarios to the guest. The guest needs to be aware of multiple networks and have a strategy for switching between them. It also doesn't help with floating IPs. One would have to set up a floating IP associated with each of the IPs on the private networks to achieve some type of redundancy.</para></simplesect>
<simplesect><title>Option 3: HW Gateway</title>
<para>It is possible to tell dnsmasq to use an external gateway instead of acting as the gateway for the VMs. You can pass dhcp-option=3,&lt;ip of gateway&gt; to dnsmasq to make the VMs use an external gateway. This will require some manual setup. The metadata IP forwarding rules will need to be set on the hardware gateway instead of the nova-network host. You will have to make sure to set up routes properly so that the subnet that you use for VMs is routable.</para>
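<para>As a minimal sketch, assuming your nova release supports the --dnsmasq_config_file
flag (verify this for your version) and that the upstream gateway is 10.1.171.1, you
could point nova at an extra dnsmasq configuration file like this:</para>
<literallayout class="monospaced"># in nova.conf
--dnsmasq_config_file=/etc/nova/dnsmasq-nova.conf

# in /etc/nova/dnsmasq-nova.conf
dhcp-option=3,10.1.171.1</literallayout>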
<para>This offloads HA to standard switching hardware and it has some strong benefits. Unfortunately, nova-network is still responsible for floating IP natting and dhcp, so some failover strategy needs to be employed for those options.</para></simplesect>
<simplesect><title>New HA Option</title>
<para>Essentially, what the current options are lacking is the ability to specify different gateways for different VMs. An agnostic approach to a better model might propose allowing multiple gateways per VM. Unfortunately, this rapidly leads to some serious networking complications, especially when it comes to the NAT for floating IPs. With a few assumptions about the problem domain, we can come up with a much simpler solution that is just as effective.</para>
<para>The key realization is that there is no need to isolate the failure domain away from the host where the VM is running. If the host itself goes down, losing networking to the VM is a non-issue. The VM is already gone. So the simple solution involves allowing each compute host to do all of the networking jobs for its own VMs. This means each compute host does NAT, dhcp, and acts as a gateway for all of its own VMs. While we still have a single point of failure in this scenario, it is the same point of failure that applies to all virtualized systems, and so it is about the best we can do.</para>
<para>So the next question is: how do we modify the Nova code to provide this option? One possibility would be to add code to the compute worker to do complicated networking setup. This turns out to be a bit painful, and leads to a lot of duplicated code between compute and network. Another option is to modify nova-network slightly so that it can run successfully on every compute node and change the message passing logic to pass the network commands to a local network worker.</para>
<para>Surprisingly, the code is relatively simple. A couple fields needed to be added to the database in order to support these new types of "multihost" networks without breaking the functionality of the existing system. All-in-all it is a pretty small set of changes for a lot of added functionality: about 250 lines, including quite a bit of cleanup. You can see the branch here: <link xlink:href="https://code.launchpad.net/%7Evishvananda/nova/ha-net/+merge/67078">https://code.launchpad.net/~vishvananda/nova/ha-net/+merge/67078</link></para>
<para>The drawbacks here are relatively minor. It requires adding an IP on the VM network to each host in the system, and it implies a little more overhead on the compute hosts. It is also possible to combine this with option 3 above to remove the need for your compute hosts to gateway. In that hybrid version they would no longer gateway for the VMs and their responsibilities would only be dhcp and nat.</para>
<para>The resulting layout for the new HA networking option looks like the following diagram:</para>
<para><figure>
<title>High Availability Networking Option</title>
<mediaobject>
<imageobject>
<imagedata scale="50" fileref="figures/ha-net.jpg"/>
</imageobject>
</mediaobject>
</figure></para>
<para>In contrast with the earlier diagram, all the hosts in the system are running both nova-compute and nova-network. Each host does DHCP and NAT for the public traffic of the VMs running on that particular host. In this model every compute host requires a connection to the public internet, and each host is also assigned an address from the VM network where it listens for DHCP traffic.</para>
<para>The configuration requirements are as follows: the --multi_host flag must be set when the network is created, and nova-network must run on every compute host. Networks created as multi-host send all network-related commands to the host that each VM is running on.
</para></simplesect>
<simplesect><title>Future of Networking</title>
<para>With the existing multi-nic code and the HA networking code, we have a pretty robust system with a lot of deployment options. This should give deployers enough room to solve today's networking problems. Ultimately, we want to provide users the ability to create arbitrary networks and have real and virtual network appliances managed automatically. The efforts underway in the Quantum and Melange projects will help us reach this lofty goal, but with the current additions we should have enough flexibility to get us by until those projects can take over.</para></simplesect></section>
</chapter>