Networking

By understanding the available networking configuration options, you can design the best configuration for your OpenStack Compute instances.
Networking Options

This section offers a brief overview of each concept in networking for Compute. In Compute, users organize their cloud resources in projects. A Compute project consists of a number of VM instances created by a user. For each VM instance, Compute assigns a private IP address. (Currently, Nova only supports Linux bridge networking, which allows the virtual interfaces to connect to the outside network through the physical interface.) The Network Controller provides virtual networks that enable compute servers to interact with each other and with the public network.

Currently, Nova supports three kinds of networks, implemented as three "Network Manager" types: Flat Network Manager, Flat DHCP Network Manager, and VLAN Network Manager. The three kinds of networks can co-exist in a cloud system. However, because you cannot yet select the type of network for a given project, you cannot configure more than one type of network in a given Compute installation.

Nova has a concept of Fixed IPs and Floating IPs. Fixed IPs are assigned to an instance on creation and stay the same until the instance is explicitly terminated. Floating IPs are IP addresses that can be dynamically associated with an instance. A floating IP address can be disassociated and associated with another instance at any time, and a user can reserve a floating IP for their project.

In Flat Mode, a network administrator specifies a subnet. The IP addresses for VM instances are grabbed from the subnet and injected into the image on launch. Each instance receives a fixed IP address from the pool of available addresses. A network administrator must configure the Linux networking bridge (named br100) both on the network controller hosting the network and on the cloud controllers hosting the instances. All instances of the system are attached to the same bridge, which is configured manually by the network administrator. The configuration injection currently only works on Linux-style systems that keep networking configuration in /etc/network/interfaces.

In Flat DHCP Mode, you start a DHCP server to hand out IP addresses to VM instances from the specified subnet, in addition to manually configuring the networking bridge. IP addresses for VM instances are grabbed from a subnet specified by the network administrator. As in Flat Mode, all instances are attached to a single bridge on the compute node, and a DHCP server configures the instances. In this mode, Compute does a bit more configuration: it attempts to bridge into an ethernet device (eth0 by default) and runs dnsmasq as a DHCP server listening on this bridge. Instances receive their fixed IPs by doing a DHCPDISCOVER.

In both flat modes, the network nodes do not act as a default gateway. Instances are given public IP addresses. Compute nodes have iptables/ebtables entries created per project and instance to protect against IP/MAC address spoofing and ARP poisoning.

VLAN Network Mode is the default mode for OpenStack Compute. In this mode, Compute creates a VLAN and bridge for each project. For a multiple-machine installation, VLAN Network Mode requires a switch that supports VLAN tagging (IEEE 802.1Q). The project gets a range of private IPs that are only accessible from inside the VLAN. In order for a user to access the instances in their project, a special VPN instance (code named cloudpipe) needs to be created. Compute generates a certificate and key for the user to access the VPN and starts the VPN automatically. This provides a private network segment for each project's instances that can be accessed via a dedicated VPN connection from the Internet. In this mode, each project gets its own VLAN, Linux networking bridge, and subnet. The subnets are specified by the network administrator and are assigned dynamically to a project when required. A DHCP server is started for each VLAN to hand out IP addresses to VM instances from the subnet assigned to the project. All instances belonging to one project are bridged into the same VLAN for that project. OpenStack Compute creates the Linux networking bridges and VLANs when required.
Configuring Networking on the Compute Node

To configure the Compute node's networking for the VM images, the overall steps are:

Set the --network_manager flag in nova.conf.
Use the nova-manage network create label CIDR n n command to create the subnet that the VMs reside on.
Integrate the bridge with your network.

By default, Compute uses VLAN Network Mode. You choose the networking mode for your virtual instances in the nova.conf file. Here are the three possible options:

--network_manager=nova.network.manager.FlatManager
Simple, non-VLAN networking.

--network_manager=nova.network.manager.FlatDHCPManager
Flat networking with DHCP. You must set a bridge using the --flat_network_bridge flag.

--network_manager=nova.network.manager.VlanManager
VLAN networking with DHCP. This is the default if no network manager is defined in nova.conf.

When you issue the nova-manage network create command, it uses the settings from the nova.conf flag file. Use the following command to create the subnet that your VMs will run on:

nova-manage network create private 192.168.0.0/24 1 256
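As a quick check that the subnet was created (an optional verification step; the exact output columns vary by release), you can list the networks that Compute now knows about:

# List the networks recorded in the nova database
nova-manage --flagfile=/etc/nova/nova.conf network list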
Configuring Flat Networking

FlatNetworking uses ethernet adapters configured as bridges to allow network traffic to transit between all the various nodes. This setup can be done with a single adapter on the physical host, or with multiple adapters. This option does not require a switch that does VLAN tagging, as VLAN networking does, and is a common setup for development installations or proofs of concept. When you choose Flat networking, Nova does not manage networking at all. Instead, IP addresses are injected into the instance via the file system (or passed in via a guest agent). Metadata forwarding must be configured manually on the gateway if it is required within your network.

To configure flat networking, ensure that your nova.conf file contains the line:

--network_manager=nova.network.manager.FlatManager

Compute defaults to a bridge device named br100, which is stored in the Nova database, so you can change the name of the bridge device by modifying the entry in the database. Consult the diagrams for additional configuration options.

In any setup with flat networking (either Flat or FlatDHCP), the host running nova-network is responsible for forwarding traffic from the private network defined by the --fixed_range directive and the --flat_network_bridge setting in nova.conf; outbound traffic flow is described in more detail in "Outbound Traffic Flow with Any Flat Networking" below.

Set the compute node's external IP address to be on the bridge and add eth0 to that bridge. To do this, edit your network interfaces configuration to look like the following example:

# The loopback network interface
auto lo
iface lo inet loopback

# Networking for OpenStack Compute
auto br100
iface br100 inet dhcp
    bridge_ports eth0
    bridge_stp off
    bridge_maxwait 0
    bridge_fd 0

Next, restart networking to apply the changes:

sudo /etc/init.d/networking restart

For an all-in-one development setup, this diagram represents the network setup.
Flat network, all-in-one server installation
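After restarting networking, you can verify that the bridge came up and that eth0 is attached to it. This is a quick check rather than a required step; brctl is provided by the bridge-utils package:

# Confirm br100 exists, has eth0 attached, and carries the host's IP address
brctl show br100
ip addr show br100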
For multiple compute nodes with a single network adapter, which you can use for smoke testing or a proof of concept, this diagram represents the network setup.
Flat network, single interface, multiple servers
For multiple compute nodes with multiple network adapters, this diagram represents the network setup. You may want to use this setup for separate admin and data traffic.
Flat network, multiple interfaces, multiple servers
Configuring Flat DHCP Networking

With Flat DHCP, the host running nova-network acts as the gateway to the virtual nodes. You can run one nova-network per cluster. Set the --network_host flag in the nova.conf stored on the nova-compute node to tell it which host nova-network is running on so it can communicate with nova-network. You must also set --flat_network_bridge to the name of the bridge (no default is set for it). The nova-network service tracks leases and releases in the database so it knows if a VM instance has stopped properly configuring via DHCP. Lastly, it sets up iptables rules to allow the VMs to communicate with the outside world and to contact a special metadata server to retrieve information from the cloud.

Compute hosts in the FlatDHCP model are responsible for bringing up a matching bridge and bridging the VM tap devices into the same ethernet device that the network host is on. The compute hosts do not need an IP address on the VM network, because the bridging puts the VMs and the network host on the same logical network. When a VM boots, the VM sends out DHCP packets, and the DHCP server on the network host responds with its assigned IP address. Visually, the setup looks like the diagram below:
Flat DHCP network, multiple interfaces, multiple servers
FlatDHCP doesn't create VLANs; it creates a bridge. This bridge works just fine on a single host, but when there are multiple hosts, traffic needs a way to get out of the bridge onto a physical interface. Be careful when setting --flat_interface: if you specify an interface that already has an IP address, that interface will break, and if it is the interface you are connecting through with SSH, you cannot fix it unless you have IPMI or console access.

In FlatDHCP mode, the setting for --network_size should be the number of IPs in the entire fixed range. If you are using a /12 in CIDR notation, then this number would be 2^20, or 1,048,576 IP addresses. That said, it will take a very long time to create your initial network, as an entry for each IP is created in the database.

If you have an unused interface on your hosts that has connectivity but no IP address, you can simply tell FlatDHCP to bridge into the interface by specifying --flat_interface=<interface> in your flag file. The network host will automatically add the gateway IP to this bridge. You can also add the interface to br100 manually and not set flat_interface. If this is the case for you, edit your nova.conf file to contain the following lines:

--dhcpbridge_flagfile=/etc/nova/nova.conf
--dhcpbridge=/usr/bin/nova-dhcpbridge
--network_manager=nova.network.manager.FlatDHCPManager
--flat_network_dhcp_start=10.0.0.2
--flat_network_bridge=br100
--flat_interface=eth2
--flat_injected=False
--public_interface=eth0

Integrate your network interfaces to match this configuration.
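If you take the manual approach mentioned above, adding the physical interface to br100 yourself rather than relying on --flat_interface, the commands look roughly like the following. This is a sketch assuming bridge-utils is installed and eth2 is the spare, IP-less interface from the example configuration:

# Create the bridge if it does not already exist and attach the spare interface
brctl addbr br100
brctl addif br100 eth2
ip link set eth2 up
ip link set br100 up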
Outbound Traffic Flow with Any Flat Networking

In any setup with FlatNetworking, the host with nova-network on it is responsible for forwarding traffic from the private network configured with the --fixed_range directive in nova.conf. This host needs to have br100 configured and talking to any other nodes that are hosting VMs. With either of the Flat Networking options, the default gateway for the virtual machines is set to the host that is running nova-network. When a virtual machine sends traffic out to the public networks, it sends it first to its default gateway, which is where nova-network is configured.
Single adaptor hosts, first route
Next, the host on which nova-network is configured acts as a router and forwards the traffic out to the Internet.
Single adaptor hosts, second route
If you're using a single interface, then that interface (often eth0) needs to be set into promiscuous mode for the forwarding to happen correctly. This does not appear to be needed if you're running with physical hosts that have and use two interfaces.
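To put the interface into promiscuous mode, you can use a command like the following (shown for eth0; adjust the interface name to your setup, and add it to your interface configuration if you need it to persist across reboots):

ip link set eth0 promisc on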
Configuring VLAN Networking

In some networking environments, you may have a large IP space that is cut up into smaller subnets. The smaller subnets are then trunked together at the switch level (dividing layer 3 by layer 2) so that all machines in the larger IP space can communicate. The purpose of this is generally to control the size of broadcast domains. Using projects as a way to logically separate each VLAN, we can set up our cloud in this environment. Note that you must have IP forwarding enabled for this network mode to work.

Obtain the parameters for each network. You may need to ask a network administrator for this information, including netmask, broadcast, gateway, ethernet device, and VLAN ID. You need to have networking hardware that supports VLAN tagging.

Note that currently eth0 is hardcoded as the vlan_interface in the default flags. If you need to attach your bridges to a device other than eth0, add the following flag to /etc/nova/nova.conf:

--vlan_interface=eth1

In VLAN mode, the setting for --network_size is the number of IPs per project, as opposed to FlatDHCP mode, where --network_size indicates the number of IPs in the entire fixed range. For VLAN, the settings in nova.conf that affect networking also include --fixed_range, where the space is divided up into subnets of --network_size.

VLAN is the default networking mode for Compute, so if you have no --network_manager entry in your nova.conf file, you are set up for VLAN. To set your nova.conf file to VLAN explicitly, use this flag in /etc/nova/nova.conf:

--network_manager=nova.network.manager.VlanManager

For the purposes of this example walk-through, we will use the following settings. These are intentionally complex in an attempt to cover most situations:

VLANs: 171, 172, 173 and 174
IP Blocks: 10.1.171.0/24, 10.1.172.0/24, 10.1.173.0/24 and 10.1.174.0/24
Each VLAN maps to its corresponding /24 (171 = 10.1.171.0/24, etc.)
Each VLAN will get its own bridge device, in the format br_$VLANID
Each /24 has an upstream default gateway on .1
The first 6 IPs in each /24 are reserved

First, create the networks that Compute can pull from using nova-manage commands:

nova-manage --flagfile=/etc/nova/nova.conf network create private 10.1.171.0/24 1 256
nova-manage --flagfile=/etc/nova/nova.conf network create private 10.1.172.0/24 1 256
nova-manage --flagfile=/etc/nova/nova.conf network create private 10.1.173.0/24 1 256
nova-manage --flagfile=/etc/nova/nova.conf network create private 10.1.174.0/24 1 256

Log in to the nova database to determine the network ID assigned to each VLAN:

select id,cidr from networks;

Update the DB to match your network settings. The following script will generate SQL based on the predetermined settings for this example. You will need to modify this database update to fit your environment:

#!/bin/bash
# Generate vlan.sql for a given VLAN and network id
if [ -z "$1" ]; then
    echo "You need to specify the vlan to modify"
    exit 1
fi
if [ -z "$2" ]; then
    echo "You need to specify a network id number (check the DB for the network you want to update)"
    exit 1
fi
VLAN=$1
ID=$2
cat > vlan.sql << __EOF_
update networks set vlan = '$VLAN' where id = $ID;
update networks set bridge = 'br_$VLAN' where id = $ID;
update networks set gateway = '10.1.$VLAN.7' where id = $ID;
update networks set dhcp_start = '10.1.$VLAN.8' where id = $ID;
update fixed_ips set reserved = 1 where address in ('10.1.$VLAN.1','10.1.$VLAN.2','10.1.$VLAN.3','10.1.$VLAN.4','10.1.$VLAN.5','10.1.$VLAN.6','10.1.$VLAN.7');
__EOF_

After verifying that the above SQL will work for your environment, run it against the nova database, once for every VLAN you have in the environment.
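As a usage sketch, assuming you saved the script above as update_vlan.sh and your nova database runs on MySQL (the script name, credentials, and database backend are assumptions to adapt to your environment), updating VLAN 171, whose network row has id 1, might look like this:

bash update_vlan.sh 171 1            # writes vlan.sql for VLAN 171 / network id 1
cat vlan.sql                         # review the generated SQL
mysql -u root -p nova < vlan.sql     # apply it to the nova database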
Next, create a project manager for the Compute project:

nova-manage --flagfile=/etc/nova/nova.conf user admin $username

Then create a project and assign that user as the admin user:

nova-manage --flagfile=/etc/nova/nova.conf project create $projectname $username

Finally, get the credentials for the user just created, which also assigns one of the networks to this project:

nova-manage --flagfile=/etc/nova/nova.conf project zipfile $projectname $username

When you start nova-network, the bridge devices and associated VLAN tags will be created. When you create a new VM, you must determine (either manually or programmatically) which VLAN it should be a part of, and start the VM in the corresponding project.

In certain cases, the network manager may not properly tear down bridges and VLANs when it is stopped. If you attempt to restart the network manager and it does not start, check the logs for errors indicating that a bridge device already exists. If this is the case, you will likely need to tear down the bridge and VLAN devices manually:

vconfig rem vlanNNN
ifconfig br_NNN down
brctl delbr br_NNN

Also, if users need to access the instances in their project across a VPN, a special VPN instance (code named cloudpipe) needs to be created, as described below. The image is basically just a Linux instance with openvpn installed. It needs a simple script to grab user data from the metadata server, base64 decode it into a zip file, and run the autorun.sh script from inside the zip. The autorun script should configure and run openvpn using the data from Compute. For certificate management, it is also useful to have a cron script that periodically downloads the metadata and copies the new Certificate Revocation List (CRL). This keeps revoked users from connecting and disconnects any users that are connected with revoked certificates when their connection is re-negotiated (every hour). You must set the --use_project_ca flag in nova.conf for cloudpipe to work securely, so that each project has its own Certificate Authority (CA).
Cloudpipe — Per Project VPNs

Cloudpipe is a method for connecting end users to their project instances in VLAN networking mode. The support code for cloudpipe implements admin commands (via an extension) to automatically create a VM for a project that allows users to VPN into the private network of their project. Access to this VPN is provided through a public port on the network host for the project. This allows users to have free access to the virtual machines in their project without exposing those machines to the public internet.

The cloudpipe image is basically just a Linux instance with openvpn installed. It needs a simple script to grab user data from the metadata server, base64 decode it into a zip file, and run the autorun.sh script from inside the zip. The autorun script will configure and run openvpn using the data from nova. It is also useful to have a cron script that periodically re-downloads the metadata and copies the new CRL. This keeps revoked users from connecting and will disconnect any users that are connected with revoked certificates when their connection is renegotiated (every hour).
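The boot-time payload script described above can be quite small. The following is a minimal sketch, assuming the EC2-style metadata service is reachable from inside the instance at 169.254.169.254, that the user data is a base64-encoded zip containing autorun.sh, and that unzip is installed; the file paths are illustrative:

# /etc/rc.local fragment (sketch): fetch, decode, and run the cloudpipe payload
wget -q -O /tmp/payload.b64 http://169.254.169.254/latest/user-data
base64 -d /tmp/payload.b64 > /tmp/payload.zip
mkdir -p /tmp/payload && cd /tmp/payload
unzip -o /tmp/payload.zip
chmod +x autorun.sh && ./autorun.sh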
Creating a Cloudpipe Image

Making a cloudpipe image is relatively easy:

install openvpn on a base ubuntu image
set up a server.conf.template in /etc/openvpn/
set up up.sh in /etc/openvpn/
set up down.sh in /etc/openvpn/
download and run the payload on boot from /etc/rc.local
set up /etc/network/interfaces
upload the image and set the image id in your config file: vpn_image_id=[uuid from glance]

You should also set a few other config options to make VPNs work properly:

use_project_ca=True
cnt_vpn_clients=5
force_dhcp_release=True

When you use the cloudpipe extension to launch a VPN for a user, it goes through the following process:

creates a keypair called <project_id>-vpn and saves it in the keys directory
creates a security group <project_id>-vpn and opens up port 1194 and icmp
creates a cert and private key for the vpn instance and saves it in the CA/projects/<project_id>/ directory
zips up the info and puts it b64 encoded as user data
launches a [vpn_instance_type] instance with the above settings using the flag-specified vpn image
VPN Access

In VLAN networking mode, the second IP in each private network is reserved for the cloudpipe instance. This gives a consistent IP to the instance so that nova-network can create forwarding rules for access from the outside world. The network for each project is given a specific high-numbered port on the public IP of the network host. This port is automatically forwarded to 1194 on the VPN instance.

If specific high-numbered ports do not work for your users, you can always allocate and associate a public IP to the instance, and then change the vpn_public_ip and vpn_public_port in the database. Rather than using the db directly, you can also use:

nova-manage vpn change [new_ip] [new_port]
Certificates and Revocation

If the use_project_ca config option is set (required for cloudpipe to work securely), then each project has its own CA. This CA is used to sign the certificate for the VPN and is also passed to the user for bundling images. When a certificate is revoked using nova-manage, a new Certificate Revocation List (CRL) is generated. As long as cloudpipe has an updated CRL, it will block revoked users from connecting to the VPN. The user data for cloudpipe isn't currently updated when certs are revoked, so it is necessary to restart the cloudpipe instance if a user's credentials are revoked.
Restarting and Logging into the Cloudpipe VPN

You can reboot a cloudpipe VPN through the API if something goes wrong (using "nova reboot", for example), but if you generate a new CRL, you will have to terminate it and start it again using the cloudpipe extension. The cloudpipe instance always gets the first IP in the subnet, and if force_dhcp_release is not set, it takes some time for the IP to be recovered. If you try to start the new VPN instance too soon, the instance will fail to start because of a "NoMoreAddresses" error. It is therefore recommended to use force_dhcp_release.

The keypair that was used to launch the cloudpipe instance should be in the keys/<project_id> folder. You can use this key to log into the cloudpipe instance for debugging purposes. If you are running multiple copies of nova-api, this key will be on whichever server serviced the original request. To make debugging easier, you may want to put a common administrative key into the cloudpipe image that you create.
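For example, logging in with the project keypair might look like the following. The key file name and login user are assumptions based on the <project_id>-vpn keypair created by the extension; substitute your project ID, key path, and the cloudpipe instance's address:

ssh -i keys/<project_id>/<project_id>-vpn.pem root@<cloudpipe_instance_ip>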
Enabling Ping and SSH on VMs

Be sure you enable access to your VMs by using the secgroup-add-rule command. Below, you will find the commands to allow ping and ssh to your VMs:

nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
nova secgroup-add-rule default tcp 22 22 0.0.0.0/0

If you still cannot ping or SSH to your instances after issuing the nova secgroup-add-rule commands, look at the number of dnsmasq processes that are running. If you have a running instance, check that TWO dnsmasq processes are running. If not, perform the following:

killall dnsmasq; service nova-network restart
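A quick way to count the dnsmasq processes (a convenience check, not a required step):

# Expect a count of two when an instance is running
pgrep -c dnsmasq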
Associating a Public IP Address

OpenStack Compute uses NAT for public IPs. If you plan to use public IP addresses for your virtual instances, you must configure --public_interface=vlan100 in the nova.conf file so that Nova knows where to bind public IP addresses. Restart nova-network if you change nova.conf while the service is running. Also, ensure you have opened port 22 for the nova network.

You must add the IP address or block of public IP addresses to the floating IP list using the nova-manage floating create command. When you start a new virtual instance, associate one of the public addresses with the new instance using the nova add-floating-ip command (or euca-associate-address if you use Euca2ools).

These are the basic overall steps and checkpoints. First, set up the public address:

nova-manage floating create 68.99.26.170/31
nova floating-ip-create 68.99.26.170
nova add-floating-ip 1 68.99.26.170

Make sure the security groups are open:

root@my-hostname:~# nova secgroup-list-rules default
+-------------+-----------+---------+-----------+--------------+
| IP Protocol | From Port | To Port | IP Range  | Source Group |
+-------------+-----------+---------+-----------+--------------+
| icmp        | -1        | -1      | 0.0.0.0/0 |              |
| tcp         | 22        | 22      | 0.0.0.0/0 |              |
+-------------+-----------+---------+-----------+--------------+

Ensure the NAT rules have been added to iptables:

iptables -L -nv
-A nova-network-OUTPUT -d 68.99.26.170/32 -j DNAT --to-destination 10.0.0.3

iptables -L -nv -t nat
-A nova-network-PREROUTING -d 68.99.26.170/32 -j DNAT --to-destination 10.0.0.3
-A nova-network-floating-snat -s 10.0.0.3/32 -j SNAT --to-source 68.99.26.170

Check that the public address, in this example 68.99.26.170, has been added to your public interface. You should see the address in the listing when you enter "ip addr" at the command prompt.

2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether xx:xx:xx:17:4b:c2 brd ff:ff:ff:ff:ff:ff
    inet 13.22.194.80/24 brd 13.22.194.255 scope global eth0
    inet 68.99.26.170/32 scope global eth0
    inet6 fe80::82b:2bf:fe1:4b2/64 scope link
       valid_lft forever preferred_lft forever

Note that you cannot SSH to an instance with a public IP from within the same server, as the routing configuration won't allow it.
Allocating and Associating IP Addresses with Instances

You can use Euca2ools and nova commands to manage floating IP addresses used with Flat DHCP or VLAN networking. To assign a reserved IP address to your project, removing it from the pool of available floating IP addresses, use nova floating-ip-create. It returns an IP address, assigns it to the project you own, and removes it from the pool of available floating IP addresses. To associate the floating IP with your instance, use nova add-floating-ip $server-name/id $floating-ip.

When you want to return the floating IP to the pool, first use euca-disassociate-address [floating_ip] to disassociate the IP address from your instance, then use euca-deallocate-address [floating_ip] to return the IP to the pool of IPs for someone else to grab.

There are nova-manage commands that also help you manage the floating IPs:

nova-manage floating list - lists the floating IP addresses in the pool.
nova-manage floating create [cidr] - creates specific floating IPs for either a single address or a subnet.
nova-manage floating delete [cidr] - removes floating IP addresses using the same parameters as the create command.
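Put together, a typical lifecycle for a floating IP might look like the following sketch. The server name and address are placeholders, and the release steps use the nova client equivalents of the euca-* commands above; command names assume the python-novaclient of this era:

# Allocate a floating IP to the project
nova floating-ip-create
# Associate it with a running instance (by name or ID)
nova add-floating-ip myserver 68.99.26.170
# Later, disassociate it and return it to the pool
nova remove-floating-ip myserver 68.99.26.170
nova floating-ip-delete 68.99.26.170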
Removing a Network from a Project

You will find that you cannot remove a network that has already been associated with a project simply by deleting it. Instead, disassociate the project from the network with a scrub command, using the project name as the final parameter:

nova-manage project scrub projectname
Existing High Availability Options for Networking (from Vish Ishaya)

As you can see from the Flat DHCP diagram titled "Flat DHCP network, multiple interfaces, multiple servers," traffic from the VM to the public internet has to go through the host running nova-network. DHCP is handled by nova-network as well, listening on the gateway address of the fixed_range network. The compute hosts can optionally have their own public IPs, or they can use the network host as their gateway. This mode is pretty simple and it works in the majority of situations, but it has one major drawback: the network host is a single point of failure! If the network host goes down for any reason, it is impossible to communicate with the VMs. Here are some options for avoiding the single point of failure.

Option 1: Failover

The folks at NTT labs came up with an ha-linux configuration that allows for a 4 second failover to a hot backup of the network host. Details on their approach can be found in the following post to the openstack mailing list: https://lists.launchpad.net/openstack/msg02099.html This solution is definitely an option, although it requires a second host that essentially does nothing unless there is a failure. Also, four seconds can be too long for some real-time applications.

Option 2: Multi-nic

Recently, nova gained support for multi-nic. This allows us to bridge a given VM into multiple networks, which gives us some more options for high availability. It is possible to set up two networks on separate VLANs (or even separate ethernet devices on the host) and give the VMs a NIC and an IP on each network. Each of these networks could have its own network host acting as the gateway. In this case, the VM has two possible routes out; if one of them fails, it has the option of using the other one. The disadvantage of this approach is that it offloads management of failure scenarios to the guest. The guest needs to be aware of multiple networks and have a strategy for switching between them. It also doesn't help with floating IPs: one would have to set up a floating IP associated with each of the IPs on the private networks to achieve some type of redundancy.

Option 3: HW Gateway

It is possible to tell dnsmasq to use an external gateway instead of acting as the gateway for the VMs. You can pass dhcp-option=3,<ip of gateway> to make the VMs use an external gateway. This will require some manual setup: the metadata IP forwarding rules will need to be set on the hardware gateway instead of the nova-network host, and you will have to make sure to set up routes properly so that the subnet that you use for VMs is routable. This offloads HA to standard switching hardware and it has some strong benefits. Unfortunately, nova-network is still responsible for floating IP NAT and DHCP, so some failover strategy needs to be employed for those options.

New HA Option

Essentially, what the current options are lacking is the ability to specify different gateways for different VMs. An agnostic approach to a better model might propose allowing multiple gateways per VM. Unfortunately, this rapidly leads to some serious networking complications, especially when it comes to the NAT for floating IPs. With a few assumptions about the problem domain, we can come up with a much simpler solution that is just as effective. The key realization is that there is no need to isolate the failure domain away from the host where the VM is running. If the host itself goes down, losing networking to the VM is a non-issue: the VM is already gone.

So the simple solution involves allowing each compute host to do all of the networking jobs for its own VMs. This means each compute host does NAT and DHCP, and acts as a gateway for all of its own VMs. While we still have a single point of failure in this scenario, it is the same point of failure that applies to all virtualized systems, and so it is about the best we can do.

So the next question is: how do we modify the Nova code to provide this option? One possibility would be to add code to the compute worker to do complicated networking setup. This turns out to be a bit painful, and leads to a lot of duplicated code between compute and network. Another option is to modify nova-network slightly so that it can run successfully on every compute node, and to change the message passing logic to pass the network commands to a local network worker. Surprisingly, the code is relatively simple. A couple of fields needed to be added to the database in order to support these new types of "multihost" networks without breaking the functionality of the existing system. All in all, it is a pretty small set of changes for a lot of added functionality: about 250 lines, including quite a bit of cleanup. You can see the branch here: https://code.launchpad.net/~vishvananda/nova/ha-net/+merge/67078

The drawbacks here are relatively minor. It requires adding an IP on the VM network to each host in the system, and it implies a little more overhead on the compute hosts. It is also possible to combine this with option 3 above to remove the need for your compute hosts to gateway. In that hybrid version, they would no longer gateway for the VMs, and their responsibilities would only be DHCP and NAT. The resulting layout for the new HA networking option looks like the following diagram:
High Availability Networking Option
In contrast with the earlier diagram, all the hosts in the system are running both nova-compute and nova-network. Each host does DHCP and NAT for the public traffic of the VMs running on that particular host. In this model, every compute host requires a connection to the public internet, and each host is also assigned an address from the VM network where it listens for DHCP traffic. The requirements for configuring this are: the --multi_host flag must be in place for network creation, and nova-network must be run on every compute host. Networks created as multi-host will send all network-related commands to the host that the VM is on, as illustrated by the sketch below.
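A minimal sketch of creating such a network follows. This assumes the nova-manage network create command in your release accepts a --multi_host option; the label and address range are placeholders to adapt to your environment:

# Create a multi-host network so each compute host handles DHCP and NAT for its own VMs
nova-manage --flagfile=/etc/nova/nova.conf network create private 10.0.0.0/24 1 256 --multi_host=T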
Future of Networking

With the existing multi-nic code and the HA networking code, we have a pretty robust system with a lot of deployment options. This should give deployers enough room to solve today's networking problems. Ultimately, we want to provide users the ability to create arbitrary networks and have real and virtual network appliances managed automatically. The efforts underway in the Quantum and Melange projects will help us reach this lofty goal, but with the current additions we should have enough flexibility to get by until those projects can take over.