README

# -- Background

The quantum openvswitch plugin allows you to manage connectivity
between VMs running on hypervisors that use openvswitch.

The quantum openvswitch plugin consists of three components:

1) The plugin itself: The plugin uses a database backend (mysql for
   now) to store configuration and mappings that are used by the
   agent.  The mysql server runs on a central server (often the same
   host as nova itself).

2) The quantum service, which hosts the plugin.  This can run on the
   same server as nova.

3) An agent which runs on each hypervisor (dom0 in the XenServer case,
   the compute node for kvm) and communicates with openvswitch.  The
   agent gathers the configuration and mappings from the mysql
   database running on the quantum service host.

The sections below describe how to configure and run the quantum
service with the openvswitch plugin.

# -- Nova configuration (controller node)

1) Make sure to set up nova using the quantum network manager in the
   nova.conf on the node that will be running nova-network.

--network_manager=nova.network.quantum.manager.QuantumManager

# -- Nova configuration (compute node(s))

1a) (If you're using xen) Configure the integration bridge and vif driver
# Note that the integration bridge could be different on each compute node so
# be careful to specify the right one in each nova.conf
--xenapi_ovs_integration_bridge=xapi1
--linuxnet_interface_driver=nova.network.linux_net.LinuxOVSInterfaceDriver

1b) (If you're using qemu/kvm) Configure the bridge, vif driver, and
    libvirt/vif type

--libvirt_ovs_integration_bridge=br-int
--libvirt_type=qemu
--libvirt_vif_type=ethernet
--libvirt_vif_driver=nova.virt.libvirt.vif.LibvirtOpenVswitchDriver
# Note: this last flag isn't strictly required yet, as DHCP isn't
# integrated with the QuantumManager.
--linuxnet_interface_driver=nova.network.linux_net.LinuxOVSInterfaceDriver

# -- Quantum configuration

Make the openvswitch plugin the current quantum plugin

- edit ../../plugins.ini and change the provider line to be:
provider = quantum.plugins.openvswitch.ovs_quantum_plugin.OVSQuantumPlugin
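
For reference, the resulting plugins.ini would look something like this
(assuming the file uses a single [PLUGIN] section, as in the sample
shipped with quantum):

[PLUGIN]
provider = quantum.plugins.openvswitch.ovs_quantum_plugin.OVSQuantumPlugin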

# -- Database config.

The Open vSwitch quantum service requires access to a mysql database in order
to store configuration and mappings that will be used by the agent.  Here is
how to set up the database on the host that will run the quantum service.

MySQL should be installed on the host, and all plugins and clients
must be configured with access to the database.
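
For example, on an Ubuntu/Debian host the mysql server and python
bindings can be installed with (the package names below are an
assumption; use your distribution's equivalents):

$ sudo apt-get install mysql-server python-mysqldb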

To prep mysql, run:

$ mysql -u root -p -e "create database ovs_quantum"

Make sure any xenserver running the ovs quantum agent will be able to
connect to the mysql database on the host running the quantum service:

# log in to mysql service
$ mysql -u root -p
# The Open vSwitch Quantum agent running on each compute node must be able to
# make a mysql connection back to the main database server.
mysql> GRANT USAGE ON *.* to root@'yourremotehost' IDENTIFIED BY 'newpassword';
# force update of authorization changes
mysql> FLUSH PRIVILEGES;
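
As a quick sanity check, you can verify that a compute node is able to
reach the database server (replace <quantum-service-host> with the
address of the database host and enter the password used in the GRANT
above):

$ mysql -h <quantum-service-host> -u root -p -e "select 1"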

# -- Plugin configuration

- Edit the configuration file (ovs_quantum_plugin.ini).  Make sure it
  matches your mysql configuration.  This file must be updated with
  the addresses and credentials to access the database.  This file
  will be included in the agent distribution tarball (see below) and
  the agent will use the credentials here to access the database.
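
As a rough sketch, the database settings in ovs_quantum_plugin.ini look
something like the following (this assumes the file uses a [DATABASE]
section with name/user/pass/host options; double-check the option names
against the sample file shipped with the plugin):

[DATABASE]
name = ovs_quantum
user = root
pass = <your-mysql-password>
host = <address-of-the-mysql-host>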

# -- XenServer Agent configuration

- Create the agent distribution tarball

$ make agent-dist

- Copy the resulting tarball to your xenserver(s) (copy to dom0, not
  the nova compute node)

- Unpack the tarball and run xenserver_install.sh.  This will install
  all of the necessary pieces into /etc/xapi.d/plugins.  It will also
  output the name of the integration bridge that you'll need for your nova
  configuration.  Make sure to specify this in your nova flagfile as
  --xenapi_ovs_integration_bridge.

  NOTE: Make sure the integration bridge that the script emits is the
  same as the one in your ovs_quantum_plugin.ini file.
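
  One quick way to check is to compare the bridges that openvswitch
  knows about with the configured value (this assumes the setting is
  named integration-bridge in the ini file; check your copy for the
  exact option name):

$ ovs-vsctl list-br
$ grep integration-bridge /etc/xapi.d/plugins/ovs_quantum_plugin.ini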

- Run the agent [on your hypervisor (dom0)]:

$ /etc/xapi.d/plugins/ovs_quantum_agent.py /etc/xapi.d/plugins/ovs_quantum_plugin.ini

# -- KVM Agent configuration

- Edit ovs_quantum_plugin.ini and make sure the integration bridge is set to
  br-int.
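
  If br-int does not already exist on the compute node, it can be
  created with a standard openvswitch command (the bridge name br-int
  is simply the default used throughout this README):

$ sudo ovs-vsctl add-br br-int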

- Copy ovs_quantum_agent.py and ovs_quantum_plugin.ini to the compute
  node and run:
$ python ovs_quantum_agent.py ovs_quantum_plugin.ini

# -- Getting quantum up and running

- Start quantum [on the quantum service host]:
~/src/quantum$ python bin/quantum etc/quantum.conf
- Use the quantum plugin framework cli to issue commands to the ovs
  plugin [on the quantum service host]:
~/src/quantum$ python bin/cli

This will show help text for all of the available commands.

An example session (to attach a vm interface with id 'foo') looks like
this:

$ export TENANT=t1
$ export VIF_UUID=foo # This should be the uuid of the virtual interface
$ python bin/cli create_net $TENANT network1
Created a new Virtual Network with ID:e754e7c0-a8eb-40e5-861a-b182d30c3441
$ export NETWORK=e754e7c0-a8eb-40e5-861a-b182d30c3441
$ python bin/cli create_port $TENANT $NETWORK
Created Virtual Port:5a1e121b-ccc8-471d-9445-24f15f9f854c on Virtual Network:e754e7c0-a8eb-40e5-861a-b182d30c3441
$ export PORT=5a1e121b-ccc8-471d-9445-24f15f9f854c
$ python bin/cli plug_iface $TENANT $NETWORK $PORT $VIF_UUID
Plugged interface "foo" to port:5a1e121b-ccc8-471d-9445-24f15f9f854c on network:e754e7c0-a8eb-40e5-861a-b182d30c3441

(.. repeat for additional port and interface combinations ..)

See the main quantum documentation for more details on the commands.

# -- Using the OVS plugin in multiple hosts

The integration bridge specified must have a port that is a VLAN trunk
connecting the bridge to the outside world.  The physical network
connecting all servers must also be configured to trunk VLANs.

If the NIC (e.g., eth0) connecting to the physical network is not
attached to a bridge, it can be added directly as a port on the
integration bridge.  For example:

ovs-vsctl add-port br-int eth0

However, if the NIC is already attached to a bridge (e.g., br0), then we must
create a "patch" port to link the integration bridge with that existing bridge.
For example:

ovs-vsctl add-port br0 patch-outside -- set Interface patch-outside type=patch options:peer=patch-inside
ovs-vsctl add-port br-int patch-inside -- set Interface patch-inside type=patch options:peer=patch-outside
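
To confirm that both patch ports were created as expected, you can list
the bridge layout; patch-outside should appear under br0 and
patch-inside under br-int:

ovs-vsctl show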