Install Networking services
When you install a Networking node, you must configure it for API endpoints, RabbitMQ, keystone_authtoken, and the database. Use debconf to configure these values.

When you install a Networking package, debconf prompts you to choose configuration options, including which plug-in to use. The plug-in that you choose sets the core_plugin option value in the /etc/neutron/neutron.conf file. When you install the neutron-common package, all plug-ins are installed by default.

This table lists the values for the core_plugin option. These values depend on your response to the debconf prompt.
Plug-ins and the core_plugin option
Plug-in core_plugin value in neutron.conf
BigSwitch neutron.plugins.bigswitch.plugin.NeutronRestProxyV2
Brocade neutron.plugins.brocade.NeutronPlugin.BrocadePluginV2
Cisco neutron.plugins.cisco.network_plugin.PluginV2
Hyper-V neutron.plugins.hyperv.hyperv_neutron_plugin.HyperVNeutronPlugin
LinuxBridge neutron.plugins.linuxbridge.lb_neutron_plugin.LinuxBridgePluginV2
Mellanox neutron.plugins.mlnx.mlnx_plugin.MellanoxEswitchPlugin
MetaPlugin neutron.plugins.metaplugin.meta_neutron_plugin.MetaPluginV2
Midonet neutron.plugins.midonet.plugin.MidonetPluginV2
ml2 neutron.plugins.ml2.plugin.Ml2Plugin
Nec neutron.plugins.nec.nec_plugin.NECPluginV2
OpenVSwitch neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2
PLUMgrid neutron.plugins.plumgrid.plumgrid_nos_plugin.plumgrid_plugin.NeutronPluginPLUMgridV2
RYU neutron.plugins.ryu.ryu_neutron_plugin.RyuNeutronPluginV2
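For example, if you choose the Open vSwitch plug-in at the debconf prompt, the resulting entry in the /etc/neutron/neutron.conf file looks like this (the core_plugin option lives in the [DEFAULT] section):
[DEFAULT]
core_plugin = neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2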
Depending on the value of core_plugin, the start-up scripts start the daemons by using the corresponding plug-in configuration file directly. For example, if you selected the Open vSwitch plug-in, neutron-server automatically launches with --config-file /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini. The neutron-common package also prompts you for the default network configuration.

Before you configure individual nodes for Networking, you must create the required OpenStack components: user, service, database, and one or more endpoints. After you complete these steps, follow the instructions in this guide to set up OpenStack Networking nodes.

Use the password that you set previously to log in as root and create a neutron database:
# mysql -u root -p
mysql> CREATE DATABASE neutron;
mysql> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \
IDENTIFIED BY 'NEUTRON_DBPASS';
mysql> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \
IDENTIFIED BY 'NEUTRON_DBPASS';

Create the required user, service, and endpoint so that Networking can interface with the Identity Service.

To list the tenant IDs:
# keystone tenant-list

To list role IDs:
# keystone role-list

Create a neutron user:
# keystone user-create --name=neutron --pass=NEUTRON_PASS --email=neutron@example.com

Add the admin role to the neutron user:
# keystone user-role-add --user=neutron --tenant=service --role=admin

Create the neutron service:
# keystone service-create --name=neutron --type=network \
--description="OpenStack Networking Service"

Create a Networking endpoint. Use the id property of the service returned in the previous step to create the endpoint:
# keystone endpoint-create \
--service-id the_service_id_above \
--publicurl http://controller:9696 \
--adminurl http://controller:9696 \
--internalurl http://controller:9696
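To verify these Identity Service objects and the database credentials, you can optionally run a quick check; this assumes the neutron database user can reach MySQL on the controller host:
# keystone service-list
# keystone endpoint-list
# mysql -u neutron -p -h controller -e "USE neutron;"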
Install Networking services on a dedicated network node
Before you start, set up a machine as a dedicated network node. Dedicated network nodes have a MGMT_INTERFACE NIC, a DATA_INTERFACE NIC, and an EXTERNAL_INTERFACE NIC. The management network handles communication among nodes. The data network handles communication coming to and from VMs. The external NIC connects the network node, and optionally the controller node, to the outside world so that your VMs can reach external networks. All NICs must have static IPs. However, the data and external NICs have a special setup. For details about Networking plug-ins, see the plug-in installation and configuration sections in this guide.

By default, the system-config-firewall automated firewall configuration tool is in place on RHEL. This graphical interface (and a curses-style interface with -tui on the end of the name) enables you to configure iptables as a basic firewall. You should disable it when you work with Networking unless you are familiar with the underlying network technologies because, by default, it blocks various types of network traffic that are important to Networking. To disable it, simply launch the program and clear the Enabled check box. After you successfully set up OpenStack Networking, you can re-enable and configure the tool. However, during Networking setup, disable the tool to make it easier to debug network issues.

Install the OpenStack Networking service on the network node:
# apt-get install neutron-server neutron-dhcp-agent neutron-plugin-openvswitch-agent neutron-l3-agent
# yum install openstack-neutron
# zypper install openstack-neutron openstack-neutron-l3-agent openstack-neutron-dhcp-agent

Respond to prompts for database management, [keystone_authtoken] settings, RabbitMQ credentials, and API endpoint registration.

Configure basic Networking-related services to start at boot time:
# for s in neutron-{dhcp,l3}-agent; do chkconfig $s on; done
# for s in openstack-neutron-{dhcp,l3}-agent; do chkconfig $s on; done

Enable packet forwarding and disable packet destination filtering so that the network node can coordinate traffic for the VMs. Edit the /etc/sysctl.conf file, as follows:
net.ipv4.ip_forward=1
net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0

To activate changes in the /etc/sysctl.conf file, run the following command:
# sysctl -p

With system network-related configurations, you might need to restart the network service to activate the configuration:
# service networking restart
# service network restart

Configure the core networking components. Edit the /etc/neutron/neutron.conf file and add these lines to the [keystone_authtoken] section:
[keystone_authtoken]
auth_host = controller
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = neutron
admin_password = NEUTRON_PASS

Configure Networking to connect to the database. Edit the [database] section in the same file, as follows:
[database]
connection = mysql://neutron:NEUTRON_DBPASS@controller/neutron

Edit the /etc/neutron/api-paste.ini file and add these lines to the [filter:authtoken] section:
[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
auth_host=controller
auth_uri=http://controller:5000
admin_user=neutron
admin_tenant_name=service
admin_password=NEUTRON_PASS

You must configure auth_uri to point to the public identity endpoint. Otherwise, clients might not be able to authenticate against an admin endpoint.
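On Ubuntu and Debian, the debconf prompts typically populate the RabbitMQ settings for you. On other distributions, you might need to set them manually in the [DEFAULT] section of the /etc/neutron/neutron.conf file. A minimal sketch, assuming RabbitMQ runs on the controller node and RABBIT_PASS is your broker password (adjust for your message broker):
rpc_backend = neutron.openstack.common.rpc.impl_kombu
rabbit_host = controller
rabbit_password = RABBIT_PASS

Configure your network plug-in. For instructions, see the plug-in installation and configuration sections in this guide. Then, return here.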
Install and configure a networking plug-in. OpenStack Networking uses this plug-in to perform software-defined networking. For instructions, see the plug-in installation and configuration sections in this guide. Then, return here.

Now that you've installed and configured a plug-in (you did do that, right?), it is time to configure the remaining parts of Networking.

To perform DHCP on the software-defined networks, Networking supports several different plug-ins. However, in general, you use the Dnsmasq plug-in. Edit the /etc/neutron/dhcp_agent.ini file:
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq

Restart Networking:
# service neutron-dhcp-agent restart
# service neutron-l3-agent restart
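To confirm that the agents are running, you can optionally list them with the neutron CLI (this requires admin credentials); the DHCP agent and L3 agent should be listed as alive:
# neutron agent-list

After you configure the compute and controller nodes, configure the base networks.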
Install and configure the Networking plug-ins
Install the Open vSwitch (OVS) plug-in
Install the Open vSwitch plug-in and its dependencies:
# apt-get install neutron-plugin-openvswitch-agent openvswitch-switch
# yum install openstack-neutron-openvswitch
# zypper install openstack-neutron-openvswitch-agent

Start Open vSwitch:
# service openvswitch start
# service openvswitch-switch start

And configure it to start when the system boots:
# chkconfig openvswitch on
# chkconfig openvswitch-switch on

No matter which networking technology you use, you must add the br-int integration bridge, which connects to the VMs, and the br-ex external bridge, which connects to the outside world:
# ovs-vsctl add-br br-int
# ovs-vsctl add-br br-ex

Add a port (connection) from the EXTERNAL_INTERFACE interface to the br-ex bridge:
# ovs-vsctl add-port br-ex EXTERNAL_INTERFACE

Configure the EXTERNAL_INTERFACE without an IP address and in promiscuous mode. Additionally, you must set the newly created br-ex interface to have the IP address that formerly belonged to EXTERNAL_INTERFACE. Edit the /etc/sysconfig/network-scripts/ifcfg-EXTERNAL_INTERFACE file:
DEVICE_INFO_HERE
ONBOOT=yes
BOOTPROTO=none
PROMISC=yes

Create and edit the /etc/sysconfig/network-scripts/ifcfg-br-ex file:
DEVICE=br-ex
TYPE=Bridge
ONBOOT=yes
BOOTPROTO=none
IPADDR=EXTERNAL_INTERFACE_IP
NETMASK=EXTERNAL_INTERFACE_NETMASK
GATEWAY=EXTERNAL_INTERFACE_GATEWAY

You must set some common configuration options no matter which networking technology you choose to use with Open vSwitch. Configure the L3 and DHCP agents to use OVS and namespaces. Edit the /etc/neutron/l3_agent.ini and /etc/neutron/dhcp_agent.ini files, respectively:
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
use_namespaces = True

You must enable veth support if you use certain kernels. Some kernels, such as recent versions of RHEL (not RHOS) and CentOS, only partially support namespaces. Edit the previous files, as follows:
ovs_use_veth = True

Similarly, you must also tell Neutron core to use OVS. Edit the /etc/neutron/neutron.conf file:
core_plugin = neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2

Choose a networking technology to create the virtual networks. Neutron supports GRE tunneling, VLANs, and VXLANs. This guide shows how to configure GRE tunneling and VLANs.

GRE tunneling is simpler to set up because it does not require any special configuration from any physical network hardware. However, its protocol makes it difficult to filter traffic on the physical network. Additionally, this configuration does not use namespaces: you can have only one router for each network node. However, you can enable namespacing, and potentially veth, as described in the section detailing how to use VLANs with OVS.

On Ubuntu 12.04 LTS with GRE, you must install the openvswitch-datapath-dkms package and restart the service so that OVS 1.10 or higher is used to enable the GRE flow. Make sure you are running the OVS 1.10 kernel module in addition to the OVS 1.10 userspace; both the kernel module and userspace are required for VXLAN support. Otherwise, the /var/log/openvswitch/ovs-vswitchd.log log file shows the error "Stderr: 'ovs-ofctl: -1: negative values not supported for in_port\n'". If you see this error, make sure that modinfo openvswitch shows the right version. Also, check the output from dmesg for the version of the OVS module being loaded.

On the other hand, VLAN tagging modifies the ethernet header of packets, so you can filter packets on the physical network through normal methods.
However, not all NICs handle the increased packet size of VLAN-tagged packets well, and you might need to complete additional configuration on physical network hardware to ensure that your Neutron VLANs do not interfere with any other VLANs on your network and that any physical network hardware between nodes does not strip VLAN tags.

While the examples in this guide enable network namespaces by default, you can disable them if issues occur or your kernel does not support them. Edit the /etc/neutron/l3_agent.ini and /etc/neutron/dhcp_agent.ini files, respectively:
use_namespaces = False

Edit the /etc/neutron/neutron.conf file to disable overlapping IP addresses:
allow_overlapping_ips = False

Note that when network namespaces are disabled, you can have only one router for each network node and overlapping IP addresses are not supported. You must complete additional steps after you create the initial Neutron virtual networks and router.

Configure a firewall plug-in. If you do not wish to enforce firewall rules, called security groups by OpenStack, you can use neutron.agent.firewall.NoopFirewallDriver. Otherwise, you can choose one of the Networking firewall plug-ins. The most common choice is the Hybrid OVS-IPTables driver, but you can also use the Firewall-as-a-Service driver. Edit the /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini file:
[securitygroup]
# Firewall driver for realizing neutron security group function.
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

You must use at least the No-Op firewall. Otherwise, Horizon and other OpenStack services cannot get and set required VM boot options.

Restart the OVS plug-in and make sure it starts on boot:
# service neutron-openvswitch-agent restart
# chkconfig neutron-openvswitch-agent on
# service openstack-neutron-openvswitch-agent restart
# chkconfig openstack-neutron-openvswitch-agent on
# service neutron-plugin-openvswitch-agent restart
# chkconfig neutron-plugin-openvswitch-agent on
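To confirm that the bridges are in place, you can optionally list them; br-int and br-ex should appear (plus br-tun later, if you configure GRE tunneling):
# ovs-vsctl list-br

Now, return to the general OVS instructions.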
Configure the Neutron OVS plug-in for GRE tunneling
Configure the OVS plug-in to use GRE tunneling, the br-int integration bridge, the br-tun tunneling bridge, and a local IP for the tunnel on DATA_INTERFACE. Edit the /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini file:
[ovs]
tenant_network_type = gre
tunnel_id_ranges = 1:1000
enable_tunneling = True
integration_bridge = br-int
tunnel_bridge = br-tun
local_ip = DATA_INTERFACE_IP

Return to the general OVS instructions.
Configure the Neutron OVS plug-in for VLANs
Configure OVS to use VLANs. Edit the /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini file:
[ovs]
tenant_network_type = vlan
network_vlan_ranges = physnet1:1:4094
bridge_mappings = physnet1:br-DATA_INTERFACE

Create the bridge for DATA_INTERFACE and add DATA_INTERFACE to it:
# ovs-vsctl add-br br-DATA_INTERFACE
# ovs-vsctl add-port br-DATA_INTERFACE DATA_INTERFACE

Transfer the IP address for DATA_INTERFACE to the bridge in the same way that you transferred the EXTERNAL_INTERFACE IP address to br-ex. However, do not turn on promiscuous mode.
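As a sketch of that transfer, assuming RHEL-style network scripts (the file name and values are placeholders to adapt for your environment): remove the IP address from the /etc/sysconfig/network-scripts/ifcfg-DATA_INTERFACE file, then create the /etc/sysconfig/network-scripts/ifcfg-br-DATA_INTERFACE file:
DEVICE=br-DATA_INTERFACE
TYPE=Bridge
ONBOOT=yes
BOOTPROTO=none
IPADDR=DATA_INTERFACE_IP
NETMASK=DATA_INTERFACE_NETMASK

Return to the general OVS instructions.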
Create the base Neutron networks
In these sections, replace SPECIAL_OPTIONS with any options specific to your Networking plug-in choices. See the plug-in-specific network options sections to check whether your plug-in requires any special options.

Create the ext-net external network. This network represents a slice of the outside world. VMs are not directly linked to this network; instead, they connect to internal networks. Outgoing traffic is routed by Neutron to the external network. Additionally, floating IP addresses from the subnet for ext-net might be assigned to VMs so that the external network can contact them. Neutron routes the traffic appropriately.
# neutron net-create ext-net -- --router:external=True SPECIAL_OPTIONS

Create the associated subnet with the same gateway and CIDR as EXTERNAL_INTERFACE. It does not have DHCP because it represents a slice of the external world:
# neutron subnet-create ext-net \
--allocation-pool start=FLOATING_IP_START,end=FLOATING_IP_END \
--gateway=EXTERNAL_INTERFACE_GATEWAY --enable_dhcp=False \
EXTERNAL_INTERFACE_CIDR

Create one or more initial tenants. The following steps use the DEMO_TENANT tenant.

Create the router attached to the external network. This router routes traffic to the internal subnets as appropriate. You can create it under a given tenant: append the --tenant-id option with a value of DEMO_TENANT_ID to the command.
# neutron router-create ext-to-int

Connect the router to ext-net by setting the gateway for the router as ext-net:
# neutron router-gateway-set EXT_TO_INT_ID EXT_NET_ID

Create an internal network for DEMO_TENANT (and an associated subnet over an arbitrary internal IP range, such as 10.5.5.0/24), and connect it to the router by setting it as a port:
# neutron net-create --tenant-id DEMO_TENANT_ID demo-net SPECIAL_OPTIONS
# neutron subnet-create --tenant-id DEMO_TENANT_ID demo-net 10.5.5.0/24 --gateway 10.5.5.1
# neutron router-interface-add EXT_TO_INT_ID DEMO_NET_SUBNET_ID
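To verify the result, you can optionally list the networks and subnets, and confirm that the router has both a gateway port and an internal interface:
# neutron net-list
# neutron subnet-list
# neutron router-port-list EXT_TO_INT_ID

Check the special options page for your plug-in for remaining steps. Now, return to the general OVS instructions.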
Plug-in-specific Neutron network options
Open vSwitch Network configuration options
GRE tunneling network options
While this guide currently enables network namespaces by default, you can disable them if you have issues or your kernel does not support them. If you disabled namespaces, you must perform some additional configuration for the L3 agent.

After you create all the networks, tell the L3 agent what the external network ID is, as well as the ID of the router associated with this machine (because you are not using namespaces, there can be only one router for each machine). To do this, edit the /etc/neutron/l3_agent.ini file:
gateway_external_network_id = EXT_NET_ID
router_id = EXT_TO_INT_ID

Then, restart the L3 agent:
# service neutron-l3-agent restart

When you create networks, use these options:
--provider:network_type gre --provider:segmentation_id SEG_ID

SEG_ID should be 2 for the external network, and any unique number inside the tunnel range specified earlier for any other network. These options are not needed beyond the first network: Neutron automatically increments the segmentation ID and copies the network type option for any additional networks.
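Putting it together, a sketch of the ext-net creation from the base networks section with these GRE options applied:
# neutron net-create ext-net -- --router:external=True \
--provider:network_type gre --provider:segmentation_id 2

Now, return to the general OVS instructions.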
VLAN network options
When you create networks, use these options:
--provider:network_type vlan --provider:physical_network physnet1 --provider:segmentation_id SEG_ID

SEG_ID should be 2 for the external network, and any unique number inside the VLAN range specified above for any other network. These options are not needed beyond the first network: Neutron automatically increments the segmentation ID and copies the network type and physical network options for any additional networks. They are needed only if you wish to modify those values in any way.

Some NICs have Linux drivers that do not handle VLANs properly. See the ovs-vlan-bug-workaround and ovs-vlan-test man pages for more information. Additionally, you might try turning off rx-vlan-offload and tx-vlan-offload by using ethtool on the DATA_INTERFACE. Another potential caveat to VLAN functionality is that VLAN tags add an additional 4 bytes to the packet size. If your NICs cannot handle large packets, make sure to set the MTU to a value that is 4 bytes less than the normal value on the DATA_INTERFACE. If you run OpenStack inside a virtualized environment (for testing purposes), switching to the virtio NIC type (or a similar technology if you are not using KVM/QEMU to run your host VMs) might solve the issue.
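A sketch of those two workarounds, assuming a standard 1500-byte MTU on the physical network and that your ethtool version accepts the rxvlan and txvlan feature names (adjust the values for your environment):
# ethtool -K DATA_INTERFACE rxvlan off txvlan off
# ip link set DATA_INTERFACE mtu 1496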
Install networking support on a dedicated compute node
This section details setup for any node that runs the nova-compute component but does not run the full network stack.

By default, the system-config-firewall automated firewall configuration tool is in place on RHEL. This graphical interface (and a curses-style interface with -tui on the end of the name) enables you to configure iptables as a basic firewall. You should disable it when you work with Neutron unless you are familiar with the underlying network technologies because, by default, it blocks various types of network traffic that are important to Neutron. To disable it, simply launch the program and clear the Enabled check box. After you successfully set up OpenStack with Neutron, you can re-enable and configure the tool. However, during Neutron setup, disable the tool to make it easier to debug network issues.

Disable packet destination filtering (route verification) to let the networking services route traffic to the VMs. Edit the /etc/sysctl.conf file and then restart networking:
net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0

Install and configure your networking plug-in components. To install and configure the network plug-in that you chose when you set up your network node, see the compute node plug-in sections in this guide.

Configure the core components of Neutron. Edit the /etc/neutron/neutron.conf file:
auth_host = controller
admin_tenant_name = service
admin_user = neutron
admin_password = NEUTRON_PASS
auth_url = http://controller:35357/v2.0
auth_strategy = keystone
rpc_backend = YOUR_RPC_BACKEND
PUT_YOUR_RPC_BACKEND_SETTINGS_HERE_TOO

Edit the database URL under the [database] section in the same file to tell Neutron how to connect to the database:
[database]
connection = mysql://neutron:NEUTRON_DBPASS@controller/neutron

Edit the /etc/neutron/api-paste.ini file and add these lines to the [filter:authtoken] section:
[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
auth_host=controller
admin_user=neutron
admin_tenant_name=service
admin_password=NEUTRON_PASS
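Before you move on, you can optionally confirm that reverse-path filtering is disabled on this node; both commands should report a value of 0:
# sysctl net.ipv4.conf.all.rp_filter
# sysctl net.ipv4.conf.default.rp_filter

You must configure the networking plug-in.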
Install and configure Neutron plug-ins on a dedicated compute node
Install the Open vSwitch (OVS) plug-in on a dedicated compute node
Install the Open vSwitch plug-in and its dependencies:
# apt-get install neutron-plugin-openvswitch-agent openvswitch-switch openvswitch-datapath-dkms
# yum install openstack-neutron-openvswitch
# zypper install openstack-neutron-openvswitch-agent

Start Open vSwitch and configure it to start when the system boots:
# service openvswitch start
# chkconfig openvswitch on
# service openvswitch-switch start
# chkconfig openvswitch-switch on

You must set some common configuration options no matter which networking technology you choose to use with Open vSwitch. Add the br-int integration bridge, which connects to the VMs:
# ovs-vsctl add-br br-int

You must configure Networking core to use OVS. Edit the /etc/neutron/neutron.conf file:
core_plugin = neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2

Configure the networking type that you chose when you set up the network node: either GRE tunneling or VLANs.

You must also configure a firewall. Use the same firewall plug-in that you chose when you set up the network node. To do this, edit the /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini file and set the firewall_driver value under [securitygroup] to the same value used on the network node. For instance, if you chose to use the Hybrid OVS-IPTables plug-in, your configuration looks like this:
[securitygroup]
# Firewall driver for realizing neutron security group function.
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

You must use at least the No-Op firewall. Otherwise, Horizon and other OpenStack services cannot get and set required VM boot options.

After you complete the OVS configuration and the core Neutron configuration after this section, restart the Neutron Open vSwitch agent and set it to start at boot:
# service neutron-openvswitch-agent restart
# chkconfig neutron-openvswitch-agent on
# service openstack-neutron-openvswitch-agent restart
# chkconfig openstack-neutron-openvswitch-agent on
# service neutron-plugin-openvswitch-agent restart
# chkconfig neutron-plugin-openvswitch-agent on
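To confirm that the integration bridge exists, you can optionally test for it directly; ovs-vsctl br-exists exits with status 0 when the bridge is present:
# ovs-vsctl br-exists br-int && echo "br-int is present"

Now, return to the general OVS instructions.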
Configure the Neutron OVS plug-in for GRE tunneling on a dedicated compute node
Tell the OVS plug-in to use GRE tunneling with a br-int integration bridge, a br-tun tunneling bridge, and a local IP for the tunnel of DATA_INTERFACE's IP. Edit the /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini file:
[ovs]
tenant_network_type = gre
tunnel_id_ranges = 1:1000
enable_tunneling = True
integration_bridge = br-int
tunnel_bridge = br-tun
local_ip = DATA_INTERFACE_IP

Now, return to the general OVS instructions.
Configure the Neutron OVS plug-in for VLANs on a dedicated compute node
Tell OVS to use VLANs. Edit the /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini file:
[ovs]
tenant_network_type = vlan
network_vlan_ranges = physnet1:1:4094
bridge_mappings = physnet1:br-DATA_INTERFACE

Create the bridge for the DATA_INTERFACE and add DATA_INTERFACE to it, the same way you did on the network node:
# ovs-vsctl add-br br-DATA_INTERFACE
# ovs-vsctl add-port br-DATA_INTERFACE DATA_INTERFACE

Return to the general OVS instructions.
Install networking support on a dedicated controller node
This section is for a node that runs the control components of Neutron but does not run any of the components that provide the underlying functionality (such as the plug-in agent or the L3 agent). If you wish to have a combined controller/compute node, follow these instructions and then those for the compute node.

By default, the system-config-firewall automated firewall configuration tool is in place on RHEL. This graphical interface (and a curses-style interface with -tui on the end of the name) enables you to configure iptables as a basic firewall. You should disable it when you work with Neutron unless you are familiar with the underlying network technologies because, by default, it blocks various types of network traffic that are important to Neutron. To disable it, simply launch the program and clear the Enabled check box. After you successfully set up OpenStack with Neutron, you can re-enable and configure the tool. However, during Neutron setup, disable the tool to make it easier to debug network issues.

Install the main Neutron server, Neutron libraries for Python, and the Neutron command-line interface (CLI):
# yum install openstack-neutron python-neutron python-neutronclient
# zypper install openstack-neutron python-neutron python-neutronclient

Configure the core components of Neutron. Edit the /etc/neutron/neutron.conf file:
auth_host = controller
admin_tenant_name = service
admin_user = neutron
admin_password = NEUTRON_PASS
auth_url = http://controller:35357/v2.0
auth_strategy = keystone
rpc_backend = YOUR_RPC_BACKEND
PUT_YOUR_RPC_BACKEND_SETTINGS_HERE_TOO

Edit the database URL under the [database] section in the same file to tell Neutron how to connect to the database:
[database]
connection = mysql://neutron:NEUTRON_DBPASS@controller/neutron

Configure the Neutron copy of the api-paste.ini file at /etc/neutron/api-paste.ini:
[filter:authtoken]
EXISTING_STUFF_HERE
admin_tenant_name = service
admin_user = neutron
admin_password = NEUTRON_PASS

Configure the plug-in that you chose when you set up the network node. Follow the instructions and return here.

Tell Nova about Neutron. Specifically, you must tell Nova that Neutron handles networking and the firewall. Edit the /etc/nova/nova.conf file:
network_api_class=nova.network.neutronv2.api.API
neutron_url=http://controller:9696
neutron_auth_strategy=keystone
neutron_admin_tenant_name=service
neutron_admin_username=neutron
neutron_admin_password=NEUTRON_PASS
neutron_admin_auth_url=http://controller:35357/v2.0
firewall_driver=nova.virt.firewall.NoopFirewallDriver
security_group_api=neutron

Regardless of which firewall driver you chose when you configured the network and compute nodes, set this driver to the No-Op firewall here. The difference is that this is a Nova firewall, and because Neutron handles the firewall, you must tell Nova not to use one.

Start neutron-server and set it to start at boot:
# service neutron-server start
# chkconfig neutron-server on

Make sure that the plug-in restarted successfully. If you get errors about a missing plugin.ini file, make a symlink that points to /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini with the name /etc/neutron/plugin.ini.
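For example, a sketch of that symlink:
# ln -s /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini /etc/neutron/plugin.ini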
Install and configure the Neutron plug-ins on a dedicated controller node
Install the Open vSwitch (OVS) plug-in on a dedicated controller node
Install the Open vSwitch plug-in:
# yum install openstack-neutron-openvswitch
# zypper install openstack-neutron-openvswitch-agent

You must set some common configuration options no matter which networking technology you choose to use with Open vSwitch. You must configure Networking core to use OVS. Edit the /etc/neutron/neutron.conf file:
core_plugin = neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2

Configure the OVS plug-in for the networking type that you chose when you configured the network node: GRE tunneling or VLANs. The dedicated controller node does not need to run Open vSwitch or the Open vSwitch agent.
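After neutron-server starts, you can optionally confirm that it loaded the plug-in by listing the API extensions that the plug-in advertises:
# neutron ext-list

Now, return to the general OVS instructions.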
Configure the Neutron OVS plug-in for GRE tunneling on a dedicated controller node
Tell the OVS plug-in to use GRE tunneling. Edit the /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini file:
[ovs]
tenant_network_type = gre
tunnel_id_ranges = 1:1000
enable_tunneling = True

Return to the general OVS instructions.
Configure the Neutron OVS plug-in for VLANs on a dedicated controller node
Tell OVS to use VLANs. Edit the /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini file, as follows:
[ovs]
tenant_network_type = vlan
network_vlan_ranges = physnet1:1:4094

Return to the general OVS instructions.