Install Networking services

When you install a Networking node, you must configure it for API endpoints, RabbitMQ, keystone_authtoken, and the database. Use debconf to configure these values.

When you install a Networking package, debconf prompts you to choose configuration options, including which plug-in to use. This parameter sets the core_plugin option value in the /etc/neutron/neutron.conf file. When you install the neutron-common package, all plug-ins are installed by default.

This table lists the values for the core_plugin option. These values depend on your response to the debconf prompt.
Plug-ins and the core_plugin option
Plug-in core_plugin value in neutron.conf
BigSwitch neutron.plugins.bigswitch.plugin.NeutronRestProxyV2
Brocade neutron.plugins.brocade.NeutronPlugin.BrocadePluginV2
Cisco neutron.plugins.cisco.network_plugin.PluginV2
Hyper-V neutron.plugins.hyperv.hyperv_neutron_plugin.HyperVNeutronPlugin
LinuxBridge neutron.plugins.linuxbridge.lb_neutron_plugin.LinuxBridgePluginV2
Mellanox neutron.plugins.mlnx.mlnx_plugin.MellanoxEswitchPlugin
MetaPlugin neutron.plugins.metaplugin.meta_neutron_plugin.MetaPluginV2
Midonet neutron.plugins.midonet.plugin.MidonetPluginV2
ml2 neutron.plugins.ml2.plugin.Ml2Plugin
Nec neutron.plugins.nec.nec_plugin.NECPluginV2
OpenVSwitch neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2
PLUMgrid neutron.plugins.plumgrid.plumgrid_nos_plugin.plumgrid_plugin.NeutronPluginPLUMgridV2
RYU neutron.plugins.ryu.ryu_neutron_plugin.RyuNeutronPluginV2
Depending on the value of core_plugin, the start-up scripts start the daemons by using the corresponding plug-in configuration file directly. For example, if you selected the Open vSwitch plug-in, neutron-server automatically launches with --config-file /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini. The neutron-common package also prompts you for the default network configuration.

Before you configure individual nodes for Networking, you must create the required OpenStack components: user, service, database, and one or more endpoints. After you complete these steps on the controller node, follow the instructions in this guide to set up OpenStack Networking nodes.

Use the password that you set previously to log in as root and create a neutron database:
# mysql -u root -p
mysql> CREATE DATABASE neutron;
mysql> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \
IDENTIFIED BY 'NEUTRON_DBPASS';
mysql> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \
IDENTIFIED BY 'NEUTRON_DBPASS';

Create the required user, service, and endpoint so that Networking can interface with the Identity Service.

To list the tenant IDs:
# keystone tenant-list

To list role IDs:
# keystone role-list

Create a neutron user:
# keystone user-create --name=neutron --pass=NEUTRON_PASS --email=neutron@example.com

Add the admin role to the neutron user:
# keystone user-role-add --user=neutron --tenant=service --role=admin

Create the neutron service:
# keystone service-create --name=neutron --type=network \
  --description="OpenStack Networking Service"

Create a Networking endpoint. Use the id property for the service that was returned in the previous step to create the endpoint:
# keystone endpoint-create \
  --service-id the_service_id_above \
  --publicurl http://controller:9696 \
  --adminurl http://controller:9696 \
  --internalurl http://controller:9696
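Optionally, verify the registration before you continue. Assuming the keystone client is configured with admin credentials, the new network service and its endpoint should appear in these listings:
# keystone service-list
# keystone endpoint-list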
Install Networking services on a dedicated network node

Before you start, set up a machine as a dedicated network node. Dedicated network nodes have a MGMT_INTERFACE NIC, a DATA_INTERFACE NIC, and an EXTERNAL_INTERFACE NIC. The management network handles communication among nodes. The data network handles communication coming to and from VMs. The external NIC connects the network node, and optionally the controller node, to the outside world so that your VMs can connect to it. All NICs must have static IPs. However, the data and external NICs require special setup. For details about Networking plug-ins, see the plug-in installation sections later in this chapter.

By default, the system-config-firewall automated firewall configuration tool is in place on RHEL. This graphical interface (and a curses-style interface with -tui on the end of the name) enables you to configure IP tables as a basic firewall. You should disable it when you work with Networking unless you are familiar with the underlying network technologies because, by default, it blocks various types of network traffic that are important to Networking. To disable it, simply launch the program and clear the Enabled check box. After you successfully set up OpenStack Networking, you can re-enable and configure the tool. However, during Networking setup, disable the tool to make it easier to debug network issues.

Install the OpenStack Networking service on the network node:
# apt-get install neutron-server neutron-dhcp-agent neutron-plugin-openvswitch-agent neutron-l3-agent
# yum install openstack-neutron
# zypper install openstack-neutron openstack-neutron-l3-agent \
  openstack-neutron-dhcp-agent openstack-neutron-metadata-agent

Respond to prompts for database management, [keystone_authtoken] settings, RabbitMQ credentials, and API endpoint registration.

Configure basic Networking-related services to start at boot time:
# for s in neutron-{dhcp,metadata,l3}-agent; do chkconfig $s on; done
# for s in openstack-neutron-{dhcp,metadata,l3}-agent; do chkconfig $s on; done

Enable packet forwarding and disable packet destination filtering so that the network node can coordinate traffic for the VMs. Edit the /etc/sysctl.conf file, as follows:
net.ipv4.ip_forward=1
net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0

Use the sysctl command to ensure that the changes made to the /etc/sysctl.conf file take effect:
# sysctl -p

It is recommended that you restart the networking service after changing values related to the networking configuration. This ensures that all modified values take effect immediately:
# service networking restart
# service network restart

Configure Networking to use keystone for authentication:

Set the auth_strategy configuration key to keystone in the DEFAULT section of the file:
# openstack-config --set /etc/neutron/neutron.conf DEFAULT auth_strategy keystone

Set the neutron configuration for keystone authentication:
# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken \
  auth_host controller
# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken \
  auth_port 35357
# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken \
  auth_protocol http
# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken \
  admin_tenant_name service
# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken \
  admin_user neutron
# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken \
  admin_password NEUTRON_PASS

Alternatively, to configure neutron to use keystone for authentication, edit the /etc/neutron/neutron.conf file.
Set the auth_strategy configuration key to keystone in the DEFAULT section of the file:
auth_strategy = keystone

Add these lines to the keystone_authtoken section of the file:
auth_host = controller
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = neutron
admin_password = NEUTRON_PASS

Set the root_helper configuration in the [agent] section of /etc/neutron/neutron.conf:
# openstack-config --set /etc/neutron/neutron.conf AGENT \
  root_helper "sudo neutron-rootwrap /etc/neutron/rootwrap.conf"

Configure access to the RabbitMQ service:
# openstack-config --set /etc/neutron/neutron.conf DEFAULT \
  rpc_backend neutron.openstack.common.rpc.impl_kombu
# openstack-config --set /etc/neutron/neutron.conf DEFAULT \
  rabbit_host controller
# openstack-config --set /etc/neutron/neutron.conf DEFAULT \
  rabbit_userid guest
# openstack-config --set /etc/neutron/neutron.conf DEFAULT \
  rabbit_password RABBIT_PASS

Alternatively, configure the RabbitMQ access by editing the /etc/neutron/neutron.conf file. Modify the following parameters in the DEFAULT section:
rabbit_host = controller
rabbit_userid = guest
rabbit_password = RABBIT_PASS

Configure access to the Qpid message queue:
# openstack-config --set /etc/neutron/neutron.conf DEFAULT \
  rpc_backend neutron.openstack.common.rpc.impl_qpid
# openstack-config --set /etc/neutron/neutron.conf DEFAULT \
  qpid_hostname controller
# openstack-config --set /etc/neutron/neutron.conf DEFAULT \
  qpid_port 5672
# openstack-config --set /etc/neutron/neutron.conf DEFAULT \
  qpid_username guest
# openstack-config --set /etc/neutron/neutron.conf DEFAULT \
  qpid_password guest

Configure Networking to connect to the database:
# openstack-config --set /etc/neutron/neutron.conf DATABASE sql_connection \
  mysql://neutron:NEUTRON_DBPASS@controller/neutron

Alternatively, configure Networking to connect to the database by editing the [database] section in the same file, as follows:
[database]
connection = mysql://neutron:NEUTRON_DBPASS@controller/neutron

Configure the /etc/neutron/api-paste.ini file for keystone authentication:
# openstack-config --set /etc/neutron/api-paste.ini filter:authtoken \
  paste.filter_factory keystoneclient.middleware.auth_token:filter_factory
# openstack-config --set /etc/neutron/api-paste.ini filter:authtoken \
  auth_host controller
# openstack-config --set /etc/neutron/api-paste.ini filter:authtoken \
  auth_uri http://controller:5000
# openstack-config --set /etc/neutron/api-paste.ini filter:authtoken \
  admin_tenant_name service
# openstack-config --set /etc/neutron/api-paste.ini filter:authtoken \
  admin_user neutron
# openstack-config --set /etc/neutron/api-paste.ini filter:authtoken \
  admin_password NEUTRON_PASS

Alternatively, edit the /etc/neutron/api-paste.ini file and add these lines to the [filter:authtoken] section:
[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
auth_host = controller
auth_uri = http://controller:5000
admin_tenant_name = service
admin_user = neutron
admin_password = NEUTRON_PASS

You must configure auth_uri to point to the public identity endpoint. Otherwise, clients might not be able to authenticate against an admin endpoint.

Install and configure a networking plug-in. OpenStack Networking uses this plug-in to perform software-defined networking. For instructions, see the plug-in sections later in this chapter. Then, return here when finished.
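Before you configure the plug-in, you can sanity-check the values you just set. This is an optional check that assumes the configuration values shown above; each key should echo back the value you configured:
# grep -E '^(auth_strategy|auth_host|rabbit_host|connection|root_helper)' /etc/neutron/neutron.conf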
Now that you have installed and configured a plug-in, it is time to configure the remaining parts of OpenStack Networking.

To perform DHCP on the software-defined networks, Networking supports several drivers. In general, you use the dnsmasq driver. Configure the /etc/neutron/dhcp_agent.ini file:
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
# openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT \
  dhcp_driver neutron.agent.linux.dhcp.Dnsmasq

To allow virtual machines to access the Compute metadata information, you must enable and configure the Networking metadata agent. The agent acts as a proxy for the Compute metadata service.

On the controller, edit the /etc/nova/nova.conf file to define a secret key that is shared between the Compute service and the Networking metadata agent. Add to the [DEFAULT] section:
[DEFAULT]
neutron_metadata_proxy_shared_secret = METADATA_PASS
service_neutron_metadata_proxy = true

Alternatively, set the keys with openstack-config:
# openstack-config --set /etc/nova/nova.conf DEFAULT \
  neutron_metadata_proxy_shared_secret METADATA_PASS
# openstack-config --set /etc/nova/nova.conf DEFAULT \
  service_neutron_metadata_proxy true

Restart the nova-api service:
# service nova-api restart
# service openstack-nova-api restart

On the network node, modify the metadata agent configuration. Edit the /etc/neutron/metadata_agent.ini file and modify the [DEFAULT] section:
[DEFAULT]
auth_url = http://controller:5000/v2.0
auth_region = regionOne
admin_tenant_name = service
admin_user = neutron
admin_password = NEUTRON_PASS
nova_metadata_ip = controller
metadata_proxy_shared_secret = METADATA_PASS

Alternatively, set the required keys with openstack-config:
# openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT \
  auth_url http://controller:5000/v2.0
# openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT \
  auth_region regionOne
# openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT \
  admin_tenant_name service
# openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT \
  admin_user neutron
# openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT \
  admin_password NEUTRON_PASS
# openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT \
  nova_metadata_ip controller
# openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT \
  metadata_proxy_shared_secret METADATA_PASS

The value of auth_region is case-sensitive and must match the endpoint region defined in Keystone.

The neutron-server initialization script expects a symbolic link /etc/neutron/plugin.ini pointing to the configuration file associated with your chosen plug-in. Using Open vSwitch, for example, the symbolic link must point to /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini. If this symbolic link does not exist, create it using the following commands:
# cd /etc/neutron
# ln -s plugins/openvswitch/ovs_neutron_plugin.ini plugin.ini

The openstack-neutron initialization script expects the variable NEUTRON_PLUGIN_CONF in the /etc/sysconfig/neutron file to reference the configuration file associated with your chosen plug-in. Using Open vSwitch, for example, edit the /etc/sysconfig/neutron file and add the following:
NEUTRON_PLUGIN_CONF="/etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini"

Restart Networking services.
# service neutron-server restart
# service neutron-dhcp-agent restart
# service neutron-l3-agent restart
# service neutron-metadata-agent restart
# service openstack-neutron restart
# service openstack-neutron-dhcp-agent restart
# service openstack-neutron-l3-agent restart
# service openstack-neutron-metadata-agent restart

Also restart your chosen Networking plug-in agent, for example, Open vSwitch:
# service neutron-plugin-openvswitch-agent restart
# service neutron-openvswitch-agent restart
# service openstack-neutron-openvswitch-agent restart

After you configure the compute and controller nodes, configure the base networks.
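Before you move on, two optional checks. First, confirm that the metadata shared secret matches on both nodes (assuming the default file locations used above). On the controller:
# grep neutron_metadata_proxy_shared_secret /etc/nova/nova.conf
On the network node:
# grep metadata_proxy_shared_secret /etc/neutron/metadata_agent.ini
Second, confirm that the agents registered with the Neutron server. The exact rows vary by plug-in, but the DHCP and L3 agents, plus your plug-in agent, should be listed as alive:
# neutron agent-list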
Install and configure the Networking plug-ins
Install the Open vSwitch (OVS) plug-in

Install the Open vSwitch plug-in and its dependencies:
# apt-get install neutron-plugin-openvswitch-agent openvswitch-switch
# yum install openstack-neutron-openvswitch
# zypper install openstack-neutron-openvswitch-agent

On Ubuntu 12.04 LTS with GRE, you must install the openvswitch-datapath-dkms package and restart the service to enable the GRE flow so that OVS 1.10 or higher is used. Make sure you are running the OVS 1.10 kernel module in addition to the OVS 1.10 user space. Both the kernel module and user space are required for VXLAN support. Otherwise, the error you see in the /var/log/openvswitch/ovs-vswitchd.log log file is "Stderr: 'ovs-ofctl: -1: negative values not supported for in_port\n'". If you see this error, make sure that modinfo openvswitch shows the right version. Also, check the output from dmesg for the version of the OVS module being loaded.

Start Open vSwitch:
# service openvswitch start
# service openvswitch-switch start
# service openvswitch-switch restart

And configure it to start when the system boots:
# chkconfig openvswitch on
# chkconfig openvswitch-switch on

No matter which networking technology you use, you must add the br-int integration bridge, which connects to the VMs, and the br-ex external bridge, which connects to the outside world:
# ovs-vsctl add-br br-int
# ovs-vsctl add-br br-ex

Add a port (connection) from the EXTERNAL_INTERFACE interface to the br-ex bridge:
# ovs-vsctl add-port br-ex EXTERNAL_INTERFACE

The host must have an IP address associated with an interface other than EXTERNAL_INTERFACE, and your remote terminal session must be associated with this other IP address. If you associate an IP address with EXTERNAL_INTERFACE, that IP address stops working after you issue the ovs-vsctl add-port br-ex EXTERNAL_INTERFACE command. If you associate a remote terminal session with that IP address, you lose connectivity with the host. For more details about this behavior, see the Configuration Problems section of the Open vSwitch FAQ.

Configure the EXTERNAL_INTERFACE without an IP address and in promiscuous mode. Additionally, you must set the newly created br-ex interface to have the IP address that formerly belonged to EXTERNAL_INTERFACE. Generic Receive Offload (GRO) should not be enabled on this interface because it can cause severe performance problems. It can be disabled with the ethtool utility; a sketch appears at the end of this section.

Edit the /etc/sysconfig/network-scripts/ifcfg-EXTERNAL_INTERFACE file:
DEVICE_INFO_HERE
ONBOOT=yes
BOOTPROTO=none
PROMISC=yes

Create and edit the /etc/sysconfig/network-scripts/ifcfg-br-ex file:
DEVICE=br-ex
TYPE=Bridge
ONBOOT=no
BOOTPROTO=none
IPADDR=EXTERNAL_INTERFACE_IP
NETMASK=EXTERNAL_INTERFACE_NETMASK
GATEWAY=EXTERNAL_INTERFACE_GATEWAY

You must set some common configuration options no matter which networking technology you choose to use with Open vSwitch. Configure the L3 and DHCP agents to use OVS and namespaces. Edit the /etc/neutron/l3_agent.ini and /etc/neutron/dhcp_agent.ini files, respectively:
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
use_namespaces = True

You must enable veth support if you use certain kernels. Some kernels, such as recent versions of RHEL (not RHOS) and CentOS, only partially support namespaces. Edit the previous files, as follows:
ovs_use_veth = True

Similarly, you must also tell the Neutron core to use OVS. Edit the /etc/neutron/neutron.conf file:
core_plugin = neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2

Choose a networking technology to create the virtual networks.
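If you want to verify the bridge layout at this point, ovs-vsctl can show it. The address-transfer commands below are a minimal, non-persistent sketch of the step described above, assuming EXTERNAL_INTERFACE_IP/PREFIX is the address that formerly belonged to EXTERNAL_INTERFACE. Run them from the console, because connectivity over that address drops during the change, and make the equivalent change persistent in your interface configuration files:
# ovs-vsctl show
# ip addr del EXTERNAL_INTERFACE_IP/PREFIX dev EXTERNAL_INTERFACE
# ip addr add EXTERNAL_INTERFACE_IP/PREFIX dev br-ex
# ip link set br-ex up
# ethtool -K EXTERNAL_INTERFACE gro off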
Neutron supports GRE tunneling, VLANs, and VXLAN. This guide shows how to configure GRE tunneling and VLANs.

GRE tunneling is simpler to set up because it does not require any special configuration from any physical network hardware. However, its protocol makes it difficult to filter traffic on the physical network. Additionally, this configuration does not use namespaces: you can have only one router for each network node. However, you can enable namespacing, and potentially veth, as described in the section detailing how to use VLANs with OVS.

On the other hand, VLAN tagging modifies the Ethernet header of packets, so you can filter packets on the physical network through normal methods. However, not all NICs handle the increased packet size of VLAN-tagged packets well, and you might need to complete additional configuration on physical network hardware to ensure that your Neutron VLANs do not interfere with any other VLANs on your network and that any physical network hardware between nodes does not strip VLAN tags.

While the examples in this guide enable network namespaces by default, you can disable them if issues occur or your kernel does not support them. Edit the /etc/neutron/l3_agent.ini and /etc/neutron/dhcp_agent.ini files, respectively:
use_namespaces = False

Edit the /etc/neutron/neutron.conf file to disable overlapping IP addresses:
allow_overlapping_ips = False

Note that when network namespaces are disabled, you can have only one router for each network node, and overlapping IP addresses are not supported. You must complete additional steps after you create the initial Neutron virtual networks and router.

Configure a firewall plug-in. If you do not wish to enforce firewall rules, called security groups by OpenStack, you can use neutron.agent.firewall.NoopFirewallDriver. Otherwise, you can choose one of the Networking firewall plug-ins. The most common choice is the Hybrid OVS-IPTables driver, but you can also use the Firewall-as-a-Service driver. Edit the /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini file:
[securitygroup]
# Firewall driver for realizing neutron security group function.
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

You must use at least the No-Op firewall. Otherwise, Horizon and other OpenStack services cannot get and set required VM boot options.

Configure the OVS plug-in to start on boot:
# chkconfig neutron-openvswitch-agent on
# chkconfig openstack-neutron-openvswitch-agent on

Now, return to the general OVS instructions.
Configure the Neutron OVS plug-in for GRE tunneling

Configure the OVS plug-in to use GRE tunneling, the br-int integration bridge, the br-tun tunneling bridge, and a local IP for the DATA_INTERFACE tunnel IP. Edit the /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini file:
[ovs]
tenant_network_type = gre
tunnel_id_ranges = 1:1000
enable_tunneling = True
integration_bridge = br-int
tunnel_bridge = br-tun
local_ip = DATA_INTERFACE_IP

Return to the general OVS instructions.
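After you restart the OVS agent with this configuration, the agent creates the br-tun tunneling bridge. As an optional check (bridge names assume the defaults shown above), list the bridges and confirm that br-int and br-tun exist:
# ovs-vsctl list-br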
Configure the Neutron OVS plug-in for VLANs

Configure OVS to use VLANs. Edit the /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini file:
[ovs]
tenant_network_type = vlan
network_vlan_ranges = physnet1:1:4094
bridge_mappings = physnet1:br-DATA_INTERFACE

Create the bridge for DATA_INTERFACE and add DATA_INTERFACE to it:
# ovs-vsctl add-br br-DATA_INTERFACE
# ovs-vsctl add-port br-DATA_INTERFACE DATA_INTERFACE

Transfer the IP address for DATA_INTERFACE to the bridge in the same way that you transferred the EXTERNAL_INTERFACE IP address to br-ex; a sketch follows this section. However, do not turn on promiscuous mode.

Return to the general OVS instructions.
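A minimal sketch of that address transfer, assuming DATA_INTERFACE_IP/PREFIX is the address currently on DATA_INTERFACE. As with br-ex, also make the change persistent in your distribution's interface configuration:
# ip addr del DATA_INTERFACE_IP/PREFIX dev DATA_INTERFACE
# ip addr add DATA_INTERFACE_IP/PREFIX dev br-DATA_INTERFACE
# ip link set br-DATA_INTERFACE up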
Install networking support on a dedicated compute node

This section details setup for any node that runs the nova-compute component but does not run the full network stack.

By default, the system-config-firewall automated firewall configuration tool is in place on RHEL. This graphical interface (and a curses-style interface with -tui on the end of the name) enables you to configure IP tables as a basic firewall. You should disable it when you work with Neutron unless you are familiar with the underlying network technologies because, by default, it blocks various types of network traffic that are important to Neutron. To disable it, simply launch the program and clear the Enabled check box. After you successfully set up OpenStack with Neutron, you can re-enable and configure the tool. However, during Neutron setup, disable the tool to make it easier to debug network issues.

Disable packet destination filtering (route verification) to let the networking services route traffic to the VMs. Edit the /etc/sysctl.conf file and run the following command to activate the changes:
net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0
# sysctl -p

Install and configure your networking plug-in components. To install and configure the network plug-in that you chose when you set up your network node, see the plug-in sections later in this chapter.

Configure Networking to use keystone for authentication:

Set the auth_strategy configuration key to keystone in the DEFAULT section of the file:
# openstack-config --set /etc/neutron/neutron.conf DEFAULT auth_strategy keystone

Set the neutron configuration for keystone authentication:
# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken \
  auth_host controller
# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken \
  auth_url http://controller:35357/v2.0
# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken \
  admin_tenant_name service
# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken \
  admin_user neutron
# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken \
  admin_password NEUTRON_PASS

Configure access to the RabbitMQ service:
# openstack-config --set /etc/neutron/neutron.conf DEFAULT \
  rpc_backend neutron.openstack.common.rpc.impl_kombu
# openstack-config --set /etc/neutron/neutron.conf DEFAULT \
  rabbit_host controller
# openstack-config --set /etc/neutron/neutron.conf DEFAULT \
  rabbit_userid guest
# openstack-config --set /etc/neutron/neutron.conf DEFAULT \
  rabbit_password RABBIT_PASS

Configure access to the Qpid message queue:
# openstack-config --set /etc/neutron/neutron.conf DEFAULT \
  rpc_backend neutron.openstack.common.rpc.impl_qpid
# openstack-config --set /etc/neutron/neutron.conf DEFAULT \
  qpid_hostname controller
# openstack-config --set /etc/neutron/neutron.conf DEFAULT \
  qpid_port 5672
# openstack-config --set /etc/neutron/neutron.conf DEFAULT \
  qpid_username guest
# openstack-config --set /etc/neutron/neutron.conf DEFAULT \
  qpid_password guest

Configure the core components of Neutron.
Edit the /etc/neutron/neutron.conf file:
auth_host = controller
admin_tenant_name = service
admin_user = neutron
admin_password = NEUTRON_PASS
auth_url = http://controller:35357/v2.0
auth_strategy = keystone
rpc_backend = neutron.openstack.common.rpc.impl_kombu
rabbit_host = controller
rabbit_port = 5672
# Change the following settings if you're not using the default RabbitMQ configuration
#rabbit_userid = guest
rabbit_password = RABBIT_PASS

Set the root_helper configuration in the [agent] section of /etc/neutron/neutron.conf:
# openstack-config --set /etc/neutron/neutron.conf AGENT \
  root_helper "sudo neutron-rootwrap /etc/neutron/rootwrap.conf"

Configure Networking to connect to the database:
# openstack-config --set /etc/neutron/neutron.conf DATABASE sql_connection \
  mysql://neutron:NEUTRON_DBPASS@controller/neutron

Alternatively, configure Networking to connect to the database by editing the [database] section in the same file, as follows:
[database]
connection = mysql://neutron:NEUTRON_DBPASS@controller/neutron

Edit the /etc/neutron/api-paste.ini file and add these lines to the [filter:authtoken] section:
[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
auth_host = controller
admin_tenant_name = service
admin_user = neutron
admin_password = NEUTRON_PASS

Alternatively, configure the /etc/neutron/api-paste.ini file for keystone authentication with openstack-config:
# openstack-config --set /etc/neutron/api-paste.ini filter:authtoken \
  paste.filter_factory keystoneclient.middleware.auth_token:filter_factory
# openstack-config --set /etc/neutron/api-paste.ini filter:authtoken \
  auth_host controller
# openstack-config --set /etc/neutron/api-paste.ini filter:authtoken \
  admin_tenant_name service
# openstack-config --set /etc/neutron/api-paste.ini filter:authtoken \
  admin_user neutron
# openstack-config --set /etc/neutron/api-paste.ini filter:authtoken \
  admin_password NEUTRON_PASS

Configure OpenStack Compute to use OpenStack Networking services. Configure the /etc/nova/nova.conf file as follows:
# openstack-config --set /etc/nova/nova.conf DEFAULT \
  network_api_class nova.network.neutronv2.api.API
# openstack-config --set /etc/nova/nova.conf DEFAULT \
  neutron_url http://controller:9696
# openstack-config --set /etc/nova/nova.conf DEFAULT \
  neutron_auth_strategy keystone
# openstack-config --set /etc/nova/nova.conf DEFAULT \
  neutron_admin_tenant_name service
# openstack-config --set /etc/nova/nova.conf DEFAULT \
  neutron_admin_username neutron
# openstack-config --set /etc/nova/nova.conf DEFAULT \
  neutron_admin_password NEUTRON_PASS
# openstack-config --set /etc/nova/nova.conf DEFAULT \
  neutron_admin_auth_url http://controller:35357/v2.0
# openstack-config --set /etc/nova/nova.conf DEFAULT \
  linuxnet_interface_driver nova.network.linux_net.LinuxOVSInterfaceDriver
# openstack-config --set /etc/nova/nova.conf DEFAULT \
  firewall_driver nova.virt.firewall.NoopFirewallDriver
# openstack-config --set /etc/nova/nova.conf DEFAULT \
  security_group_api neutron

Alternatively, configure OpenStack Compute to use OpenStack Networking services by editing the file directly.
Edit the /etc/nova/nova.conf file:
network_api_class=nova.network.neutronv2.api.API
neutron_url=http://controller:9696
neutron_auth_strategy=keystone
neutron_admin_tenant_name=service
neutron_admin_username=neutron
neutron_admin_password=NEUTRON_PASS
neutron_admin_auth_url=http://controller:35357/v2.0
linuxnet_interface_driver=nova.network.linux_net.LinuxOVSInterfaceDriver
firewall_driver=nova.virt.firewall.NoopFirewallDriver
security_group_api=neutron

No matter which firewall driver you chose when you configured the network and compute nodes, you must edit the /etc/nova/nova.conf file to set the firewall driver to nova.virt.firewall.NoopFirewallDriver. Because OpenStack Networking handles the firewall, this statement instructs Compute not to use a firewall.

If you want Networking to handle the firewall, edit the /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini file to set the firewall_driver option to the firewall for the plug-in. For example, with OVS, edit the file as follows:
[securitygroup]
# Firewall driver for realizing neutron security group function.
firewall_driver=neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

# openstack-config --set /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini \
  securitygroup firewall_driver \
  neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

If you do not want to use a firewall in Compute or Networking, edit both configuration files and set firewall_driver=nova.virt.firewall.NoopFirewallDriver. Also, edit the /etc/nova/nova.conf file and comment out or remove the security_group_api=neutron statement. Otherwise, when you issue nova list commands, the ERROR: The server has either erred or is incapable of performing the requested operation. (HTTP 500) error might be returned.

Restart the Compute service:
# service nova-compute restart
# service openstack-nova-compute restart

Also restart your chosen Networking plug-in agent, for example, Open vSwitch:
# service neutron-plugin-openvswitch-agent restart
# service neutron-openvswitch-agent restart
# service openstack-neutron-openvswitch-agent restart
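To double-check the Compute settings, you can read the values back. This assumes that your distribution's openstack-config supports the --get action; each command should print the value set above:
# openstack-config --get /etc/nova/nova.conf DEFAULT network_api_class
# openstack-config --get /etc/nova/nova.conf DEFAULT security_group_api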
Install and configure Neutron plug-ins on a dedicated compute node
Install the Open vSwitch (OVS) plug-in on a dedicated compute node

Install the Open vSwitch plug-in and its dependencies:
# apt-get install neutron-plugin-openvswitch-agent openvswitch-switch openvswitch-datapath-dkms
# yum install openstack-neutron-openvswitch
# zypper install openstack-neutron-openvswitch-agent

Restart Open vSwitch:
# service openvswitch-switch restart

Start Open vSwitch and configure it to start when the system boots:
# service openvswitch start
# chkconfig openvswitch on
# service openvswitch-switch start
# chkconfig openvswitch-switch on

You must set some common configuration options no matter which networking technology you choose to use with Open vSwitch. Add the br-int integration bridge, which connects to the VMs:
# ovs-vsctl add-br br-int

You must configure Networking core to use OVS. Edit the /etc/neutron/neutron.conf file:
core_plugin = neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2

Depending on your distribution, you might also need to set the following options in the same file:
auth_uri = http://controller:5000
api_paste_config = /etc/neutron/api-paste.ini
rpc_backend = neutron.openstack.common.rpc.impl_qpid

Configure the networking type that you chose when you set up the network node: either GRE tunneling or VLANs.

You must configure a firewall as well. Use the same firewall plug-in that you chose when you set up the network node. To do this, edit the /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini file and set the firewall_driver value under [securitygroup] to the same value used on the network node. For instance, if you chose the Hybrid OVS-IPTables plug-in, your configuration looks like this:
[securitygroup]
# Firewall driver for realizing neutron security group function.
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

You must use at least the No-Op firewall. Otherwise, Horizon and other OpenStack services cannot get and set required VM boot options.

Configure the OVS plug-in to start on boot:
# chkconfig neutron-openvswitch-agent on
# chkconfig openstack-neutron-openvswitch-agent on

Now, return to the general OVS instructions.
Configure the Neutron OVS plug-in for GRE tunneling on a dedicated compute node

Tell the OVS plug-in to use GRE tunneling with a br-int integration bridge, a br-tun tunneling bridge, and a local IP for the tunnel set to the DATA_INTERFACE IP address. Edit the /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini file:
[ovs]
tenant_network_type = gre
tunnel_id_ranges = 1:1000
enable_tunneling = True
integration_bridge = br-int
tunnel_bridge = br-tun
local_ip = DATA_INTERFACE_IP

Now, return to the general OVS instructions.
Configure the Neutron OVS plug-in for VLANs on a dedicated compute node

Tell OVS to use VLANs. Edit the /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini file:
[ovs]
tenant_network_type = vlan
network_vlan_ranges = physnet1:1:4094
bridge_mappings = physnet1:br-DATA_INTERFACE

Create the bridge for the DATA_INTERFACE and add DATA_INTERFACE to it, the same way you did on the network node:
# ovs-vsctl add-br br-DATA_INTERFACE
# ovs-vsctl add-port br-DATA_INTERFACE DATA_INTERFACE

Return to the general OVS instructions.
Install networking support on a dedicated controller node

This section describes how to set up a node that runs the control components of Neutron but does not run any of the components that provide the underlying functionality (such as the plug-in agent or the L3 agent). If you wish to have a combined controller/compute node, follow these instructions and then those for the compute node.

By default, the system-config-firewall automated firewall configuration tool is in place on RHEL. This graphical interface (and a curses-style interface with -tui on the end of the name) enables you to configure IP tables as a basic firewall. You should disable it when you work with Neutron unless you are familiar with the underlying network technologies because, by default, it blocks various types of network traffic that are important to Neutron. To disable it, simply launch the program and clear the Enabled check box. After you successfully set up OpenStack with Neutron, you can re-enable and configure the tool. However, during Neutron setup, disable the tool to make it easier to debug network issues.

Install the server component of Networking and any dependencies:
# apt-get install neutron-server
# yum install openstack-neutron python-neutron python-neutronclient
# zypper install openstack-neutron python-neutron python-neutronclient

Configure Networking to connect to the database:
# openstack-config --set /etc/neutron/neutron.conf DATABASE sql_connection \
  mysql://neutron:NEUTRON_DBPASS@controller/neutron

Alternatively, configure Networking to use your MySQL database by editing the /etc/neutron/neutron.conf file. Add the following key under the [database] section. Replace NEUTRON_DBPASS with the password you chose for the Neutron database.
[database]
...
connection = mysql://neutron:NEUTRON_DBPASS@controller/neutron

Configure Networking to use keystone for authentication:

Set the auth_strategy configuration key to keystone in the DEFAULT section of the file:
# openstack-config --set /etc/neutron/neutron.conf DEFAULT auth_strategy keystone

Set the neutron configuration for keystone authentication:
# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken \
  auth_host controller
# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken \
  auth_url http://controller:35357/v2.0
# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken \
  admin_tenant_name service
# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken \
  admin_user neutron
# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken \
  admin_password NEUTRON_PASS

Alternatively, configure Networking to use keystone as the Identity Service for authentication by editing the /etc/neutron/neutron.conf file. Add the following key under the [DEFAULT] section:
[DEFAULT]
...
auth_strategy = keystone

Add the following keys under the [keystone_authtoken] section. Replace NEUTRON_PASS with the password you chose for the Neutron user in Keystone.
[keystone_authtoken]
...
auth_host = controller
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = neutron
admin_password = NEUTRON_PASS
auth_uri = http://controller:5000
auth_url = http://controller:35357/v2.0

Edit the /etc/neutron/api-paste.ini file and add the following keys under the [filter:authtoken] section. Replace NEUTRON_PASS with the password you chose for the Neutron user in Keystone.
[filter:authtoken]
...
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
admin_tenant_name = service
admin_user = neutron
admin_password = NEUTRON_PASS

Alternatively, configure the /etc/neutron/api-paste.ini file for keystone authentication with openstack-config:
# openstack-config --set /etc/neutron/api-paste.ini filter:authtoken \
  paste.filter_factory keystoneclient.middleware.auth_token:filter_factory
# openstack-config --set /etc/neutron/api-paste.ini filter:authtoken \
  auth_host controller
# openstack-config --set /etc/neutron/api-paste.ini filter:authtoken \
  admin_tenant_name service
# openstack-config --set /etc/neutron/api-paste.ini filter:authtoken \
  admin_user neutron
# openstack-config --set /etc/neutron/api-paste.ini filter:authtoken \
  admin_password NEUTRON_PASS

Configure access to the RabbitMQ service:
# openstack-config --set /etc/neutron/neutron.conf DEFAULT \
  rpc_backend neutron.openstack.common.rpc.impl_kombu
# openstack-config --set /etc/neutron/neutron.conf DEFAULT \
  rabbit_host controller
# openstack-config --set /etc/neutron/neutron.conf DEFAULT \
  rabbit_userid guest
# openstack-config --set /etc/neutron/neutron.conf DEFAULT \
  rabbit_password RABBIT_PASS

Configure access to the Qpid message queue:
# openstack-config --set /etc/neutron/neutron.conf DEFAULT \
  rpc_backend neutron.openstack.common.rpc.impl_qpid
# openstack-config --set /etc/neutron/neutron.conf DEFAULT \
  qpid_hostname controller
# openstack-config --set /etc/neutron/neutron.conf DEFAULT \
  qpid_port 5672
# openstack-config --set /etc/neutron/neutron.conf DEFAULT \
  qpid_username guest
# openstack-config --set /etc/neutron/neutron.conf DEFAULT \
  qpid_password guest

Alternatively, configure Networking to use your message broker by editing the /etc/neutron/neutron.conf file. Add the following keys under the [DEFAULT] section. Replace RABBIT_PASS with the password you chose for RabbitMQ.
[DEFAULT]
...
rpc_backend = neutron.openstack.common.rpc.impl_kombu
rabbit_host = controller
rabbit_password = RABBIT_PASS

Set the root_helper configuration in the [agent] section of /etc/neutron/neutron.conf:
# openstack-config --set /etc/neutron/neutron.conf AGENT \
  root_helper "sudo neutron-rootwrap /etc/neutron/rootwrap.conf"

Although the controller node does not run any Networking agents, you must install and configure the same plug-in that you configured on the network node. See "Install and configure the Neutron plug-ins on a dedicated controller node" below.

Configure OpenStack Compute to use OpenStack Networking services.
Configure the /etc/nova/nova.conf file as follows:
# openstack-config --set /etc/nova/nova.conf DEFAULT \
  network_api_class nova.network.neutronv2.api.API
# openstack-config --set /etc/nova/nova.conf DEFAULT \
  neutron_url http://controller:9696
# openstack-config --set /etc/nova/nova.conf DEFAULT \
  neutron_auth_strategy keystone
# openstack-config --set /etc/nova/nova.conf DEFAULT \
  neutron_admin_tenant_name service
# openstack-config --set /etc/nova/nova.conf DEFAULT \
  neutron_admin_username neutron
# openstack-config --set /etc/nova/nova.conf DEFAULT \
  neutron_admin_password NEUTRON_PASS
# openstack-config --set /etc/nova/nova.conf DEFAULT \
  neutron_admin_auth_url http://controller:35357/v2.0
# openstack-config --set /etc/nova/nova.conf DEFAULT \
  linuxnet_interface_driver nova.network.linux_net.LinuxOVSInterfaceDriver
# openstack-config --set /etc/nova/nova.conf DEFAULT \
  firewall_driver nova.virt.firewall.NoopFirewallDriver
# openstack-config --set /etc/nova/nova.conf DEFAULT \
  security_group_api neutron

Alternatively, configure OpenStack Compute to use OpenStack Networking services by editing the /etc/nova/nova.conf file:
network_api_class=nova.network.neutronv2.api.API
neutron_url=http://controller:9696
neutron_auth_strategy=keystone
neutron_admin_tenant_name=service
neutron_admin_username=neutron
neutron_admin_password=NEUTRON_PASS
neutron_admin_auth_url=http://controller:35357/v2.0
linuxnet_interface_driver=nova.network.linux_net.LinuxOVSInterfaceDriver
firewall_driver=nova.virt.firewall.NoopFirewallDriver
security_group_api=neutron

Regardless of which firewall driver you chose when you configured the network and compute nodes, set this driver as the No-Op firewall here. This firewall is a nova firewall; because neutron handles the firewall, you must tell nova not to use one.

When Networking handles the firewall, set the firewall_driver option according to the specified plug-in. For example, with OVS, edit the /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini file:
[securitygroup]
# Firewall driver for realizing neutron security group function.
firewall_driver=neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

# openstack-config --set /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini \
  securitygroup firewall_driver \
  neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

If you do not want to use a firewall in Compute or Networking, set firewall_driver=nova.virt.firewall.NoopFirewallDriver in both configuration files, and comment out or remove security_group_api=neutron in the /etc/nova/nova.conf file. Otherwise, you may encounter the error ERROR: The server has either erred or is incapable of performing the requested operation. (HTTP 500) when issuing nova list commands.

The neutron-server initialization script expects a symbolic link /etc/neutron/plugin.ini pointing to the configuration file associated with your chosen plug-in. Using Open vSwitch, for example, the symbolic link must point to /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini. If this symbolic link does not exist, create it using the following commands:
# cd /etc/neutron
# ln -s plugins/openvswitch/ovs_neutron_plugin.ini plugin.ini

The openstack-neutron initialization script expects the variable NEUTRON_PLUGIN_CONF in the /etc/sysconfig/neutron file to reference the configuration file associated with your chosen plug-in.
Using Open vSwitch, for example, edit the /etc/sysconfig/neutron file and add the following:
NEUTRON_PLUGIN_CONF="/etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini"

Start neutron-server and set it to start at boot:
# service neutron-server start
# chkconfig neutron-server on
# service openstack-neutron start
# chkconfig openstack-neutron on

Restart neutron-server:
# service neutron-server restart
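As a quick smoke test, list networks against the new server. This assumes admin credentials are exported in the environment (OS_USERNAME, OS_PASSWORD, OS_TENANT_NAME, and OS_AUTH_URL); an empty list is the expected result at this stage and confirms that the API, database, and Identity integration work:
# neutron net-list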
Install and configure the Neutron plug-ins on a dedicated controller node
Install the Open vSwitch (OVS) plug-in on a dedicated controller node

Install the Open vSwitch plug-in:
# apt-get install neutron-plugin-openvswitch-agent
# yum install openstack-neutron-openvswitch
# zypper install openstack-neutron-openvswitch-agent

You must set some common configuration options no matter which networking technology you choose to use with Open vSwitch. You must configure Networking core to use OVS. Edit the /etc/neutron/neutron.conf file:
core_plugin = neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2

Configure the OVS plug-in for the networking type that you chose when you configured the network node: GRE tunneling or VLANs. The dedicated controller node does not need to run Open vSwitch or the Open vSwitch agent.

Now, return to the general OVS instructions.
Configure the Neutron OVS plug-in for GRE tunneling on a dedicated controller node

Tell the OVS plug-in to use GRE tunneling. Edit the /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini file:
[ovs]
tenant_network_type = gre
tunnel_id_ranges = 1:1000
enable_tunneling = True

Return to the general OVS instructions.
Configure the Neutron OVS plug-in for VLANs on a dedicated controller node

Tell OVS to use VLANs. Edit the /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini file, as follows:
[ovs]
tenant_network_type = vlan
network_vlan_ranges = physnet1:1:4094

Return to the general OVS instructions.
Create the base Neutron networks

In these sections, replace SPECIAL_OPTIONS with any options specific to your Networking plug-in choices. See the plug-in-specific options section to check whether your plug-in requires any special options.

Create the ext-net external network. This network represents a slice of the outside world. VMs are not directly linked to this network; instead, they connect to internal networks. Outgoing traffic is routed by Neutron to the external network. Additionally, floating IP addresses from the ext-net subnet might be assigned to VMs so that the external network can contact them. Neutron routes the traffic appropriately.
# neutron net-create ext-net -- --router:external=True SPECIAL_OPTIONS

Create the associated subnet with the same gateway and CIDR as EXTERNAL_INTERFACE. It does not use DHCP because it represents a slice of the external world:
# neutron subnet-create ext-net \
  --allocation-pool start=FLOATING_IP_START,end=FLOATING_IP_END \
  --gateway=EXTERNAL_INTERFACE_GATEWAY --enable_dhcp=False \
  EXTERNAL_INTERFACE_CIDR

Create one or more initial tenants, for example:
# keystone tenant-create --name DEMO_TENANT

Create the router attached to the external network. This router routes traffic to the internal subnets as appropriate. You can create it under a given tenant: append the --tenant-id option with a value of DEMO_TENANT_ID to the command.

Use the following command to quickly get the DEMO_TENANT tenant ID:
# keystone tenant-list | grep DEMO_TENANT | awk '{print $2;}'

Then create the router:
# neutron router-create ext-to-int --tenant-id DEMO_TENANT_ID

Connect the router to ext-net by setting the gateway for the router as ext-net:
# neutron router-gateway-set EXT_TO_INT_ID EXT_NET_ID

Create an internal network for DEMO_TENANT (and an associated subnet over an arbitrary internal IP range, such as 10.5.5.0/24), and connect it to the router by setting it as a port:
# neutron net-create --tenant-id DEMO_TENANT_ID demo-net SPECIAL_OPTIONS
# neutron subnet-create --tenant-id DEMO_TENANT_ID demo-net 10.5.5.0/24 --gateway 10.5.5.1
# neutron router-interface-add EXT_TO_INT_ID DEMO_NET_SUBNET_ID

Check the plug-in-specific options section for any remaining steps. Now, return to the general OVS instructions.
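To confirm the topology that you just created (the names and placeholder IDs are the ones used in this example):
# neutron net-list
# neutron subnet-list
# neutron router-list
# neutron router-port-list EXT_TO_INT_ID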
Plug-in-specific Neutron network options
Open vSwitch Network configuration options
GRE tunneling network options

While this guide currently enables network namespaces by default, you can disable them if you have issues or your kernel does not support them. If you disabled namespaces, you must perform some additional configuration for the L3 agent.

After you create all the networks, tell the L3 agent what the external network ID is, as well as the ID of the router associated with this machine (because you are not using namespaces, there can be only one router for each machine). To do this, edit the /etc/neutron/l3_agent.ini file:
gateway_external_network_id = EXT_NET_ID
router_id = EXT_TO_INT_ID

Then, restart the L3 agent:
# service neutron-l3-agent restart

When you create networks, use these options:
--provider:network_type gre --provider:segmentation_id SEG_ID

SEG_ID should be 2 for the external network, and any unique number inside the tunnel range specified earlier for any other network. These options are not needed beyond the first network: Neutron automatically increments the segmentation ID and copies the network type option for any additional networks. An example follows this section.

Now, return to the general OVS instructions.
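For example, here is the ext-net creation command from the base networks section with these GRE options filled in; a SEG_ID of 2 follows the convention above:
# neutron net-create ext-net -- --router:external=True \
  --provider:network_type gre --provider:segmentation_id 2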
VLAN network options

When you create networks, use these options:
--provider:network_type vlan --provider:physical_network physnet1 --provider:segmentation_id SEG_ID

SEG_ID should be 2 for the external network, and any unique number inside the VLAN range specified earlier for any other network. These options are not needed beyond the first network: Neutron automatically increments the segmentation ID and copies the network type and physical network options for any additional networks. They are needed only if you wish to modify those values in any way.

Some NICs have Linux drivers that do not handle VLANs properly. See the ovs-vlan-bug-workaround and ovs-vlan-test man pages for more information. Additionally, you might try turning off rx-vlan-offload and tx-vlan-offload by using ethtool on the DATA_INTERFACE.

Another potential caveat to VLAN functionality is that VLAN tags add an additional 4 bytes to the packet size. If your NICs cannot handle large packets, make sure to set the MTU to a value that is 4 bytes less than the normal value on the DATA_INTERFACE; a sketch of both workarounds follows. If you run OpenStack inside a virtualized environment (for testing purposes), switching to the virtio NIC type (or a similar technology if you are not using KVM/QEMU to run your host VMs) might solve the issue.
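The following commands are a hedged sketch only: ethtool feature names vary by driver and ethtool version, and the MTU of 1496 assumes a normal MTU of 1500 on DATA_INTERFACE:
# ethtool -K DATA_INTERFACE rxvlan off txvlan off
# ip link set DATA_INTERFACE mtu 1496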