Advanced configuration options

This section describes advanced configuration options for various system components: options whose defaults work, but that you might want to customize. After installing from packages, $NEUTRON_CONF_DIR is /etc/neutron.
OpenStack Networking server with plug-in

This is the web server that runs the OpenStack Networking API. It is responsible for loading a plug-in and passing API calls to the plug-in for processing. The neutron-server receives one or more configuration files as input, for example:

neutron-server --config-file <neutron config> --config-file <plugin config>

The neutron config contains the common neutron configuration parameters. The plug-in config contains the plug-in-specific flags. The plug-in that runs on the service is loaded through the core_plugin configuration parameter. In some cases a plug-in might have an agent that performs the actual networking.

Most plug-ins require a SQL database. After you install and start the database server, set a password for the root account and delete the anonymous accounts:

$> mysql -u root
mysql> update mysql.user set password = password('iamroot') where user = 'root';
mysql> delete from mysql.user where user = '';

Create a database and user account specifically for the plug-in:

mysql> create database <database-name>;
mysql> create user '<user-name>'@'localhost' identified by '<password>';
mysql> create user '<user-name>'@'%' identified by '<password>';
mysql> grant all on <database-name>.* to '<user-name>'@'localhost';
mysql> grant all on <database-name>.* to '<user-name>'@'%';

After this is done, you can update the settings in the relevant plug-in configuration files. The plug-in-specific configuration files can be found at $NEUTRON_CONF_DIR/plugins.

Some plug-ins have an L2 agent that performs the actual networking; that is, the agent attaches the virtual machine NIC to the OpenStack Networking network. Each node should have an L2 agent running on it. Note that the agent receives the following input parameters:

neutron-plugin-agent --config-file <neutron config> --config-file <plugin config>

Two things must be done before working with the plug-in:

- Ensure that the core plug-in is updated.
- Ensure that the database connection is correctly set.
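The database bootstrap above can be scripted. The sketch below generates the SQL from shell variables; the database name, user name, and password shown are placeholder values for illustration, not required names:

```shell
# Sketch: generate the plug-in database bootstrap SQL from shell variables.
# DB_NAME, DB_USER, and DB_PASS are placeholder values for illustration.
DB_NAME=ovs_neutron
DB_USER=neutron
DB_PASS=secret

cat > /tmp/neutron_db.sql <<EOF
create database ${DB_NAME};
create user '${DB_USER}'@'localhost' identified by '${DB_PASS}';
create user '${DB_USER}'@'%' identified by '${DB_PASS}';
grant all on ${DB_NAME}.* to '${DB_USER}'@'localhost';
grant all on ${DB_NAME}.* to '${DB_USER}'@'%';
EOF

# The file could then be fed to the database server, for example:
#   mysql -u root -p < /tmp/neutron_db.sql
```

Generating the statements once and reviewing them before running them against the server makes it easier to keep the database name consistent with the connection string in the plug-in configuration file.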
The following table contains examples for these settings. Some Linux packages might provide installation utilities that configure these.
Settings

Open vSwitch
  core_plugin ($NEUTRON_CONF_DIR/neutron.conf):
    neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2
  connection (in the plug-in configuration file, section [database]):
    mysql://<username>:<password>@localhost/ovs_neutron?charset=utf8
  Plug-in configuration file:
    $NEUTRON_CONF_DIR/plugins/openvswitch/ovs_neutron_plugin.ini
  Agent:
    neutron-openvswitch-agent

Linux Bridge
  core_plugin ($NEUTRON_CONF_DIR/neutron.conf):
    neutron.plugins.linuxbridge.lb_neutron_plugin.LinuxBridgePluginV2
  connection (in the plug-in configuration file, section [database]):
    mysql://<username>:<password>@localhost/neutron_linux_bridge?charset=utf8
  Plug-in configuration file:
    $NEUTRON_CONF_DIR/plugins/linuxbridge/linuxbridge_conf.ini
  Agent:
    neutron-linuxbridge-agent
All plug-in configuration file options can be found in the Appendix, Configuration File Options.
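For example, wiring up the Open vSwitch plug-in comes down to the two settings in the table. The sketch below writes minimal stand-ins under /tmp rather than touching the real files in $NEUTRON_CONF_DIR, and the database credentials are placeholders:

```shell
# Sketch: the two Open vSwitch settings from the table, written to
# throwaway copies instead of the real files under /etc/neutron.
NEUTRON_CONF_DIR=/tmp/neutron-example          # stand-in for /etc/neutron
mkdir -p "$NEUTRON_CONF_DIR/plugins/openvswitch"

cat > "$NEUTRON_CONF_DIR/neutron.conf" <<'EOF'
[DEFAULT]
core_plugin = neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2
EOF

cat > "$NEUTRON_CONF_DIR/plugins/openvswitch/ovs_neutron_plugin.ini" <<'EOF'
[database]
# neutron:secret below stands in for <username>:<password>
connection = mysql://neutron:secret@localhost/ovs_neutron?charset=utf8
EOF
```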
DHCP agent

You can run a DHCP server that allocates IP addresses to virtual machines running on the network. When a subnet is created, by default, it has DHCP enabled. The node that runs the DHCP agent should run:

neutron-dhcp-agent --config-file <neutron config> --config-file <dhcp config>

Currently the DHCP agent uses dnsmasq to perform the static address assignment. A driver must be configured that matches the plug-in running on the service.
Basic settings

Open vSwitch
  interface_driver ($NEUTRON_CONF_DIR/dhcp_agent.ini):
    neutron.agent.linux.interface.OVSInterfaceDriver

Linux Bridge
  interface_driver ($NEUTRON_CONF_DIR/dhcp_agent.ini):
    neutron.agent.linux.interface.BridgeInterfaceDriver
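A minimal dhcp_agent.ini for the Open vSwitch case might look like the sketch below. It is written to /tmp for illustration rather than to $NEUTRON_CONF_DIR, and the dhcp_driver line simply makes the dnsmasq choice mentioned above explicit:

```shell
# Sketch: a minimal dhcp_agent.ini for the Open vSwitch plug-in,
# written to an illustrative path instead of $NEUTRON_CONF_DIR.
cat > /tmp/dhcp_agent.ini <<'EOF'
[DEFAULT]
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
EOF

# The agent would then be started with something like:
#   neutron-dhcp-agent --config-file /etc/neutron/neutron.conf \
#                      --config-file /etc/neutron/dhcp_agent.ini
```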
Namespace

By default the DHCP agent uses Linux network namespaces to support overlapping IP addresses. Requirements for network namespace support are described in the Limitations section. If the Linux installation does not support network namespaces, you must disable them in the DHCP agent configuration file (the default value of use_namespaces is True):

use_namespaces = False
L3 agent

You can run an L3 agent that enables layer 3 forwarding and floating IP support. The node that runs the L3 agent should run:

neutron-l3-agent --config-file <neutron config> --config-file <l3 config>

A driver must be configured that matches the plug-in running on the service. The driver is used to create the routing interface.
Basic settings

Open vSwitch
  interface_driver ($NEUTRON_CONF_DIR/l3_agent.ini):
    neutron.agent.linux.interface.OVSInterfaceDriver
  external_network_bridge ($NEUTRON_CONF_DIR/l3_agent.ini):
    br-ex

Linux Bridge
  interface_driver ($NEUTRON_CONF_DIR/l3_agent.ini):
    neutron.agent.linux.interface.BridgeInterfaceDriver
  external_network_bridge ($NEUTRON_CONF_DIR/l3_agent.ini):
    This field must be empty (or the bridge name for the external network).
The L3 agent communicates with the OpenStack Networking server through the OpenStack Networking API, so the following configuration is required:

OpenStack Identity authentication:

auth_url="$KEYSTONE_SERVICE_PROTOCOL://$KEYSTONE_AUTH_HOST:$KEYSTONE_AUTH_PORT/v2.0"

For example, http://10.56.51.210:5000/v2.0

Admin user details:

admin_tenant_name $SERVICE_TENANT_NAME
admin_user $Q_ADMIN_USERNAME
admin_password $SERVICE_PASSWORD
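Putting the table and the authentication settings together, an l3_agent.ini for the Open vSwitch plug-in might look like the sketch below. It is written to /tmp for illustration; the Identity endpoint reuses the example from the text, and the admin tenant, user, and password are placeholder values:

```shell
# Sketch: an Open vSwitch l3_agent.ini combining the interface driver,
# external bridge, and Identity authentication settings described above.
# The endpoint is the example from the text; the admin credentials are
# placeholders for your deployment's values.
cat > /tmp/l3_agent.ini <<'EOF'
[DEFAULT]
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
external_network_bridge = br-ex
auth_url = http://10.56.51.210:5000/v2.0
admin_tenant_name = service
admin_user = neutron
admin_password = servicepassword
EOF
```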
Namespace

By default the L3 agent uses Linux network namespaces to support overlapping IP addresses. Requirements for network namespace support are described in the Limitations section. If the Linux installation does not support network namespaces, you must disable them in the L3 agent configuration file (the default value of use_namespaces is True):

use_namespaces = False

When use_namespaces is set to False, only one router ID can be supported per node. It must be set through the router_id configuration variable:

# If use_namespaces is set to False then the agent can only configure one router.
# This is done by setting the specific router_id.
router_id = 1064ad16-36b7-4c2f-86f0-daa2bcbd6b2a

To configure it, run the OpenStack Networking service and create a router, and then set the ID of the created router as router_id in the L3 agent configuration file:

$ neutron router-create myrouter1
Created a new router:
+-----------------------+--------------------------------------+
| Field                 | Value                                |
+-----------------------+--------------------------------------+
| admin_state_up        | True                                 |
| external_gateway_info |                                      |
| id                    | 338d42d7-b22e-42c5-9df6-f3674768fe75 |
| name                  | myrouter1                            |
| status                | ACTIVE                               |
| tenant_id             | 0c236f65baa04e6f9b4236b996555d56     |
+-----------------------+--------------------------------------+
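In the single-router case, the id reported by neutron router-create is the value that goes into router_id. A small sketch, using the UUID from the example output and an illustrative file path:

```shell
# Sketch: record the router UUID reported by "neutron router-create"
# and write the single-router L3 agent settings. The file path and the
# UUID are taken from the example above, for illustration only.
ROUTER_ID=338d42d7-b22e-42c5-9df6-f3674768fe75

cat > /tmp/l3_agent_single_router.ini <<EOF
[DEFAULT]
use_namespaces = False
router_id = ${ROUTER_ID}
EOF
```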
Multiple floating IP pools

The L3 API in OpenStack Networking supports multiple floating IP pools. In OpenStack Networking, a floating IP pool is represented as an external network, and a floating IP is allocated from a subnet associated with the external network. Because each L3 agent can be associated with at most one external network, you must run multiple L3 agents to define multiple floating IP pools. The gateway_external_network_id option in the L3 agent configuration file indicates the external network that the L3 agent handles. You can run multiple L3 agent instances on one host.

In addition, when you run multiple L3 agents, make sure that handle_internal_only_routers is set to True only for one L3 agent in an OpenStack Networking deployment and set to False for all the other L3 agents. Because the default value of this parameter is True, you must configure it carefully.

Before starting the L3 agents, create the routers and external networks, then update the configuration files with the UUIDs of the external networks and start the agents.

For the first agent, invoke it with the following l3_agent.ini, where handle_internal_only_routers is True:

handle_internal_only_routers = True
gateway_external_network_id = 2118b11c-011e-4fa5-a6f1-2ca34d372c35
external_network_bridge = br-ex

python /opt/stack/neutron/bin/neutron-l3-agent --config-file /etc/neutron/neutron.conf --config-file=/etc/neutron/l3_agent.ini

For the second (or later) agent, invoke it with the following l3_agent.ini, where handle_internal_only_routers is False:

handle_internal_only_routers = False
gateway_external_network_id = e828e54c-850a-4e74-80a8-8b79c6a285d8
external_network_bridge = br-ex-2
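The per-agent configuration fragments above differ only in three values, so generating one file per floating IP pool can be scripted. A sketch, reusing the example external-network UUIDs from the text; the output file paths and bridge names are illustrative:

```shell
# Sketch: one l3_agent ini per external network. Only agent 0 handles
# internal-only routers, matching the rule described above. The UUIDs
# come from the examples in the text; paths and bridge names are
# illustrative.
i=0
for net_id in 2118b11c-011e-4fa5-a6f1-2ca34d372c35 \
              e828e54c-850a-4e74-80a8-8b79c6a285d8; do
  if [ "$i" -eq 0 ]; then handle=True; else handle=False; fi
  cat > "/tmp/l3_agent_pool_$i.ini" <<EOF
[DEFAULT]
handle_internal_only_routers = $handle
gateway_external_network_id = $net_id
external_network_bridge = br-ex-$i
EOF
  i=$((i+1))
done
```

Each generated file would then be passed to its own neutron-l3-agent instance with --config-file, alongside the common neutron.conf.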
L3 metering agent

You can run an L3 metering agent that enables layer 3 traffic metering. In the general case, the metering agent should be launched on all nodes that run the L3 agent:

neutron-metering-agent --config-file <neutron config> --config-file <l3 metering config>

A driver must be configured that matches the plug-in running on the service. The driver is used to add metering to the routing interface.
Basic settings

Open vSwitch
  interface_driver ($NEUTRON_CONF_DIR/metering_agent.ini):
    neutron.agent.linux.interface.OVSInterfaceDriver

Linux Bridge
  interface_driver ($NEUTRON_CONF_DIR/metering_agent.ini):
    neutron.agent.linux.interface.BridgeInterfaceDriver
Namespace

The metering agent and the L3 agent must have the same configuration for the network namespace setting. If the Linux installation does not support network namespaces, you must disable them in the L3 metering configuration file (the default value of use_namespaces is True):

use_namespaces = False
L3 metering driver

A driver that implements the metering abstraction must be configured. Currently there is only one implementation, which is based on iptables:

driver = neutron.services.metering.drivers.iptables.iptables_driver.IptablesMeteringDriver
L3 metering service driver

To enable L3 metering, you must set the following parameter in neutron.conf on the host that runs neutron-server:

service_plugins = neutron.services.metering.metering_plugin.MeteringPlugin
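The metering settings span two files: the service plug-in goes into neutron.conf on the neutron-server host, and the driver settings go into the metering agent's configuration on each L3 node. A sketch of both, written to /tmp for illustration and assuming the Open vSwitch interface driver:

```shell
# Sketch: the metering-related settings in both files, using
# illustrative paths. Open vSwitch is assumed for interface_driver.

# neutron.conf fragment (on the neutron-server host)
cat > /tmp/neutron.conf.metering <<'EOF'
[DEFAULT]
service_plugins = neutron.services.metering.metering_plugin.MeteringPlugin
EOF

# metering_agent.ini (on every node that runs the L3 agent)
cat > /tmp/metering_agent.ini <<'EOF'
[DEFAULT]
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
driver = neutron.services.metering.drivers.iptables.iptables_driver.IptablesMeteringDriver
EOF
```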
Limitations

No equivalent for nova-network --multi_host flag: Nova-network has a model where the L3, NAT, and DHCP processing happens on the compute node itself, rather than on a dedicated networking node. OpenStack Networking now supports running multiple l3-agents and dhcp-agents with load split across those agents, but the tight coupling of that scheduling with the location of the VM is not supported in Grizzly. The Havana release is expected to include an exact replacement for the --multi_host flag in nova-network.

Linux network namespaces required on nodes running neutron-l3-agent or neutron-dhcp-agent if overlapping IPs are in use: To support overlapping IP addresses, the OpenStack Networking DHCP and L3 agents use Linux network namespaces by default. The hosts running these processes must support network namespaces. To support network namespaces, the following are required:

- Linux kernel 2.6.24 or newer (with CONFIG_NET_NS=y in the kernel configuration)
- iproute2 utilities ('ip' command) version 3.1.0 (aka 20111117) or newer

To check whether your host supports namespaces, try running the following as root:

# ip netns add test-ns
# ip netns exec test-ns ifconfig

If the preceding commands do not produce errors, your platform is likely sufficient to use the dhcp-agent or l3-agent with namespaces. In our experience, Ubuntu 12.04 or later supports namespaces, as does Fedora 17 and newer, but some older RHEL platforms do not by default. It might be possible to upgrade the iproute2 package on a platform that does not support namespaces by default.

If you need to disable namespaces, make sure that the neutron.conf used by neutron-server has the following setting:

allow_overlapping_ips=False

and that the dhcp_agent.ini and l3_agent.ini have the following setting:

use_namespaces=False

If the host does not support namespaces, then the neutron-l3-agent and neutron-dhcp-agent should be run on different hosts.
This is because there is no isolation between the IP addresses created by the L3 agent and by the DHCP agent. By manipulating the routing, the user can ensure that these networks have access to one another. If you run both L3 and DHCP services on the same node, you should enable namespaces to avoid conflicts with routes:

use_namespaces=True

No IPv6 support for L3 agent: The neutron-l3-agent, used by many plug-ins to implement L3 forwarding, supports only IPv4 forwarding. Currently, no errors are reported if you configure IPv6 addresses through the API.

ZeroMQ support is experimental: Some agents, including neutron-dhcp-agent, neutron-openvswitch-agent, and neutron-linuxbridge-agent, use RPC to communicate. ZeroMQ is an available option in the configuration file, but it has not been tested and should be considered experimental. In particular, issues might occur with ZeroMQ and the DHCP agent.

MetaPlugin is experimental: This release includes a MetaPlugin that is intended to support multiple plug-ins at the same time for different API requests, based on the content of those API requests. The core team has not thoroughly reviewed or tested this functionality. Consider this functionality to be experimental until further validation is performed.