Networking Learn OpenStack Networking concepts, architecture, and basic and advanced neutron and nova command-line interface (CLI) commands.
Introduction to Networking The Networking service, code-named Neutron, provides an API that lets you define network connectivity and addressing in the cloud. The Networking service enables operators to leverage different networking technologies to power their cloud networking. The Networking service also provides an API to configure and manage a variety of network services ranging from L3 forwarding and NAT to load balancing, edge firewalls, and IPsec VPN. For a detailed description of the Networking API abstractions and their attributes, see the OpenStack Networking API v2.0 Reference.
Networking API Networking is a virtual network service that provides a powerful API to define the network connectivity and IP addressing used by devices from other services, such as Compute. The Compute API has a virtual server abstraction to describe computing resources. Similarly, the Networking API has virtual network, subnet, and port abstractions to describe networking resources.
Networking resources
Resource Description
Network An isolated L2 segment, analogous to VLAN in the physical networking world.
Subnet A block of v4 or v6 IP addresses and associated configuration state.
Port A connection point for attaching a single device, such as the NIC of a virtual server, to a virtual network. Also describes the associated network configuration, such as the MAC and IP addresses to be used on that port.
You can configure rich network topologies by creating and configuring networks and subnets, and then instructing other OpenStack services like Compute to attach virtual devices to ports on these networks. In particular, Networking supports each tenant having multiple private networks, and allows tenants to choose their own IP addressing scheme (even if those IP addresses overlap with those used by other tenants). The Networking service: Enables advanced cloud networking use cases, such as building multi-tiered web applications and allowing applications to be migrated to the cloud without changing IP addresses. Offers flexibility for the cloud administrator to customize network offerings. Enables developers to extend the Networking API. Over time, the extended functionality becomes part of the core Networking API.
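As a minimal illustration of these abstractions, the following neutron commands create a network, attach a subnet to it, and create a port on that network. The names and addressing are illustrative only and are not taken from any particular deployment:
$ neutron net-create web-tier
$ neutron subnet-create web-tier 192.168.10.0/24 --name web-subnet
$ neutron port-create web-tier
The port receives a MAC address and an IP address from web-subnet and can then be attached to a virtual device, for example by booting a Compute instance with the --nic port-id option.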
Plug-in architecture The original Compute network implementation assumed a basic model of isolation through Linux VLANs and IP tables. Networking introduces the concept of a plug-in, which is a back-end implementation of the Networking API. A plug-in can use a variety of technologies to implement the logical API requests. Some Networking plug-ins might use basic Linux VLANs and IP tables, while others might use more advanced technologies, such as L2-in-L3 tunneling or OpenFlow, to provide similar benefits.
Available networking plug-ins
Plug-in Documentation
Big Switch Plug-in (Floodlight REST Proxy) Documentation included in this guide and http://www.openflowhub.org/display/floodlightcontroller/Neutron+REST+Proxy+Plugin
Brocade Plug-in Documentation included in this guide
Cisco http://wiki.openstack.org/cisco-neutron
Cloudbase Hyper-V Plug-in http://www.cloudbase.it/quantum-hyper-v-plugin/
Linux Bridge Plug-in http://wiki.openstack.org/Neutron-Linux-Bridge-Plugin
Mellanox Plug-in https://wiki.openstack.org/wiki/Mellanox-Neutron/
Midonet Plug-in http://www.midokura.com/
ML2 (Modular Layer 2) Plug-in https://wiki.openstack.org/wiki/Neutron/ML2
NEC OpenFlow Plug-in http://wiki.openstack.org/Quantum-NEC-OpenFlow-Plugin
Nicira NVP Plug-in Documentation included in this guide as well as in NVP Product Overview, NVP Product Support
Open vSwitch Plug-in Documentation included in this guide.
PLUMgrid Documentation included in this guide as well as in https://wiki.openstack.org/wiki/PLUMgrid-Neutron
Ryu Plug-in Documentation included in this guide as well as in https://github.com/osrg/ryu/wiki/OpenStack
Plug-ins can have different properties for hardware requirements, features, performance, scale, or operator tools. Because Networking supports a large number of plug-ins, the cloud administrator can weigh options to decide on the right networking technology for the deployment. In the Havana release, OpenStack Networking provides the Modular Layer 2 (ML2) plug-in that can concurrently use multiple layer 2 networking technologies that are found in real-world data centers. It currently works with the existing Open vSwitch, Linux Bridge, and Hyper-V L2 agents. The ML2 framework simplifies the addition of support for new L2 technologies and reduces the effort that is required to add and maintain them compared to monolithic plug-ins (an example ml2_conf.ini snippet follows the compatibility table below). Plug-in deprecation notice: The Open vSwitch and Linux Bridge plug-ins are deprecated in the Havana release and will be removed in the Icehouse release. All features have been ported to the ML2 plug-in in the form of mechanism drivers. ML2 currently provides Linux Bridge, Open vSwitch and Hyper-V mechanism drivers. Not all Networking plug-ins are compatible with all possible Compute drivers:
Plug-in compatibility with Compute drivers
Plug-in Libvirt (KVM/QEMU) XenServer VMware Hyper-V Bare-metal PowerVM
Big Switch / Floodlight Yes
Brocade Yes
Cisco Yes
Cloudbase Hyper-V Yes
Linux Bridge Yes
Mellanox Yes
Midonet Yes
ML2 Yes Yes
NEC OpenFlow Yes
Nicira NVP Yes Yes Yes
Open vSwitch Yes
Plumgrid Yes Yes
Ryu Yes
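The following ml2_conf.ini fragment is a minimal sketch of how ML2 can combine several L2 mechanisms; the driver list and tunnel ID range are illustrative assumptions, so check the Networking configuration options in the Configuration Reference for the authoritative option list:
[ml2]
type_drivers = flat,vlan,gre
tenant_network_types = gre
mechanism_drivers = openvswitch,linuxbridge
[ml2_type_gre]
tunnel_id_ranges = 1:1000
With a configuration like this, ML2 delegates port binding to whichever mechanism driver (Open vSwitch or Linux Bridge) manages the agent on a given host.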
Plug-in configurations For configuration options, see Networking configuration options in the Configuration Reference. These sections explain how to configure specific plug-ins.
Configure Big Switch, Floodlight REST Proxy plug-in To use the REST Proxy plug-in with OpenStack Networking Edit /etc/neutron/neutron.conf and set: core_plugin = neutron.plugins.bigswitch.plugin.NeutronRestProxyV2 Edit the plug-in configuration file, /etc/neutron/plugins/bigswitch/restproxy.ini, and specify a comma-separated list of controller_ip:port pairs: server = <controller-ip>:<port> For database configuration, see Install Networking Services in any of the Installation Guides in the OpenStack Documentation index. (The link defaults to the Ubuntu version.) To apply the new settings, restart neutron-server: # sudo service neutron-server restart
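For example, a deployment with two Floodlight controllers listening on port 80 might set the following in restproxy.ini; the addresses are illustrative only:
server = 10.10.10.1:80,10.10.10.2:80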
Configure Brocade plug-in To use the Brocade plug-in with OpenStack Networking Install the Brocade modified Python netconf client (ncclient) library which is available at https://github.com/brocade/ncclient: $ git clone https://www.github.com/brocade/ncclient $ cd ncclient; sudo python ./setup.py install Edit the /etc/neutron/neutron.conf file and set the following option: core_plugin = neutron.plugins.brocade.NeutronPlugin.BrocadePluginV2 Edit the /etc/neutron/plugins/brocade/brocade.ini configuration file for the Brocade plug-in and specify the admin user name, password, and IP address of the Brocade switch: [SWITCH] username = admin password = password address = switch mgmt ip address ostype = NOS For database configuration, see Install Networking Services in any of the Installation Guides in the OpenStack Documentation index. (The link defaults to the Ubuntu version.) To apply the new settings, restart the neutron-server service: # service neutron-server restart
Configure OVS plug-in If you use the Open vSwitch (OVS) plug-in in a deployment with multiple hosts, you must use either tunneling or VLANs to isolate traffic from multiple networks. Tunneling is easier to deploy because it does not require configuring VLANs on network switches. This procedure uses tunneling: To configure OpenStack Networking to use the OVS plug-in Edit /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini to specify these values (for database configuration, see Install Networking Services in Installation Guide): enable_tunneling=True tenant_network_type=gre tunnel_id_ranges=1:1000 # only required for nodes running agents local_ip=<data-net-IP-address-of-node> If you use the neutron DHCP agent, add these lines to the /etc/neutron/dhcp_agent.ini file: dnsmasq_config_file=/etc/neutron/dnsmasq-neutron.conf Create /etc/neutron/dnsmasq-neutron.conf, and add these values to lower the MTU size on instances and prevent packet fragmentation over the GRE tunnel: dhcp-option-force=26,1400 After performing that change on the node running neutron-server, restart neutron-server to apply the new settings: # sudo service neutron-server restart
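As a concrete sketch, on a node whose data-network address is 172.16.0.11 (an illustrative address), the tunneling options in ovs_neutron_plugin.ini, which typically live in the [ovs] section, might look like the following; only local_ip changes from node to node:
[ovs]
enable_tunneling = True
tenant_network_type = gre
tunnel_id_ranges = 1:1000
local_ip = 172.16.0.11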
Configure Nicira NVP plug-in To configure OpenStack Networking to use the NVP plug-in While the instructions in this section refer to the Nicira NVP platform, they also apply to VMware NSX. Install the NVP plug-in, as follows: # sudo apt-get install neutron-plugin-nicira Edit /etc/neutron/neutron.conf and set: core_plugin = neutron.plugins.nicira.NeutronPlugin.NvpPluginV2 Example neutron.conf file for NVP: core_plugin = neutron.plugins.nicira.NeutronPlugin.NvpPluginV2 rabbit_host = 192.168.203.10 allow_overlapping_ips = True To configure the NVP controller cluster for the OpenStack Networking service, locate the [DEFAULT] section in the /etc/neutron/plugins/nicira/nvp.ini file, and add the following entries (for database configuration, see Install Networking Services in Installation Guide): A set of parameters is required to establish and configure the connection with the controller cluster. These parameters include the NVP API endpoints, access credentials, and settings for HTTP redirects and retries in case of connection failures: nvp_user = <admin user name> nvp_password = <password for nvp_user> req_timeout = <timeout in seconds for NVP requests> # default 30 seconds http_timeout = <timeout in seconds for single HTTP request> # default 10 seconds retries = <number of HTTP request retries> # default 2 redirects = <maximum allowed redirects for a HTTP request> # default 3 nvp_controllers = <comma separated list of API endpoints> To ensure correct operation, nvp_user should be a user with administrator credentials on the NVP platform. A controller API endpoint consists of the controller's IP address and port; if the port is omitted, port 443 is used. If multiple API endpoints are specified, it is up to the user to ensure that all these endpoints belong to the same controller cluster; the OpenStack Networking Nicira NVP plug-in does not perform this check, and results might be unpredictable. When multiple API endpoints are specified, the plug-in load balances requests across the API endpoints. The UUID of the NVP Transport Zone that should be used by default when a tenant creates a network can be retrieved from the Transport Zones page in the NVP Manager: default_tz_uuid = <uuid_of_the_transport_zone> default_l3_gw_service_uuid = <uuid_of_the_gateway_service> Ubuntu packaging currently does not update the neutron init script to point to the NVP configuration file. Instead, you must manually update /etc/default/neutron-server with the following: NEUTRON_PLUGIN_CONFIG = /etc/neutron/plugins/nicira/nvp.ini To apply the new settings, restart neutron-server: # sudo service neutron-server restart Example nvp.ini file: [DEFAULT] default_tz_uuid = d3afb164-b263-4aaa-a3e4-48e0e09bb33c default_l3_gw_service_uuid=5c8622cc-240a-40a1-9693-e6a5fca4e3cf nvp_user=admin nvp_password=changeme nvp_controllers=10.127.0.100,10.127.0.200:8888 To debug nvp.ini configuration issues, run this command from the host that runs neutron-server: # check-nvp-config <path/to/nvp.ini> This command tests whether neutron-server can log into all of the NVP Controllers and the SQL server, and whether all UUID values are correct.
Loadbalancer-as-a-Service and Firewall-as-a-Service The NVP LBaaS and FWaaS services use the standard OpenStack API, except that they require routed-insertion extension support. The main differences between the NVP implementation and the community reference implementation of these services are: The NVP LBaaS and FWaaS plug-ins require the routed-insertion extension, which adds the router_id attribute to the VIP (Virtual IP address) and firewall resources and binds these services to a logical router. The community reference implementation of LBaaS only supports a one-arm model, which restricts the VIP to be on the same subnet as the back-end servers. The NVP LBaaS plug-in only supports a two-arm model for north-south traffic, which means that the VIP can only be created on the external (physical) network. The community reference implementation of FWaaS applies firewall rules to all logical routers in a tenant, while the NVP FWaaS plug-in applies firewall rules only to one logical router, according to the router_id of the firewall entity. To configure Loadbalancer-as-a-Service and Firewall-as-a-Service with NVP: Edit the /etc/neutron/neutron.conf file: core_plugin = neutron.plugins.nicira.NeutronServicePlugin.NvpAdvancedPlugin # Note: comment out service_plugins. LBaaS & FWaaS is supported by core_plugin NvpAdvancedPlugin # service_plugins = Edit the /etc/neutron/plugins/nicira/nvp.ini file: In addition to the original NVP configuration, the default_l3_gw_service_uuid is required for the NVP Advanced Plugin and a vcns section must be added, as shown below. [DEFAULT] nvp_password = admin nvp_user = admin nvp_controllers = 10.37.1.137:443 default_l3_gw_service_uuid = aae63e9b-2e4e-4efe-81a1-92cf32e308bf default_tz_uuid = 2702f27a-869a-49d1-8781-09331a0f6b9e [vcns] # VSM management URL manager_uri = https://10.24.106.219 # VSM admin user name user = admin # VSM admin password password = default # UUID of a logical switch on NVP which has physical network connectivity (currently using bridge transport type) external_network = f2c023cf-76e2-4625-869b-d0dabcfcc638 # ID of deployment_container on VSM. Optional, if not specified, a default global deployment container will be used # deployment_container_id = # task_status_check_interval configures status check interval for vCNS asynchronous API. Default is 2000 msec. # task_status_check_interval =
Configure PLUMgrid plug-in To use the PLUMgrid plug-in with OpenStack Networking Edit /etc/neutron/neutron.conf and set: core_plugin = neutron.plugins.plumgrid.plumgrid_plugin.plumgrid_plugin.NeutronPluginPLUMgridV2 Edit /etc/neutron/plugins/plumgrid/plumgrid.ini under the [PLUMgridDirector] section, and specify the IP address, port, admin user name, and password of the PLUMgrid Director: [PLUMgridDirector] director_server = "PLUMgrid-director-ip-address" director_server_port = "PLUMgrid-director-port" username = "PLUMgrid-director-admin-username" password = "PLUMgrid-director-admin-password" For database configuration, see Install Networking Services in Installation Guide. To apply the settings, restart neutron-server: # sudo service neutron-server restart
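For example, with illustrative values only, the [PLUMgridDirector] section might read:
[PLUMgridDirector]
director_server = "192.168.100.10"
director_server_port = "443"
username = "plumgrid"
password = "plumgrid-password"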
Configure Ryu plug-in To use the Ryu plug-in with OpenStack Networking Install the Ryu plug-in, as follows: # sudo apt-get install neutron-plugin-ryu Edit /etc/neutron/neutron.conf and set: core_plugin = neutron.plugins.ryu.ryu_neutron_plugin.RyuNeutronPluginV2 Edit /etc/neutron/plugins/ryu/ryu.ini (for database configuration, see Install Networking Services in Installation Guide), and update the following in the [ovs] section for the ryu-neutron-agent: The openflow_rest_api option tells the agent where Ryu is listening for its REST API. Substitute ip-address and port-no based on your Ryu setup. The ovsdb_interface option is used by Ryu to access the ovsdb-server. Substitute eth0 based on your setup. The IP address is derived from the interface name. If you want to change this value irrespective of the interface name, you can specify ovsdb_ip. If you use a non-default port for ovsdb-server, you can specify ovsdb_port. The tunnel_interface option tells what IP address is used for tunneling (if tunneling is not used, this value is ignored). The IP address is derived from the network interface name. You can use the same configuration file for many Compute nodes by using a network interface name with a different IP address: openflow_rest_api = <ip-address>:<port-no> ovsdb_interface = <eth0> tunnel_interface = <eth0> To apply the new settings, restart neutron-server: # sudo service neutron-server restart
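As an illustration, with hypothetical addresses and interface names, a node whose Ryu controller listens at 192.168.100.50:8080, uses eth0 to reach ovsdb-server, and uses eth1 for tunnel traffic would set:
openflow_rest_api = 192.168.100.50:8080
ovsdb_interface = eth0
tunnel_interface = eth1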
Configure neutron agents Plug-ins typically have requirements for particular software that must be run on each node that handles data packets. This includes any node that runs nova-compute and nodes that run dedicated OpenStack Networking service agents, such as neutron-dhcp-agent, neutron-l3-agent, or neutron-lbaas-agent (see below for more information about individual service agents). A data-forwarding node typically has a network interface with an IP address on the "management network" and another interface on the "data network". This section shows you how to install and configure a subset of the available plug-ins, which might include the installation of switching software (for example, Open vSwitch) as well as agents that are used to communicate with the neutron-server process running elsewhere in the data center.
Configure data-forwarding nodes
Node set up: OVS plug-in This section also applies to the ML2 plug-in when Open vSwitch is used as a mechanism driver. If you use the Open vSwitch plug-in, you must install Open vSwitch and the neutron-plugin-openvswitch-agent agent on each data-forwarding node: Do not install the openvswitch-brcompat package because it breaks the security groups functionality. To set up each node for the OVS plug-in Install the OVS agent package (this pulls in the Open vSwitch software as a dependency): # sudo apt-get install neutron-plugin-openvswitch-agent On each node that runs the neutron-plugin-openvswitch-agent: Replicate the ovs_neutron_plugin.ini file created in the first step onto the node. If using tunneling, the node's ovs_neutron_plugin.ini file must also be updated with the node's IP address on the data network, using the local_ip value. Restart Open vSwitch to properly load the kernel module: # sudo service openvswitch-switch restart Restart the agent: # sudo service neutron-plugin-openvswitch-agent restart All nodes that run neutron-plugin-openvswitch-agent must have an OVS br-int bridge. To create the bridge, run: # sudo ovs-vsctl add-br br-int
Node set up: Nicira NVP plug-in If you use the Nicira NVP plug-in, you must also install Open vSwitch on each data-forwarding node. However, you do not need to install an additional agent on each node. It is critical that you run an Open vSwitch version that is compatible with the current version of the NVP Controller software. Do not use the Open vSwitch version that is installed by default on Ubuntu. Instead, use the Open vSwitch version that is provided on the Nicira support portal for your NVP Controller version. To set up each node for the Nicira NVP plug-in Ensure that each data-forwarding node has an IP address on the "management network," and an IP address on the "data network" that is used for tunneling data traffic. For full details on configuring your forwarding node, see the NVP Administrator Guide. Use the NVP Administrator Guide to add the node as a "Hypervisor" by using the NVP Manager GUI. Even if your forwarding node has no VMs and is only used for services agents like neutron-dhcp-agent or neutron-lbaas-agent, it should still be added to NVP as a Hypervisor. After following the NVP Administrator Guide, use the page for this Hypervisor in the NVP Manager GUI to confirm that the node is properly connected to the NVP Controller Cluster and that the NVP Controller Cluster can see the br-int integration bridge.
Node set up: Ryu plug-in If you use the Ryu plug-in, you must install both Open vSwitch and Ryu, in addition to the Ryu agent package: To set up each node for the Ryu plug-in Install Ryu (there isn't currently a Ryu package for Ubuntu): # sudo pip install ryu Install the Ryu agent and Open vSwitch packages: # sudo apt-get install neutron-plugin-ryu-agent openvswitch-switch python-openvswitch openvswitch-datapath-dkms Replicate the ovs_ryu_plugin.ini and neutron.conf files created in the above step onto all nodes that run neutron-plugin-ryu-agent. Restart Open vSwitch to properly load the kernel module: # sudo service openvswitch-switch restart Restart the agent: # sudo service neutron-plugin-ryu-agent restart All nodes that run neutron-plugin-ryu-agent must also have an OVS bridge named "br-int". To create the bridge, run: # sudo ovs-vsctl add-br br-int
Configure DHCP agent The DHCP service agent is compatible with all existing plug-ins and is required for all deployments where VMs should automatically receive IP addresses through DHCP. To install and configure the DHCP agent You must configure the host running the neutron-dhcp-agent as a "data forwarding node" according to the requirements for your plug-in (see the node set up instructions above). Install the DHCP agent: # sudo apt-get install neutron-dhcp-agent Finally, update any options in the /etc/neutron/dhcp_agent.ini file that depend on the plug-in in use (see the following sub-sections). If you reboot a node that runs the DHCP agent, you must run the neutron-ovs-cleanup command before the neutron-dhcp-agent service starts. On Red Hat-based systems, the neutron-ovs-cleanup service runs the neutron-ovs-cleanup command automatically. However, on Debian-based systems such as Ubuntu, you must manually run this command or write your own system script that runs on boot before the neutron-dhcp-agent service starts.
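On Ubuntu, a minimal manual recovery sequence after a reboot might therefore look like the following sketch; a production deployment would normally wrap this in a boot-time script instead:
# neutron-ovs-cleanup
# sudo service neutron-dhcp-agent restart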
DHCP agent setup: OVS plug-in These DHCP agent options are required in the /etc/neutron/dhcp_agent.ini file for the OVS plug-in: [DEFAULT] ovs_use_veth = True enable_isolated_metadata = True use_namespaces = True interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
DHCP agent setup: NVP plug-in These DHCP agent options are required in the /etc/neutron/dhcp_agent.ini file for the NVP plug-in: [DEFAULT] ovs_use_veth = True enable_metadata_network = True enable_isolated_metadata = True use_namespaces = True interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
DHCP agent setup: Ryu plug-in These DHCP agent options are required in the /etc/neutron/dhcp_agent.ini file for the Ryu plug-in: [DEFAULT] ovs_use_veth = True use_namespaces = True interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
Configure L3 agent The OpenStack Networking Service has a widely used API extension to allow administrators and tenants to create routers to interconnect L2 networks, and floating IPs to make ports on private networks publicly accessible. Many plug-ins rely on the L3 service agent to implement the L3 functionality. However, the following plug-ins already have built-in L3 capabilities: Nicira NVP plug-in Big Switch/Floodlight plug-in, which supports both the open source Floodlight controller and the proprietary Big Switch controller. Only the proprietary Big Switch controller implements L3 functionality. When using Floodlight as your OpenFlow controller, L3 functionality is not available. PLUMgrid plug-in Do not configure or use neutron-l3-agent if you use one of these plug-ins. To install the L3 agent for all other plug-ins Install the neutron-l3-agent binary on the network node: # sudo apt-get install neutron-l3-agent To uplink the node that runs neutron-l3-agent to the external network, create a bridge named "br-ex" and attach the NIC for the external network to this bridge. For example, with Open vSwitch and NIC eth1 connected to the external network, run: # sudo ovs-vsctl add-br br-ex # sudo ovs-vsctl add-port br-ex eth1 Do not manually configure an IP address on the NIC connected to the external network for the node running neutron-l3-agent. Rather, you must have a range of IP addresses from the external network that can be used by OpenStack Networking for routers that uplink to the external network. This range must be large enough to have an IP address for each router in the deployment, as well as each floating IP. The neutron-l3-agent uses the Linux IP stack and iptables to perform L3 forwarding and NAT. In order to support multiple routers with potentially overlapping IP addresses, neutron-l3-agent defaults to using Linux network namespaces to provide isolated forwarding contexts. As a result, the IP addresses of routers are not visible simply by running ip addr list or ifconfig on the node. Similarly, you cannot directly ping fixed IPs. To do either of these things, you must run the command within a particular router's network namespace. The namespace has the name "qrouter-<UUID of the router>". These example commands run in the router namespace with UUID 47af3868-0fa8-4447-85f6-1304de32153b: # ip netns exec qrouter-47af3868-0fa8-4447-85f6-1304de32153b ip addr list # ip netns exec qrouter-47af3868-0fa8-4447-85f6-1304de32153b ping <fixed-ip> If you reboot a node that runs the L3 agent, you must run the neutron-ovs-cleanup command before the neutron-l3-agent service starts. On Red Hat-based systems, the neutron-ovs-cleanup service runs the neutron-ovs-cleanup command automatically. However, on Debian-based systems such as Ubuntu, you must manually run this command or write your own system script that runs on boot before the neutron-l3-agent service starts.
Configure LBaaS agent Starting with the Havana release, the Neutron Load-Balancer-as-a-Service (LBaaS) supports an agent scheduling mechanism, so several neutron-lbaas-agents can be run on several nodes (one per node). To install the LBaaS agent and configure the node Install the agent by running: # sudo apt-get install neutron-lbaas-agent If you are using: An OVS-based plug-in (OVS, NVP, Ryu, NEC, BigSwitch/Floodlight), you must set: interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver A plug-in that uses LinuxBridge, you must set: interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver To use the reference implementation, you must also set: device_driver = neutron.services.loadbalancer.drivers.haproxy.namespace_driver.HaproxyNSDriver Set this parameter in the neutron.conf file on the host that runs neutron-server: service_plugins = neutron.services.loadbalancer.plugin.LoadBalancerPlugin
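Assuming an OVS-based plug-in and the reference HAProxy implementation, the LBaaS agent configuration file (typically /etc/neutron/lbaas_agent.ini) might therefore contain a fragment like this sketch:
[DEFAULT]
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
device_driver = neutron.services.loadbalancer.drivers.haproxy.namespace_driver.HaproxyNSDriver
After editing the configuration, restart the agent: # sudo service neutron-lbaas-agent restart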
Configure FWaaS agent The Firewall-as-a-Service (FWaaS) agent is co-located with the Neutron L3 agent and does not require any additional packages apart from those required for the Neutron L3 agent. You can enable the FWaaS functionality by setting the configuration, as follows. To configure FWaaS service and agent Set this parameter in the neutron.conf file on the host that runs neutron-server: service_plugins = neutron.services.firewall.fwaas_plugin.FirewallPlugin To use the reference implementation, you must also add a FWaaS driver configuration to the neutron.conf file on every node where the Neutron L3 agent is deployed: [fwaas] driver = neutron.services.firewall.drivers.linux.iptables_fwaas.IptablesFwaasDriver enabled = True
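After you restart neutron-server and the L3 agent, you can exercise FWaaS from the CLI. For example, the following illustrative sketch (rule, policy, and firewall names are assumptions, not taken from the text above) allows inbound HTTP through a tenant firewall:
$ neutron firewall-rule-create --protocol tcp --destination-port 80 --action allow
$ neutron firewall-policy-create --firewall-rules "<firewall-rule-id>" policy1
$ neutron firewall-create policy1 --name firewall1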
Networking architecture Before you deploy Networking, it helps to understand the Networking components and how these components interact with each other and other OpenStack services.
Overview Networking is a standalone service, just like other OpenStack services such as Compute, Image service, Identity service, or the Dashboard. Like those services, a deployment of Networking often involves deploying several processes on a variety of hosts. The Networking server uses the neutron-server daemon to expose the Networking API and to pass user requests to the configured Networking plug-in for additional processing. Typically, the plug-in requires access to a database for persistent storage (also similar to other OpenStack services). If your deployment uses a controller host to run centralized Compute components, you can deploy the Networking server on that same host. However, Networking is entirely standalone and can be deployed on its own host as well. Depending on your deployment, Networking can also include the following agents.
Networking agents
Agent Description
plug-in agent (neutron-*-agent) Runs on each hypervisor to perform local vswitch configuration. The agent that runs depends on the plug-in that you use, and some plug-ins do not require an agent.
dhcp agent (neutron-dhcp-agent) Provides DHCP services to tenant networks. Some plug-ins use this agent.
l3 agent (neutron-l3-agent) Provides L3/NAT forwarding to provide external network access for VMs on tenant networks. Some plug-ins use this agent.
l3 metering agent (neutron-metering-agent) Provides L3 traffic measurements for tenant networks.
These agents interact with the main neutron process through RPC (for example, rabbitmq or qpid) or through the standard Networking API. Further: Networking relies on the Identity service (Keystone) for the authentication and authorization of all API requests. Compute (Nova) interacts with Networking through calls to its standard API.  As part of creating a VM, the nova-compute service communicates with the Networking API to plug each virtual NIC on the VM into a particular network.  The Dashboard (Horizon) integrates with the Networking API, enabling administrators and tenant users to create and manage network services through a web-based GUI.
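For nova-compute to make these Networking API calls, the Compute hosts must be pointed at Neutron. In the Havana release this is typically done with nova.conf options along the lines of the following sketch; the host name and credentials are illustrative:
network_api_class = nova.network.neutronv2.api.API
neutron_url = http://controller:9696
neutron_auth_strategy = keystone
neutron_admin_tenant_name = service
neutron_admin_username = neutron
neutron_admin_password = NEUTRON_PASS
neutron_admin_auth_url = http://controller:35357/v2.0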
Place services on physical hosts Like other OpenStack services, Networking enables cloud administrators to run one or more services on one or more physical devices. At one extreme, the cloud administrator can run all service daemons on a single physical host for evaluation purposes. Alternatively the cloud administrator can run each service on its own physical host and, in some cases, can replicate services across multiple hosts for redundancy. For more information, see the OpenStack Configuration Reference. A standard architecture includes a cloud controller host, a network gateway host, and a set of hypervisors that run virtual machines. The cloud controller and network gateway can be on the same host. However, if you expect VMs to send significant traffic to or from the Internet, a dedicated network gateway host helps avoid CPU contention between the neutron-l3-agent and other OpenStack services that forward packets.
Network connectivity for physical hosts A standard Networking set up has one or more of the following distinct physical data center networks.
General distinct physical data center networks
Network Description
Management network Provides internal communication between OpenStack Components. IP addresses on this network should be reachable only within the data center.
Data network Provides VM data communication within the cloud deployment. The IP addressing requirements of this network depend on the Networking plug-in that is used.
External network Provides VMs with Internet access in some deployment scenarios. Anyone on the Internet can reach IP addresses on this network.
API network Exposes all OpenStack APIs, including the Networking API, to tenants. IP addresses on this network should be reachable by anyone on the Internet. The API network might be the same as the external network, because it is possible to create an external-network subnet whose allocated IP range uses only a portion of the IP block.
Use Networking You can start and stop OpenStack Networking services using the service command. For example: # sudo service neutron-server stop # sudo service neutron-server status # sudo service neutron-server start # sudo service neutron-server restart Log files are in the /var/log/neutron directory. Configuration files are in the /etc/neutron directory. You can use Networking in the following ways: Expose the Networking API to cloud tenants, which enables them to build rich network topologies. Have the cloud administrator, or an automated administrative tool, create network connectivity on behalf of tenants. A tenant or cloud administrator can both perform the following procedures.
Core Networking API features After you install and run Networking, tenants and administrators can perform create-read-update-delete (CRUD) API networking operations by using the Networking API directly or the neutron command-line interface (CLI). The neutron CLI is a wrapper around the Networking API. Every Networking API call has a corresponding neutron command. The CLI includes a number of options. For details, refer to the OpenStack End User Guide.
API abstractions The Networking v2.0 API provides control over both L2 network topologies and the IP addresses used on those networks (IP Address Management or IPAM). There is also an extension to cover basic L3 forwarding and NAT, which provides capabilities similar to nova-network.
API abstractions
Abstraction Description
Network An isolated L2 network segment (similar to a VLAN) that forms the basis for describing the L2 network topology available in a Networking deployment.
Subnet Associates a block of IP addresses and other network configuration, such as default gateways or DNS servers, with a Networking network. Each subnet represents an IPv4 or IPv6 address block and, if needed, each Networking network can have multiple subnets.
Port Represents an attachment port to a L2 Networking network. When a port is created on the network, by default it is allocated an available fixed IP address out of one of the designated subnets for each IP version (if one exists). When the port is destroyed, its allocated addresses return to the pool of available IPs on the subnet. Users of the Networking API can either choose a specific IP address from the block, or let Networking choose the first available IP address.
This table summarizes the attributes available for each networking abstraction. For information about API abstraction and operations, see the Networking API v2.0 Reference.
Network attributes
Attribute Type Default value Description
admin_state_up bool True Administrative state of the network. If specified as False (down), this network does not forward packets.
id uuid-str Generated UUID for this network.
name string None Human-readable name for this network; is not required to be unique.
shared bool False Specifies whether this network resource can be accessed by any tenant. The default policy setting restricts usage of this attribute to administrative users only.
status string N/A Indicates whether this network is currently operational.
subnets list(uuid-str) Empty list List of subnets associated with this network.
tenant_id uuid-str N/A Tenant owner of the network. Only administrative users can set the tenant identifier; this cannot be changed using authorization policies.
Subnet attributes
Attribute Type Default Value Description
allocation_pools list(dict) Every address in cidr, excluding gateway_ip (if configured). List of cidr sub-ranges that are available for dynamic allocation to ports. Syntax: [ { "start":"10.0.0.2", "end": "10.0.0.254"} ]
cidr string N/A IP range for this subnet, based on the IP version.
dns_nameservers list(string) Empty list List of DNS name servers used by hosts in this subnet.
enable_dhcp bool True Specifies whether DHCP is enabled for this subnet.
gateway_ip string First address in cidr Default gateway used by devices in this subnet.
host_routes list(dict) Empty list Routes that should be used by devices with IPs from this subnet (not including local subnet route).
id uuid-string Generated UUID representing this subnet.
ip_version int 4 IP version.
name string None Human-readable name for this subnet (might not be unique).
network_id uuid-string N/A Network with which this subnet is associated.
tenant_id uuid-string N/A Tenant owner of the subnet. Only administrative users can set the tenant identifier; this cannot be changed using authorization policies.
Port attributes
Attribute Type Default Value Description
admin_state_up bool True Administrative state of this port. If specified as False (down), this port does not forward packets.
device_id string None Identifies the device using this port (for example, a virtual server's ID).
device_owner string None Identifies the entity using this port (for example, a dhcp agent).
fixed_ips list(dict) Automatically allocated from pool Specifies IP addresses for this port; associates the port with the subnets containing the listed IP addresses.
id uuid-string Generated UUID for this port.
mac_address string Generated MAC address to use on this port.
name string None Human-readable name for this port (might not be unique).
network_id uuid-string N/A Network with which this port is associated.
status string N/A Indicates whether this port is currently operational.
tenant_id uuid-string N/A Tenant owner of the port. Only administrative users can set the tenant identifier; this cannot be changed using authorization policies.
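To see these port attributes in practice, you can create a port and let Networking pick an address from the subnet's allocation pools, or request a specific fixed IP; the network name and address below are illustrative:
$ neutron port-create net1
$ neutron port-create net1 --fixed-ip ip_address=10.0.0.10
$ neutron port-show <port-id>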
Basic Networking operations To learn about advanced capabilities that are available through the neutron command-line interface (CLI), read the networking section in the OpenStack End User Guide. This table shows example neutron commands that enable you to complete basic Networking operations:
Basic Networking operations
Operation Command
Creates a network. $ neutron net-create net1
Creates a subnet that is associated with net1. $ neutron subnet-create net1 10.0.0.0/24
Lists ports for a specified tenant. $ neutron port-list
Lists ports for a specified tenant and displays the id, fixed_ips, and device_owner columns. $ neutron port-list -c id -c fixed_ips -c device_owner
Shows information for a specified port. $ neutron port-show port-id
The device_owner field describes who owns the port. A port whose device_owner begins with: network is created by Networking. compute is created by Compute.
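For example, to see which entity owns each port, list just those columns: $ neutron port-list -c id -c device_owner -c device_id Typical device_owner values include network:dhcp, network:router_interface, and network:router_gateway for Networking-created ports, and a compute: prefix (for example, compute:nova) for Compute-created ports; exact values can vary by plug-in and release.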
Administrative operations The cloud administrator can run any neutron command on behalf of tenants by specifying an Identity tenant ID in the command, as follows: # neutron net-create --tenant-id=tenant-id network-name For example: # neutron net-create --tenant-id=5e4bbe24b67a4410bc4d9fae29ec394e net1 To view all tenant IDs in Identity, run the following command as an Identity Service admin user: # keystone tenant-list
Advanced Networking operations This table shows example neutron commands that enable you to complete advanced Networking operations:
Advanced Networking operations
Operation Command
Creates a network that all tenants can use. # neutron net-create --shared public-net
Creates a subnet with a specified gateway IP address. # neutron subnet-create --gateway 10.0.0.254 net1 10.0.0.0/24
Creates a subnet that has no gateway IP address. # neutron subnet-create --no-gateway net1 10.0.0.0/24
Creates a subnet with DHCP disabled. # neutron subnet-create net1 10.0.0.0/24 --enable_dhcp False
Creates a subnet with a specified set of host routes. # neutron subnet-create test-net1 40.0.0.0/24 --host_routes type=dict list=true destination=40.0.1.0/24,nexthop=40.0.0.2
Creates a subnet with a specified set of dns name servers. # neutron subnet-create test-net1 40.0.0.0/24 --dns_nameservers list=true 8.8.8.7 8.8.8.8
Displays all ports and IPs allocated on a network. # neutron port-list --network_id net-id
Use Compute with Networking
Basic Compute and Networking operations This table shows example neutron and nova commands that enable you to complete basic Compute and Networking operations:
Basic Compute and Networking operations
Action Command
Checks available networks. # neutron net-list
Boots a VM with a single NIC on a selected Networking network. # nova boot --image img --flavor flavor --nic net-id=net-id vm-name
Searches for ports with a device_id that matches the Compute instance UUID. # neutron port-list --device_id=vm-id
Searches for ports, but shows only the mac_address for the port. # neutron port-list --field mac_address --device_id=vm-id
Temporarily disables a port from sending traffic. # neutron port-update port-id --admin_state_up=False
The device_id can also be a logical router ID. Create and delete VMs When you boot a Compute VM, a port on the network that corresponds to the VM NIC is automatically created and associated with the default security group. You can configure security group rules to enable users to access the VM. When you delete a Compute VM, the underlying Networking port is automatically deleted.
Advanced VM creation operations This table shows example nova and neutron commands that enable you to complete advanced VM creation operations:
Advanced VM creation operations
Operation Command
Boots a VM with multiple NICs. # nova boot --image img --flavor flavor --nic net-id=net1-id --nic net-id=net2-id vm-name
Boots a VM with a specific IP address. First, create a Networking port with a specific IP address. Then, boot a VM specifying a port-id rather than a net-id. # neutron port-create --fixed-ip subnet_id=subnet-id,ip_address=IP net-id # nova boot --image img --flavor flavor --nic port-id=port-id vm-name
Boots a VM that connects to all networks that are accessible to the tenant who submits the request (without the --nic option). # nova boot --image img --flavor flavor vm-name
Networking does not currently support the v4-fixed-ip parameter of the --nic option for the nova command.
Enable ping and SSH on VMs (security groups) You must configure security group rules depending on the type of plug-in you are using. If you are using a plug-in that: Implements Networking security groups, you can configure security group rules directly by using neutron security-group-rule-create. This example enables ping and ssh access to your VMs. # neutron security-group-rule-create --protocol icmp \ --direction ingress default # neutron security-group-rule-create --protocol tcp --port-range-min 22 \ --port-range-max 22 --direction ingress default Does not implement Networking security groups, you can configure security group rules by using the nova secgroup-add-rule or euca-authorize command. These nova commands enable ping and ssh access to your VMs. # nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0 # nova secgroup-add-rule default tcp 22 22 0.0.0.0/0 If your plug-in implements Networking security groups, you can also leverage Compute security groups by setting security_group_api = neutron in the nova.conf file. After you set this option, all Compute security group commands are proxied to Networking.
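For example, the relevant nova.conf fragment is typically just the following; the NoopFirewallDriver line is a common companion setting, assumed here rather than stated above, so that Compute does not apply its own iptables rules on top of the Networking security groups:
security_group_api = neutron
firewall_driver = nova.virt.firewall.NoopFirewallDriver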
Authentication and authorization Networking uses the Identity Service as the default authentication service. When the Identity Service is enabled, users who submit requests to the Networking service must provide an authentication token in the X-Auth-Token request header. Users obtain this token by authenticating with the Identity Service endpoint. For more information about authentication with the Identity Service, see OpenStack Identity Service API v2.0 Reference. When the Identity Service is enabled, it is not mandatory to specify the tenant ID for resources in create requests because the tenant ID is derived from the authentication token. The default authorization settings only allow administrative users to create resources on behalf of a different tenant. Networking uses information received from Identity to authorize user requests. Networking handles two kinds of authorization policies: Operation-based policies specify access criteria for specific operations, possibly with fine-grained control over specific attributes; Resource-based policies specify whether access to a specific resource is granted or not, according to the permissions configured for the resource (currently available only for the network resource). The actual authorization policies enforced in Networking might vary from deployment to deployment. The policy engine reads entries from the policy.json file. The actual location of this file might vary from distribution to distribution. Entries can be updated while the system is running, and no service restart is required. Every time the policy file is updated, the policies are automatically reloaded. Currently the only way of updating such policies is to edit the policy file. In this section, the terms policy and rule refer to objects that are specified in the same way in the policy file. There are no syntax differences between a rule and a policy. A policy is something that is matched directly by the Networking policy engine. A rule is an element in a policy, which is evaluated. For instance, in create_subnet: [["admin_or_network_owner"]], create_subnet is a policy, and admin_or_network_owner is a rule. Policies are triggered by the Networking policy engine whenever one of them matches a Networking API operation or a specific attribute being used in a given operation. For instance, the create_subnet policy is triggered every time a POST /v2.0/subnets request is sent to the Networking server; on the other hand, create_network:shared is triggered every time the shared attribute is explicitly specified (and set to a value different from its default) in a POST /v2.0/networks request. It is also worth mentioning that policies can also be related to specific API extensions; for instance, extension:provider_network:set is triggered if the attributes defined by the Provider Network extensions are specified in an API request. An authorization policy can be composed of one or more rules. If more rules are specified, the policy evaluation succeeds if any of the rules evaluates successfully; if an API operation matches multiple policies, then all the policies must evaluate successfully. Also, authorization rules are recursive. Once a rule is matched, the rule(s) can be resolved to another rule, until a terminal rule is reached. The Networking policy engine currently defines the following kinds of terminal rules: Role-based rules evaluate successfully if the user who submits the request has the specified role.
For instance "role:admin" is successful if the user who submits the request is an administrator. Field-based rules evaluate successfully if a field of the resource specified in the current request matches a specific value. For instance "field:networks:shared=True" is successful if the shared attribute of the network resource is set to true. Generic rules compare an attribute in the resource with an attribute extracted from the user's security credentials and evaluates successfully if the comparison is successful. For instance "tenant_id:%(tenant_id)s" is successful if the tenant identifier in the resource is equal to the tenant identifier of the user submitting the request. This extract is from the default policy.json file: { [1] "admin_or_owner": [["role:admin"], ["tenant_id:%(tenant_id)s"]], "admin_or_network_owner": [["role:admin"], ["tenant_id:%(network_tenant_id)s"]], "admin_only": [["role:admin"]], "regular_user": [], "shared": [["field:networks:shared=True"]], [2] "default": [["rule:admin_or_owner"]], "create_subnet": [["rule:admin_or_network_owner"]], "get_subnet": [["rule:admin_or_owner"], ["rule:shared"]], "update_subnet": [["rule:admin_or_network_owner"]], "delete_subnet": [["rule:admin_or_network_owner"]], "create_network": [], [3] "get_network": [["rule:admin_or_owner"], ["rule:shared"]], [4] "create_network:shared": [["rule:admin_only"]], "update_network": [["rule:admin_or_owner"]], "delete_network": [["rule:admin_or_owner"]], "create_port": [], [5] "create_port:mac_address": [["rule:admin_or_network_owner"]], "create_port:fixed_ips": [["rule:admin_or_network_owner"]], "get_port": [["rule:admin_or_owner"]], "update_port": [["rule:admin_or_owner"]], "delete_port": [["rule:admin_or_owner"]] } [1] is a rule which evaluates successfully if the current user is an administrator or the owner of the resource specified in the request (tenant identifier is equal). [2] is the default policy which is always evaluated if an API operation does not match any of the policies in policy.json. [3] This policy evaluates successfully if either admin_or_owner, or shared evaluates successfully. [4] This policy restricts the ability to manipulate the shared attribute for a network to administrators only. [5] This policy restricts the ability to manipulate the mac_address attribute for a port only to administrators and the owner of the network where the port is attached. In some cases, some operations are restricted to administrators only. This example shows you how to modify a policy file to permit tenants to define networks and see their resources and permit administrative users to perform all other operations: { "admin_or_owner": [["role:admin"], ["tenant_id:%(tenant_id)s"]], "admin_only": [["role:admin"]], "regular_user": [], "default": [["rule:admin_only"]], "create_subnet": [["rule:admin_only"]], "get_subnet": [["rule:admin_or_owner"]], "update_subnet": [["rule:admin_only"]], "delete_subnet": [["rule:admin_only"]], "create_network": [], "get_network": [["rule:admin_or_owner"]], "create_network:shared": [["rule:admin_only"]], "update_network": [["rule:admin_or_owner"]], "delete_network": [["rule:admin_or_owner"]], "create_port": [["rule:admin_only"]], "get_port": [["rule:admin_or_owner"]], "update_port": [["rule:admin_only"]], "delete_port": [["rule:admin_only"]] }
High availability The use of high-availability in a Networking deployment helps prevent individual node failures. In general, you can run neutron-server and neutron-dhcp-agent in an active-active fashion. You can run the neutron-l3-agent service as active/passive, which avoids IP conflicts with respect to gateway IP addresses.
Networking high availability with Pacemaker You can run some Networking services into a cluster (Active / Passive or Active / Active for Networking Server only) with Pacemaker. Download the latest resources agents: neutron-server: https://github.com/madkiss/openstack-resource-agents neutron-dhcp-agent : https://github.com/madkiss/openstack-resource-agents neutron-l3-agent : https://github.com/madkiss/openstack-resource-agents For information about how to build a cluster, see Pacemaker documentation.
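As a minimal sketch, assuming the downloaded OCF resource agents are installed under the openstack provider (for example, /usr/lib/ocf/resource.d/openstack/neutron-server), a neutron-server cluster resource could be added with the crm shell as follows; resource names, parameters, and monitor intervals should be adapted from the agents' own documentation:
# crm configure primitive p_neutron-server ocf:openstack:neutron-server op monitor interval="30s" timeout="30s"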
Plug-in pagination and sorting support
Plug-ins that support native pagination and sorting
Plug-in Support Native Pagination Support Native Sorting
ML2 True True
Open vSwitch True True
Linux Bridge True True