From 1fe3fe0a3199017aaabf3ac421b26da725290b5b Mon Sep 17 00:00:00 2001 From: nerminamiller Date: Wed, 6 Nov 2013 11:47:39 -0500 Subject: [PATCH] Move various plugin configuration info from the old networking install chapter to Cloud Admin Guide Move Big Switch, Nicira, OSV, PLUMgrid, Ryu, and neutron agent config info to ch_neworking Add info about deleting nova-network before initializing networking install Add crossreferences to Config Ref and Install Guide author: nermina miller backport: havana Partial-Bug: 1244759 Change-Id: I5fe694de95da45879acccb5dfa45a331aa591403 --- doc/admin-guide-cloud/ch_networking.xml | 1359 ++++++++++++++--- .../section_networking_adv_features.xml | 130 +- ...on_networking_adv_operational_features.xml | 6 +- doc/common/section_rpc-for-networking.xml | 94 +- .../section_networking-options-reference.xml | 11 +- .../networking/section_networking-plugins.xml | 158 +- doc/install-guide/section_neutron-install.xml | 79 +- .../section_neutron-single-flat.xml | 20 +- 8 files changed, 1350 insertions(+), 507 deletions(-) diff --git a/doc/admin-guide-cloud/ch_networking.xml b/doc/admin-guide-cloud/ch_networking.xml index ec5cbee349..c0d79e2d28 100644 --- a/doc/admin-guide-cloud/ch_networking.xml +++ b/doc/admin-guide-cloud/ch_networking.xml @@ -15,10 +15,10 @@ API for defining network connectivity and addressing in the cloud. The Networking service enables operators to leverage different networking technologies to power their - cloud networking. The Networking service also provides an API to configure - and manage a variety of network services ranging from L3 - forwarding and NAT to load balancing, edge firewalls, and - IPSEC VPN. + cloud networking. The Networking service also provides an + API to configure and manage a variety of network services + ranging from L3 forwarding and NAT to load balancing, edge + firewalls, and IPSEC VPN. For a detailed description of the Networking API abstractions and their attributes, see the - - - - - - - - - - - - - - - - - - - - - - - - - -
Networking resources
ResourceDescription
NetworkAn isolated L2 segment, analogous to VLAN in the physical networking world.
SubnetA block of v4 or v6 IP addresses and associated configuration state.
PortA connection point for attaching a single device, such as the NIC of a virtual server, to a virtual network. Also describes the associated network configuration, such as the MAC and IP addresses to be used on that port.
-
+ + + + + + + + + + + + + + + + + + + + + + + + +
Networking resources
ResourceDescription
NetworkAn isolated L2 segment, analogous to VLAN + in the physical networking world.
SubnetA block of v4 or v6 IP addresses and + associated configuration state.
PortA connection point for attaching a single + device, such as the NIC of a virtual + server, to a virtual network. Also + describes the associated network + configuration, such as the MAC and IP + addresses to be used on that port.
You can configure rich network topologies by creating and configuring networks and subnets, and then instructing other OpenStack services like Compute - to attach virtual devices to ports on these networks.In particular, Networking supports each tenant having - multiple private networks, and allows tenants to - choose their own IP addressing scheme (even if those - IP addresses overlap with those used by other + to attach virtual devices to ports on these + networks. + In particular, Networking supports each tenant + having multiple private networks, and allows tenants + to choose their own IP addressing scheme (even if + those IP addresses overlap with those used by other tenants). The Networking service: @@ -103,105 +112,141 @@ others might use more advanced technologies, such as L2-in-L3 tunneling or OpenFlow, to provide similar benefits.
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Available networking plugins
PluginDocumentation
Big Switch Plug-in (Floodlight REST Proxy) - http://www.openflowhub.org/display/floodlightcontroller/Neutron+REST+Proxy+Plugin -
Brocade Plug-inhttps://github.com/brocade/brocade
Ciscohttp://wiki.openstack.org/cisco-neutron
Cloudbase Hyper-V Plug-inhttp://www.cloudbase.it/quantum-hyper-v-plugin/
Linux Bridge Plug-inhttp://wiki.openstack.org/Neutron-Linux-Bridge-Plugin
Mellanox Plug-inhttps://wiki.openstack.org/wiki/Mellanox-Neutron/
Midonet Plug-inhttp://www.midokura.com/
ML2 (Modular Layer 2) Plug-inhttps://wiki.openstack.org/wiki/Neutron/ML2
NEC OpenFlow Plug-inhttp://wiki.openstack.org/Quantum-NEC-OpenFlow-Plugin
Nicira NVP Plug-inNVP Product Overview, NVP Product Support
Open vSwitch Plug-inincluded in this guide
PLUMgridhttps://https://wiki.openstack.org/wiki/PLUMgrid-Neutron
Ryu Plug-inhttps://github.com/osrg/ryu/wiki/OpenStack
-
- Plug-ins can have different properties for hardware requirements, features, - performance, scale, or operator tools. Because Networking supports a large number of - plug-ins, the cloud administrator can weigh options to decide on the right - networking technology for the deployment. - In the Havana release, OpenStack Networking provides the Modular - Layer 2 (ML2) plug-in that can concurrently use multiple layer 2 - networking technologies that are found in real-world data centers. It currently - works with the existing Open vSwitch, Linux Bridge, and Hyper-v L2 agents. The ML2 - framework simplifies the addition of support for new L2 technologies and reduces the - effort that is required to add and maintain them compared to monolithic - plug-ins. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Available networking plug-ins
Plug-inDocumentation
Big Switch Plug-in + (Floodlight REST + Proxy)Documentation included in this guide and + http://www.openflowhub.org/display/floodlightcontroller/Neutron+REST+Proxy+Plugin +
Brocade + Plug-inhttps://github.com/brocade/brocade
Ciscohttp://wiki.openstack.org/cisco-neutron
Cloudbase Hyper-V + Plug-inhttp://www.cloudbase.it/quantum-hyper-v-plugin/
Linux Bridge + Plug-inhttp://wiki.openstack.org/Neutron-Linux-Bridge-Plugin
Mellanox + Plug-inhttps://wiki.openstack.org/wiki/Mellanox-Neutron/
Midonet + Plug-inhttp://www.midokura.com/
ML2 (Modular Layer + 2) Plug-inhttps://wiki.openstack.org/wiki/Neutron/ML2
NEC OpenFlow + Plug-inhttp://wiki.openstack.org/Quantum-NEC-OpenFlow-Plugin
Nicira NVP + Plug-inDocumentation included in this guide as + well as in NVP Product Overview, NVP Product Support
Open vSwitch + Plug-inDocumentation included in this guide.
PLUMgridDocumentation included in this guide as well as in https://wiki.openstack.org/wiki/PLUMgrid-Neutron
Ryu + Plug-inDocumentation included in this guide as + well as in https://github.com/osrg/ryu/wiki/OpenStack
+ Plug-ins can have different properties for hardware + requirements, features, performance, scale, or + operator tools. Because Networking supports a large + number of plug-ins, the cloud administrator can weigh + options to decide on the right networking technology + for the deployment. + In the Havana release, OpenStack Networking provides + the Modular Layer 2 + (ML2) plug-in that can concurrently use + multiple layer 2 networking technologies that are + found in real-world data centers. It currently works + with the existing Open vSwitch, Linux Bridge, and + Hyper-v L2 agents. The ML2 framework simplifies the + addition of support for new L2 technologies and + reduces the effort that is required to add and + maintain them compared to monolithic plug-ins. - Plugins Deprecation Notice: - The Open vSwitch and Linux Bridge plug-ins are deprecated in the Havana - release and will be removed in the Icehouse release. All features have been - ported to the ML2 plug-in in the form of mechanism drivers. ML2 currently - provides Linux Bridge, Open vSwitch and Hyper-v mechanism drivers. + Plug-ins deprecation notice: + The Open vSwitch and Linux Bridge plug-ins are + deprecated in the Havana release and will be + removed in the Icehouse release. All features have + been ported to the ML2 plug-in in the form of + mechanism drivers. ML2 currently provides Linux + Bridge, Open vSwitch and Hyper-v mechanism + drivers. Not all Networking plug-ins are compatible with all possible Compute drivers: @@ -210,7 +255,7 @@ drivers - + Plug-in Libvirt (KVM/QEMU) XenServer VMware @@ -221,7 +266,7 @@ - Bigswitch / Floodlight + Big Switch / Floodlight Yes @@ -340,13 +385,776 @@ +
Plug-in configurations
For configuration options, see Networking configuration options in the Configuration Reference. The following sections explain in detail how to configure specific plug-ins.
+ Configure Big Switch, Floodlight REST Proxy + plug-in + + To use the REST Proxy plug-in with + OpenStack Networking + + Edit + /etc/neutron/neutron.conf + and set: + core_plugin = neutron.plugins.bigswitch.plugin.NeutronRestProxyV2 + + + Edit the plug-in configuration file, + /etc/neutron/plugins/bigswitch/restproxy.ini, + and specify a comma-separated list of + controller_ip:port + pairs: + server = <controller-ip>:<port> + For database configuration, see Install Networking Services + in any of the Installation + Guides in the OpenStack Documentation + index. (The link defaults to + the Ubuntu version.) + + + To apply the new settings, restart + neutron-server: + # sudo service neutron-server restart + + +
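For reference, the resulting line in restproxy.ini might look like the following sketch, which assumes two hypothetical Floodlight controllers at 192.0.2.10 and 192.0.2.11 listening on port 80; substitute the addresses and port used in your deployment:
# comma-separated controller_ip:port pairs (hypothetical values)
server = 192.0.2.10:80,192.0.2.11:80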
+
+ Configure OVS plug-in + If you use the Open vSwitch (OVS) plug-in in + a deployment with multiple hosts, you will + need to use either tunneling or vlans to + isolate traffic from multiple networks. + Tunneling is easier to deploy because it does + not require configuring VLANs on network + switches. + The following procedure uses + tunneling: + + To configure OpenStack Networking to + use the OVS plug-in + + Edit + /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini + to specify the following + values (for database configuration, + see Install Networking Services + in Installation + Guide): + enable_tunneling=True tenant_network_type=gre tunnel_id_ranges=1:1000 # only required for nodes running agents local_ip=<data-net-IP-address-of-node> + + + If you are using the neutron DHCP + agent, add the following to + /etc/neutron/dhcp_agent.ini: + dnsmasq_config_file=/etc/neutron/dnsmasq-neutron.conf + + + Create + /etc/neutron/dnsmasq-neutron.conf, + and add the following values to lower + the MTU size on instances and prevent + packet fragmentation over the GRE + tunnel: + dhcp-option-force=26,1400 + + + After performing that change on the + node running neutron-server, + restart neutron-server to + apply the new settings: + # sudo service neutron-server restart + + +
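Putting these options together, the tunneling-related part of ovs_neutron_plugin.ini on a node might look like the following sketch. The [ovs] section name and the 10.0.1.21 data-network address are assumptions; substitute the values for your environment:
[ovs]
enable_tunneling=True
tenant_network_type=gre
tunnel_id_ranges=1:1000
# local_ip is only required on nodes that run agents (hypothetical address)
local_ip=10.0.1.21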
+
+ Configure Nicira NVP plug-in + + To configure OpenStack Networking to + use the NVP plug-in + + Install the NVP plug-in, as + follows: + # sudo apt-get install neutron-plugin-nicira + + + Edit + /etc/neutron/neutron.conf + and set: + core_plugin = neutron.plugins.nicira.NeutronPlugin.NvpPluginV2 + Example + neutron.conf + file for NVP: + core_plugin = neutron.plugins.nicira.NeutronPlugin.NvpPluginV2 +rabbit_host = 192.168.203.10 +allow_overlapping_ips = True + + + To tell OpenStack Networking about a + controller cluster, create a new + [cluster:<name>] section in the + /etc/neutron/plugins/nicira/nvp.ini + file, and add the following entries + (for database configuration, see Install Networking Services + in Installation + Guide): + + The UUID of the NVP Transport + Zone that should be used by default + when a tenant creates a network. + This value can be retrieved from + the NVP Manager Transport Zones + page: + default_tz_uuid = <uuid_of_the_transport_zone> + + + A connection string + indicating parameters to be used by + the NVP plug-in when connecting to + the NVP web service API. There will + be one of these lines in the file + for each NVP controller in your + deployment. An NVP operator will + likely want to update the NVP + controller IP and password, but the + remaining fields can be the + defaults: + nvp_controller_connection = <controller_node_ip>:<controller_port>:<api_user>:<api_password>:<request_timeout>:<http_timeout>:<retries>:<redirects> + + + The UUID of an NVP L3 Gateway + Service that should be used by + default when a tenant creates a + router. This value can be retrieved + from the NVP Manager Gateway + Services page: + default_l3_gw_service_uuid = <uuid_of_the_gateway_service> + + Ubuntu packaging currently + does not update the neutron init + script to point to the NVP + configuration file. Instead, you + must manually update + /etc/default/neutron-server + with the following: + NEUTRON_PLUGIN_CONFIG = /etc/neutron/plugins/nicira/nvp.ini + + + + + + To apply the new settings, restart + neutron-server: + # sudo service neutron-server restart + + + Example nvp.ini + file: + [cluster:main] +default_tz_uuid = d3afb164-b263-4aaa-a3e4-48e0e09bb33c +default_l3_gw_service_uuid=5c8622cc-240a-40a1-9693-e6a5fca4e3cf +nvp_controller_connection=10.0.0.2:443:admin:admin:30:10:2:2 +nvp_controller_connection=10.0.0.3:443:admin:admin:30:10:2:2 +nvp_controller_connection=10.0.0.4:443:admin:admin:30:10:2:2 + + To debug nvp.ini + configuration issues, run the following + command from the host running + neutron-server: + # check-nvp-config <path/to/nvp.ini>This + command tests whether neutron-server can log + into all of the NVP Controllers, SQL + server, and whether all of the UUID values + are correct. + +
+
+ Configure PLUMgrid plug-in + + To use the PLUMgrid plug-in with + OpenStack Networking + + Edit + /etc/neutron/neutron.conf + and set: + core_plugin = neutron.plugins.plumgrid.plumgrid_plugin.plumgrid_plugin.NeutronPluginPLUMgridV2 + + + Edit + /etc/neutron/plugins/plumgrid/plumgrid.ini + under the + [PLUMgridDirector] + section, and specify the IP address, + port, admin user name, and password of + the PLUMgrid Director: + [PLUMgridDirector] +director_server = "PLUMgrid-director-ip-address" +director_server_port = "PLUMgrid-director-port" +username = "PLUMgrid-director-admin-username" +password = "PLUMgrid-director-admin-password" + For database configuration, see Install Networking Services + in Installation + Guide. + + + To apply the new settings, restart + neutron-server: + # sudo service neutron-server restart + + +
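As an illustration only, a filled-in [PLUMgridDirector] section might look like the following; the address, port, and credentials are hypothetical placeholders, not defaults:
[PLUMgridDirector]
director_server = 192.0.2.20
director_server_port = 443
username = plumgrid-admin
password = plumgrid-password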
+
+ Configure Ryu plug-in + + To use the Ryu plug-in with OpenStack + Networking + + Install the Ryu plug-in, as + follows: + # sudo apt-get install neutron-plugin-ryu + + + Edit + /etc/neutron/neutron.conf + and set: + core_plugin = neutron.plugins.ryu.ryu_neutron_plugin.RyuNeutronPluginV2 + + + Edit + /etc/neutron/plugins/ryu/ryu.ini + (for database configuration, see Install Networking Services + in Installation + Guide), and update the + following in the + [ovs] + section for the + ryu-neutron-agent: + + The + openflow_rest_api + is used to tell where Ryu is + listening for REST API. Substitute + ip-address + and + port-no + based on your Ryu setup. + + + The + ovsdb_interface + is used for Ryu to access the + ovsdb-server. + Substitute eth0 based on your set + up. The IP address is derived from + the interface name. If you want to + change this value irrespective of + the interface name, + ovsdb_ip + can be specified. If you use a + non-default port for + ovsdb-server, + it can be specified by + ovsdb_port. + + + tunnel_interface + needs to be set to tell what IP + address is used for tunneling (if + tunneling isn't used, this value is + ignored). The IP address is derived + from the network interface + name. + + + You can use the same configuration + file for many Compute nodes by using a + network interface name with a + different IP address: + openflow_rest_api = <ip-address>:<port-no> ovsdb_interface = <eth0> tunnel_interface = <eth0> + + + To apply the new settings, restart + neutron-server: + # sudo service neutron-server restart + + +
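For example, a node whose Ryu controller listens on 192.0.2.30 and whose data interface is eth0 could use the following [ovs] settings in ryu.ini; the address and port are hypothetical placeholders:
[ovs]
openflow_rest_api = 192.0.2.30:8080
ovsdb_interface = eth0
tunnel_interface = eth0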
+
+ +
Configure neutron agents
Plug-ins typically have requirements for particular software that must be run on each node that handles data packets. This includes any node that runs nova-compute and nodes that run dedicated OpenStack Networking service agents such as neutron-dhcp-agent, neutron-l3-agent, or neutron-lbaas-agent (see below for more information about individual service agents).
A data-forwarding node typically has a network interface with an IP address on the “management network” and another interface on the “data network”.
This section shows you how to install and configure a subset of the available plug-ins, which might include the installation of switching software (for example, Open vSwitch) as well as agents used to communicate with the neutron-server process running elsewhere in the data center.
+ Configure data-forwarding nodes +
Node set up: OVS plug-in
If you use the Open vSwitch plug-in, you must install Open vSwitch and the neutron-plugin-openvswitch-agent agent on each data-forwarding node:
Do not install the openvswitch-brcompat package as it breaks the security groups functionality.
To set up each node for the OVS plug-in
Install the OVS agent package (this pulls in the Open vSwitch software as a dependency):
# sudo apt-get install neutron-plugin-openvswitch-agent
On each node that runs the neutron-plugin-openvswitch-agent:
Replicate the ovs_neutron_plugin.ini file created in the first step onto the node.
If using tunneling, the node's ovs_neutron_plugin.ini file must also be updated with the node's IP address configured on the data network using the local_ip value.
Restart Open vSwitch to properly load the kernel module:
# sudo service openvswitch-switch restart
Restart the agent:
# sudo service neutron-plugin-openvswitch-agent restart
All nodes that run neutron-plugin-openvswitch-agent must have an OVS br-int bridge. To create the bridge, run:
# sudo ovs-vsctl add-br br-int
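To confirm that the integration bridge was created, list the bridges known to Open vSwitch; the output should include br-int:
# sudo ovs-vsctl list-br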
+
+ Node set up: Nicira NVP plug-in + If you use the Nicira NVP plug-in, you must + also install Open vSwitch on each + data-forwarding node. However, you do not need + to install an additional agent on each + node. + + It is critical that you are running an + Open vSwitch version that is compatible + with the current version of the NVP + Controller software. Do not use the Open + vSwitch version that is installed by + default on Ubuntu. Instead, use the Open + Vswitch version that is provided on the + Nicira support portal for your NVP + Controller version. + + + To set up each node for the Nicira NVP + plug-in + + Ensure each data-forwarding node has + an IP address on the "management + network," and an IP address on the + "data network" that is used for + tunneling data traffic. For full + details on configuring your forwarding + node, see the NVP + Administrator + Guide. + + + Use the NVP Administrator + Guide to add the node + as a "Hypervisor" using the NVP + Manager GUI. Even if your forwarding + node has no VMs and is only used for + services agents like + neutron-dhcp-agent + or + neutron-lbaas-agent, + it should still be added to NVP as a + Hypervisor. + + + After following the NVP + Administrator Guide, + use the page for this Hypervisor in + the NVP Manager GUI to confirm that + the node is properly connected to the + NVP Controller Cluster and that the + NVP Controller Cluster can see the + br-int + integration bridge. + + +
+
Node set up: Ryu plug-in
If you use the Ryu plug-in, you must install both Open vSwitch and Ryu, in addition to the Ryu agent package:
To set up each node for the Ryu plug-in
Install Ryu (there is currently no Ryu package for Ubuntu):
# sudo pip install ryu
Install the Ryu agent and Open vSwitch packages:
# sudo apt-get install neutron-plugin-ryu-agent openvswitch-switch python-openvswitch openvswitch-datapath-dkms
Replicate the ovs_ryu_plugin.ini and neutron.conf files created in the above step on all nodes that run neutron-plugin-ryu-agent.
Restart Open vSwitch to properly load the kernel module:
# sudo service openvswitch-switch restart
Restart the agent:
# sudo service neutron-plugin-ryu-agent restart
All nodes that run neutron-plugin-ryu-agent also require that an OVS bridge named "br-int" exists on each node. To create the bridge, run:
# sudo ovs-vsctl add-br br-int
+
+
+ Configure DHCP agent + The DHCP service agent is compatible with all + existing plug-ins and is required for all + deployments where VMs should automatically receive + IP addresses through DHCP. + + To install and configure the DHCP + agent + + You must configure the host running the + neutron-dhcp-agent + as a "data forwarding node" according to + the requirements for your plug-in (see + ). + + + Install the DHCP agent: + # sudo apt-get install neutron-dhcp-agent + + + Finally, update any options in the + /etc/neutron/dhcp_agent.ini + file that depend on the plug-in in use + (see the sub-sections). + + +
+ DHCP agent setup: OVS plug-in + The following DHCP agent options are + required in the + /etc/neutron/dhcp_agent.ini + file for the OVS plug-in: + [DEFAULT] +ovs_use_veth = True +enable_isolated_metadata = True +use_namespaces = True +interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver +
+
+ DHCP agent setup: NVP plug-in + The following DHCP agent options are + required in the + /etc/neutron/dhcp_agent.ini + file for the NVP plug-in: + [DEFAULT] +ovs_use_veth = True +enable_metadata_network = True +enable_isolated_metadata = True +use_namespaces = True +interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver +
+
DHCP agent setup: Ryu plug-in
The following DHCP agent options are required in the /etc/neutron/dhcp_agent.ini file for the Ryu plug-in:
[DEFAULT]
ovs_use_veth = True
use_namespaces = True
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
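Whichever plug-in you use, restart the DHCP agent after editing /etc/neutron/dhcp_agent.ini so that the new options take effect. The service name below assumes the Ubuntu packaging used elsewhere in this guide:
# sudo service neutron-dhcp-agent restart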
+
+
+ Configure L3 agent + Neutron has a widely used API extension to allow + administrators and tenants to create "routers" + that connect to L2 networks. + Many plug-ins rely on the L3 service agent to + implement the L3 functionality. However, the + following plug-ins already have built-in L3 + capabilities: + + + + Nicira NVP plug-in + + + Big Switch/Floodlight plug-in, which + supports both the open source Floodlight controller and + the proprietary Big Switch + controller. + + Only the proprietary BigSwitch + controller implements L3 + functionality. When using + Floodlight as your OpenFlow + controller, L3 functionality is not + available. + + + + PLUMgrid plug-in + + + + Do not configure or use + neutron-l3-agent + if you use one of these plug-ins. + + + To install the L3 agent for all other + plug-ins + + Install the + neutron-l3-agent + binary on the network node: + # sudo apt-get install neutron-l3-agent + + + To uplink the node that runs + neutron-l3-agent + to the external network, create a + bridge named "br-ex" and attach the + NIC for the external network to this + bridge. + For example, with Open vSwitch and + NIC eth1 connected to the external + network, run: + # sudo ovs-vsctl add-br br-ex +# sudo ovs-vsctl add-port br-ex eth1 + Do not manually configure an IP + address on the NIC connected to the + external network for the node running + neutron-l3-agent. + Rather, you must have a range of IP + addresses from the external network + that can be used by OpenStack + Networking for routers that uplink to + the external network. This range must + be large enough to have an IP address + for each router in the deployment, as + well as each floating IP. + + + The + neutron-l3-agent + uses the Linux IP stack and iptables + to perform L3 forwarding and NAT. In + order to support multiple routers with + potentially overlapping IP addresses, + neutron-l3-agent + defaults to using Linux network + namespaces to provide isolated + forwarding contexts. As a result, the + IP addresses of routers will not be + visible simply by running ip + addr list or + ifconfig on the + node. Similarly, you will not be able + to directly ping + fixed IPs. + To do either of these things, you + must run the command within a + particular router's network namespace. + The namespace will have the name + "qrouter-<UUID of the router>. + The following commands are examples of + running commands in the namespace of a + router with UUID + 47af3868-0fa8-4447-85f6-1304de32153b: + # ip netns exec qrouter-47af3868-0fa8-4447-85f6-1304de32153b ip addr list +# ip netns exec qrouter-47af3868-0fa8-4447-85f6-1304de32153b ping <fixed-ip> + + + +
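To see which router namespaces exist on a node that runs neutron-l3-agent, list the network namespaces with the standard iproute2 command; each qrouter-<UUID> entry corresponds to one Networking router hosted on that node:
# ip netns list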
+
Configure LBaaS agent
Starting with the Havana release, the Neutron Load-Balancer-as-a-Service (LBaaS) supports an agent scheduling mechanism, so several neutron-lbaas-agents can be run on several nodes (one agent per node).
To install the LBaaS agent and configure the node
Install the agent by running:
# sudo apt-get install neutron-lbaas-agent
If you are using:
An OVS-based plug-in (OVS, NVP, Ryu, NEC, BigSwitch/Floodlight), you must set:
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
A plug-in that uses LinuxBridge, you must set:
interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
To use the reference implementation, you must also set:
device_driver = neutron.plugins.services.agent_loadbalancer.drivers.haproxy.namespace_driver.HaproxyNSDriver
Make sure to set the following parameter in neutron.conf on the host that runs neutron-server:
service_plugins = neutron.plugins.services.agent_loadbalancer.plugin.LoadBalancerPlugin
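After changing these settings, restart the affected services so that the new configuration is loaded. The service names below assume the Ubuntu packaging used elsewhere in this guide:
# sudo service neutron-lbaas-agent restart
# sudo service neutron-server restart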
+
+ Configure FWaaS agent + The Firewall-as-a-Service (FWaaS) agent is + co-located with the Neutron L3 agent and does not + require any additional packages apart from those + required for the Neutron L3 agent. You can enable + the FWaaS functionality by setting the + configuration, as follows. + + To configure FWaaS service and + agent + + Make sure to set the following parameter + in the neutron.conf + file on the host that runs neutron-server: + service_plugins = neutron.services.firewall.fwaas_plugin.FirewallPlugin + + + To use the reference implementation, you + must also add a FWaaS driver configuration + to the neutron.conf + file on every node where the Neutron L3 + agent is deployed: + [fwaas] +driver = neutron.services.firewall.drivers.linux.iptables_fwaas.IptablesFwaasDriver +enabled = True + + +
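Because the FWaaS driver runs inside the L3 agent, restart both neutron-server and the L3 agent after making these changes; the service names below assume the Ubuntu packaging used elsewhere in this guide:
# sudo service neutron-server restart
# sudo service neutron-l3-agent restart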
Networking architecture Before you deploy Networking, it helps to understand the Networking components and how these components interact - with each other and with other OpenStack services. + with each other and other OpenStack services.
Overview Networking is a standalone service, just like other @@ -356,7 +1164,7 @@ deploying several processes on a variety of hosts. The Networking server uses the neutron-server daemon + class="service">neutron-server daemon to expose the Networking API and to pass user requests to the configured Networking plug-in for additional processing. Typically, the plug-in requires access to @@ -382,25 +1190,31 @@ - plug-in agent - (neutron-*-agent) - Runs on each hypervisor to perform local vswitch - configuration. The agent that runs depends on - the plug-in that you use, and some plug-ins do - not require an agent. + plug-in + agent + (neutron-*-agent) + Runs on each hypervisor to perform + local vswitch configuration. The agent + that runs depends on the plug-in that + you use, and some plug-ins do not + require an agent. - dhcp agent - (neutron-dhcp-agent) - Provides DHCP services to tenant networks. - Some plug-ins use this agent. + dhcp + agent + (neutron-dhcp-agent) + Provides DHCP services to tenant + networks. Some plug-ins use this + agent. - l3 agent - (neutron-l3-agent) - Provides L3/NAT forwarding to provide external - network access for VMs on tenant networks. - Some plug-ins use this agent. + l3 + agent + (neutron-l3-agent) + Provides L3/NAT forwarding to provide + external network access for VMs on + tenant networks. Some plug-ins use + this agent. @@ -421,7 +1235,7 @@ >nova-compute service communicates with the Networking API to plug each virtual NIC on the VM into a particular - network.  + network.  The Dashboard (Horizon) integrates with the @@ -471,7 +1285,8 @@ - + @@ -482,27 +1297,45 @@ - - + + - - + + - - + + - - + +
General distinct physical data center networksGeneral distinct physical data center + networks
Management networkProvides internal communication between OpenStack Components. IP - addresses on this network should be reachable only within the data center.Management + networkProvides internal communication + between OpenStack Components. IP + addresses on this network should be + reachable only within the data + center.
Data networkProvides VM data communication within the cloud deployment. The IP addressing - requirements of this network depend on the Networking plug-in that is used.Data + networkProvides VM data communication within + the cloud deployment. The IP + addressing requirements of this + network depend on the Networking + plug-in that is used.
External networkProvides VMs with Internet access in some deployment scenarios. - Anyone on the Internet can reach IP addresses on this network.External + networkProvides VMs with Internet access in + some deployment scenarios. Anyone on + the Internet can reach IP addresses on + this network.
API networkExposes all OpenStack APIs, including the Networking API, to - tenants. IP addresses on this network should be reachable by anyone on the Internet. The - API network might be the same as the external network, because it is possible to create an - external-network subnet that is allocated IP ranges that use less than the full range of IP - addresses in an IP block.API + networkExposes all OpenStack APIs, including + the Networking API, to tenants. IP + addresses on this network should be + reachable by anyone on the + Internet. The API network might be the + same as the external network, because + it is possible to create an + external-network subnet that is + allocated IP ranges that use less than + the full range of IP addresses in an + IP block.
@@ -511,7 +1344,19 @@
Use Networking - + You can start and stop OpenStack Networking services + using the service command. For + example: + # sudo service neutron-server stop +# sudo service neutron-server status +# sudo service neutron-server start +# sudo service neutron-server restart + Log files are in the + /var/log/neutron + directory. + Configuration files are in the + /etc/neutron + directory. You can use Networking in the following ways: @@ -539,7 +1384,7 @@ The CLI includes a number of options. For details, refer to the OpenStack End User + >OpenStack End User Guide.
API abstractions @@ -550,50 +1395,58 @@ L3 forwarding and NAT, which provides capabilities similar to nova-network. - - - - - - - - - - - - - - - - - - - - - - - -
API abstractions
AbstractionDescription
NetworkAn isolated L2 network segment - (similar to a VLAN) that forms the basis - for describing the L2 network topology - available in an Networking deployment.
SubnetAssociates a block of IP - addresses and other network configuration, - such as, default gateways or dns-servers, - with an Networking network. Each subnet - represents an IPv4 or IPv6 address block - and, if needed, each Networking network - can have multiple subnets.
PortRepresents an attachment port to a - L2 Networking network. When a port is - created on the network, by default it is - allocated an available fixed IP address - out of one of the designated subnets for - each IP version (if one exists). When the - port is destroyed, its allocated addresses - return to the pool of available IPs on the - subnet. Users of the Networking API can - either choose a specific IP address from - the block, or let Networking choose the - first available IP address.
+ API abstractions + + + + + Abstraction + Description + + + + + Network + An isolated L2 network segment + (similar to a VLAN) that forms the + basis for describing the L2 network + topology available in an Networking + deployment. + + + Subnet + Associates a block of IP addresses + and other network configuration, + such as, default gateways or + dns-servers, with an Networking + network. Each subnet represents an + IPv4 or IPv6 address block and, if + needed, each Networking network can + have multiple subnets. + + + Port + Represents an attachment port to a + L2 Networking network. When a port + is created on the network, by + default it is allocated an + available fixed IP address out of + one of the designated subnets for + each IP version (if one exists). + When the port is destroyed, its + allocated addresses return to the + pool of available IPs on the + subnet. Users of the Networking API + can either choose a specific IP + address from the block, or let + Networking choose the first + available IP address. + + + The following table summarizes the attributes available for each networking abstraction. For @@ -947,14 +1800,14 @@ of tenants by specifying an Identity in the command, as follows: - $ neutron net-create --tenant-id=tenant-id network-name + # neutron net-create --tenant-id=tenant-id network-name For example: - $ neutron net-create --tenant-id=5e4bbe24b67a4410bc4d9fae29ec394e net1 + # neutron net-create --tenant-id=5e4bbe24b67a4410bc4d9fae29ec394e net1 To view all tenant IDs in Identity, run the following command as an Identity Service admin user: - $ keystone tenant-list + # keystone tenant-list
@@ -977,37 +1830,37 @@ Creates a network that all tenants can use. - $ neutron net-create --shared public-net + # neutron net-create --shared public-net Creates a subnet with a specified gateway IP address. - $ neutron subnet-create --gateway 10.0.0.254 net1 10.0.0.0/24 + # neutron subnet-create --gateway 10.0.0.254 net1 10.0.0.0/24 Creates a subnet that has no gateway IP address. - $ neutron subnet-create --no-gateway net1 10.0.0.0/24 + # neutron subnet-create --no-gateway net1 10.0.0.0/24 Creates a subnet with DHCP disabled. - $ neutron subnet-create net1 10.0.0.0/24 --enable_dhcp False + # neutron subnet-create net1 10.0.0.0/24 --enable_dhcp False Creates a subnet with a specified set of host routes. - $ neutron subnet-create test-net1 40.0.0.0/24 --host_routes type=dict list=true destination=40.0.1.0/24,nexthop=40.0.0.2 + # neutron subnet-create test-net1 40.0.0.0/24 --host_routes type=dict list=true destination=40.0.1.0/24,nexthop=40.0.0.2 Creates a subnet with a specified set of dns name servers. - $ neutron subnet-create test-net1 40.0.0.0/24 --dns_nameservers list=true 8.8.8.7 8.8.8.8 + # neutron subnet-create test-net1 40.0.0.0/24 --dns_nameservers list=true 8.8.8.7 8.8.8.8 Displays all ports and IPs allocated on a network. - $ neutron port-list --network_id net-id + # neutron port-list --network_id net-id @@ -1035,12 +1888,12 @@ Checks available networks. - $ neutron net-list + # neutron net-list Boots a VM with a single NIC on a selected Networking network. - $ nova boot --image img --flavor flavor --nic net-id=net-id vm-name + # nova boot --image img --flavor flavor --nic net-id=net-id vm-name @@ -1048,21 +1901,21 @@ that matches the Compute instance UUID. See . - + linkend="network_compute_note" + />. - $ neutron port-list --device_id=vm-id + # neutron port-list --device_id=vm-id Searches for ports, but shows only the for the port. - $ neutron port-list --field mac_address --device_id=vm-id + # neutron port-list --field mac_address --device_id=vm-id Temporarily disables a port from sending traffic. - $ neutron port-update port-id --admin_state_up=False + # neutron port-update port-id --admin_state_up=False @@ -1079,8 +1932,8 @@ VM NIC is automatically created and associated with the default security group. You can configure security group rules to + linkend="enabling_ping_and_ssh" + >security group rules to enable users to access the VM.
@@ -1109,7 +1962,7 @@ Boots a VM with multiple NICs. - $ nova boot --image img --flavor flavor --nic net-id=net1-id --nic net-id=net2-id vm-name + # nova boot --image img --flavor flavor --nic net-id=net1-id --nic net-id=net2-id vm-name Boots a VM with a specific IP address. @@ -1118,8 +1971,8 @@ specifying a rather than a . - $ neutron port-create --fixed-ip subnet_id=subnet-id,ip_address=IP net-id -$ nova boot --image img --flavor flavor --nic port-id=port-id vm-name + # neutron port-create --fixed-ip subnet_id=subnet-id,ip_address=IP net-id +# nova boot --image img --flavor flavor --nic port-id=port-id vm-name @@ -1128,7 +1981,7 @@ tenant who submits the request (without the option). - $ nova boot --image img --flavor flavor vm-name + # nova boot --image img --flavor flavor vm-name @@ -1156,9 +2009,9 @@ ping and ssh access to your VMs. - $ neutron security-group-rule-create --protocol icmp \ + # neutron security-group-rule-create --protocol icmp \ --direction ingress default - $ neutron security-group-rule-create --protocol tcp --port-range-min 22 \ + # neutron security-group-rule-create --protocol tcp --port-range-min 22 \ --port-range-max 22 --direction ingress default @@ -1172,8 +2025,8 @@ ping and ssh access to your VMs. - $ nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0 -$ nova secgroup-add-rule default tcp 22 22 0.0.0.0/0 + # nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0 +# nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
@@ -1202,7 +2055,7 @@ Service endpoint. For more information about authentication with the Identity Service, see OpenStack Identity Service API v2.0 + >OpenStack Identity Service API v2.0 Reference. When the Identity Service is enabled, it is not mandatory to specify the tenant ID for resources in create requests because the @@ -1250,7 +2103,7 @@ a policy, which is evaluated. For instance in create_subnet: [["admin_or_network_owner"]], create_subnet is a policy, + role="italic">create_subnet is a policy, and admin_or_network_owner is a rule. Policies are triggered by the Networking policy engine @@ -1378,18 +2231,18 @@ }
- High Availability + High availability The use of high-availability in a Networking deployment helps prevent individual node failures. In general, you can run neutron-server and neutron-dhcp-agent in an + class="service">neutron-dhcp-agent in an active-active fashion. You can run the neutron-l3-agent service as active/passive, which avoids IP conflicts with respect to gateway IP addresses.
- Networking High Availability with Pacemaker + Networking high availability with Pacemaker You can run some Networking services into a cluster (Active / Passive or Active / Active for Networking Server only) with Pacemaker. @@ -1397,18 +2250,18 @@ neutron-server: https://github.com/madkiss/openstack-resource-agents + xlink:href="https://github.com/madkiss/openstack-resource-agents/blob/master/ocf/neutron-server" + >https://github.com/madkiss/openstack-resource-agents neutron-dhcp-agent : https://github.com/madkiss/openstack-resource-agents + xlink:href="https://github.com/madkiss/openstack-resource-agents/blob/master/ocf/neutron-agent-dhcp" + >https://github.com/madkiss/openstack-resource-agents neutron-l3-agent : https://github.com/madkiss/openstack-resource-agents + xlink:href="https://github.com/madkiss/openstack-resource-agents/blob/master/ocf/neutron-agent-l3" + >https://github.com/madkiss/openstack-resource-agents diff --git a/doc/admin-guide-cloud/section_networking_adv_features.xml b/doc/admin-guide-cloud/section_networking_adv_features.xml index e946d389d1..2ddabfa358 100644 --- a/doc/admin-guide-cloud/section_networking_adv_features.xml +++ b/doc/admin-guide-cloud/section_networking_adv_features.xml @@ -514,15 +514,15 @@ Creates external networks. - $ neutron net-create public --router:external=True -$ neutron subnet-create public 172.16.1.0/24 + # neutron net-create public --router:external=True +# neutron subnet-create public 172.16.1.0/24 Lists external networks. - $ neutron net-list -- --router:external=True + # neutron net-list -- --router:external=True @@ -530,13 +530,13 @@ connects to multiple L2 networks privately. - $ neutron net-create net1 -$ neutron subnet-create net1 10.0.0.0/24 -$ neutron net-create net2 -$ neutron subnet-create net2 10.0.1.0/24 -$ neutron router-create router1 -$ neutron router-interface-add router1 <subnet1-uuid> -$ neutron router-interface-add router1 <subnet2-uuid> + # neutron net-create net1 +# neutron subnet-create net1 10.0.0.0/24 +# neutron net-create net2 +# neutron subnet-create net2 10.0.1.0/24 +# neutron router-create router1 +# neutron router-interface-add router1 <subnet1-uuid> +# neutron router-interface-add router1 <subnet2-uuid> @@ -546,7 +546,7 @@ act as a NAT gateway for external connectivity. - $ neutron router-gateway-set router1 <ext-net-id> + # neutron router-gateway-set router1 <ext-net-id> The router obtains an interface with the gateway_ip address of the subnet, and this interface is attached to a @@ -566,7 +566,7 @@ Lists routers. - $ neutron router-list + # neutron router-list @@ -574,7 +574,7 @@ Shows information for a specified router. - $ neutron router-show <router_id> + # neutron router-show <router_id> @@ -590,7 +590,7 @@ represents the VM NIC to which the floating IP should map. - $ neutron port-list -c id -c fixed_ips -- --device_id=<instance_id> + # neutron port-list -c id -c fixed_ips -- --device_id=<instance_id> This port must be on an Networking subnet that is attached to a router uplinked to the external network used @@ -610,8 +610,8 @@ Creates a floating IP address and associates it with a port. - $ neutron floatingip-create <ext-net-id> -$ neutron floatingip-associate <floatingip-id> <internal VM port-id> + # neutron floatingip-create <ext-net-id> +# neutron floatingip-associate <floatingip-id> <internal VM port-id> @@ -620,14 +620,14 @@ associates it with a port, in a single step. 
- $ neutron floatingip-create --port_id <internal VM port-id> <ext-net-id> + # neutron floatingip-create --port_id <internal VM port-id> <ext-net-id> Lists floating IPs. - $ neutron floatingip-list + # neutron floatingip-list @@ -635,7 +635,7 @@ Finds floating IP for a specified VM port. - $ neutron floatingip-list -- --port_id=ZZZ + # neutron floatingip-list -- --port_id=ZZZ @@ -643,7 +643,7 @@ Disassociates a floating IP address. - $ neutron floatingip-disassociate <floatingip-id> + # neutron floatingip-disassociate <floatingip-id> @@ -651,14 +651,14 @@ Deletes the floating IP address. - $ neutron floatingip-delete <floatingip-id> + # neutron floatingip-delete <floatingip-id> Clears the gateway. - $ neutron router-gateway-clear router1 + # neutron router-gateway-clear router1 @@ -666,14 +666,14 @@ Removes the interfaces from the router. - $ neutron router-interface-delete router1 <subnet-id> + # neutron router-interface-delete router1 <subnet-id> Deletes the router. - $ neutron router-delete router1 + # neutron router-delete router1 @@ -889,51 +889,51 @@ Creates a security group for our web servers. - $ neutron security-group-create webservers --description "security group for webservers" + # neutron security-group-create webservers --description "security group for webservers" Lists security groups. - $ neutron security-group-list + # neutron security-group-list Creates a security group rule to allow port 80 ingress. - $ neutron security-group-rule-create --direction ingress --protocol tcp --port_range_min 80 --port_range_max 80 <security_group_uuid> + # neutron security-group-rule-create --direction ingress --protocol tcp --port_range_min 80 --port_range_max 80 <security_group_uuid> Lists security group rules. - $ neutron security-group-rule-list + # neutron security-group-rule-list Deletes a security group rule. - $ neutron security-group-rule-delete <security_group_rule_uuid> + # neutron security-group-rule-delete <security_group_rule_uuid> Deletes a security group. - $ neutron security-group-delete <security_group_uuid> + # neutron security-group-delete <security_group_uuid> Creates a port and associates two security groups. - $ neutron port-create --security-group <security_group_id1> --security-group <security_group_id2> <network_id> + # neutron port-create --security-group <security_group_id1> --security-group <security_group_id2> <network_id> Removes security groups from a port. - $ neutron port-update --no-security-groups <port_id> + # neutron port-update --no-security-groups <port_id> @@ -980,15 +980,15 @@ option is required for pool creation. - $ neutron lb-pool-create --lb-method ROUND_ROBIN --name mypool --protocol HTTP --subnet-id <subnet-uuid> --provider <provider_name> + # neutron lb-pool-create --lb-method ROUND_ROBIN --name mypool --protocol HTTP --subnet-id <subnet-uuid> --provider <provider_name> Associates two web servers with pool. - $ neutron lb-member-create --address <webserver one IP> --protocol-port 80 mypool -$ neutron lb-member-create --address <webserver two IP> --protocol-port 80 mypool + # neutron lb-member-create --address <webserver one IP> --protocol-port 80 mypool +# neutron lb-member-create --address <webserver two IP> --protocol-port 80 mypool @@ -996,13 +996,13 @@ make sure our instances are still running on the specified protocol-port. - $ neutron lb-healthmonitor-create --delay 3 --type HTTP --max-retries 3 --timeout 3 + # neutron lb-healthmonitor-create --delay 3 --type HTTP --max-retries 3 --timeout 3 Associates a health monitor with pool. 
- $ neutron lb-healthmonitor-associate <healthmonitor-uuid> mypool + # neutron lb-healthmonitor-associate <healthmonitor-uuid> mypool @@ -1012,7 +1012,7 @@ directs the requests to one of the pool members. - $ neutron lb-vip-create --name myvip --protocol-port 80 --protocol HTTP --subnet-id <subnet-uuid> mypool + # neutron lb-vip-create --name myvip --protocol-port 80 --protocol HTTP --subnet-id <subnet-uuid> mypool @@ -1363,7 +1363,7 @@ Create a firewall rule: - $ neutron firewall-rule-create --protocol <tcp|udp|icmp|any> --destination-port <port-range> --action <allow|deny> + # neutron firewall-rule-create --protocol <tcp|udp|icmp|any> --destination-port <port-range> --action <allow|deny> The CLI requires that a protocol value be provided. If the rule is protocol agnostic, the 'any' value can be used. @@ -1374,7 +1374,7 @@ Create a firewall policy: - $ neutron firewall-policy-create --firewall-rules "<firewall-rule ids or names separated by space>" myfirewallpolicy + # neutron firewall-policy-create --firewall-rules "<firewall-rule ids or names separated by space>" myfirewallpolicy The order of the rules specified above is important. A firewall policy can be created without any rules and rules can be added later @@ -1394,15 +1394,14 @@ Create a firewall: - $ neutron firewall-create <firewall-policy-uuid> + # neutron firewall-create <firewall-policy-uuid> - The FWaaS features and the above workflow can - also be accessed from the Horizon user interface. - This support is disabled by default, but can be - enabled by configuring - $HORIZON_DIR/openstack_dashboard/local/local_settings.py + The FWaaS features and the above workflow can also be accessed from the + Horizon user interface. This support is disabled by default, but can be enabled + by configuring + #HORIZON_DIR/openstack_dashboard/local/local_settings.py and setting 'enable_firewall' = True @@ -1432,12 +1431,12 @@ Create a port with a specific allowed-address-pairs: - $ neutron port-create net1 --allowed-address-pairs type=dict list=true mac_address=<mac_address>,ip_address=<ip_cidr> + # neutron port-create net1 --allowed-address-pairs type=dict list=true mac_address=<mac_address>,ip_address=<ip_cidr> Update a port adding allowed-address-pairs: - $ neutron port-update <subnet-uuid> --allowed-address-pairs type=dict list=true mac_address=<mac_address>,ip_address=<ip_cidr> + # neutron port-update <subnet-uuid> --allowed-address-pairs type=dict list=true mac_address=<mac_address>,ip_address=<ip_cidr> @@ -1599,7 +1598,7 @@ Creates QoS Queue (admin-only). - $ neutron queue-create--min 10 --max 1000 myqueue + # neutron queue-create--min 10 --max 1000 myqueue @@ -1607,20 +1606,20 @@ Associates a queue with a network. - $ neutron net-create network --queue_id=<queue_id> + # neutron net-create network --queue_id=<queue_id> Creates a default system queue. - $ neutron queue-create --default True --min 10 --max 2000 default + # neutron queue-create --default True --min 10 --max 2000 default Lists QoS queues. - $ neutron queue-list + # neutron queue-list @@ -1628,7 +1627,7 @@ Deletes a QoS queue. 
- $ neutron queue-delete <queue_id or name>' + # neutron queue-delete <queue_id or name>' @@ -1709,21 +1708,22 @@ Nicira NVP L3 extension operations Create external network and map it to a specific NVP gateway service: - $ neutron net-create public --router:external=True --provider:network_type l3_ext \ + # neutron net-create public --router:external=True --provider:network_type l3_ext \ --provider:physical_network <L3-Gateway-Service-UUID> Terminate traffic on a specific VLAN from a NVP gateway service: - $ neutron net-create public --router:external=True --provider:network_type l3_ext \ + # neutron net-create public --router:external=True --provider:network_type l3_ext \ --provider:physical_network <L3-Gateway-Service-UUID> -provider:segmentation_id <VLAN_ID>
- Big Switch Plugin Extensions - The following section explains the Big Switch Neutron plugin-specific extension. + Big Switch plug-in extensions + The following section explains the Big Switch Neutron plug-in-specific + extension.
- Big Switch Router Rules + Big Switch router rules Big Switch allows router rules to be added to each tenant router. These rules can be used to enforce routing policies such as denying traffic between subnets or traffic @@ -1731,7 +1731,7 @@ level, network segmentation policies can be enforced across many VMs that have differing security groups.
- Router Rule Attributes + Router rule attributes Each tenant router has a set of router rules associated with it. Each router rule has the attributes in the following table. Router rules and their @@ -1740,7 +1740,7 @@ via the Horizon interface, or through the Neutron API. - + @@ -1790,7 +1790,7 @@
Big Switch Router Rule AttributesBig Switch router rule attributes
- Order of Rule Processing + Order of rule processing The order of router rules has no effect. Overlapping rules are evaluated using longest prefix matching on the source and destination fields. The source field @@ -1801,7 +1801,7 @@ source.
- Big Switch Router Rules Operations + Big Switch router rules operations Router rules are configured with a router update operation in Neutron. The update overrides any previous rules so all of the rules must be provided at the same @@ -1809,17 +1809,17 @@ Update a router with rules to permit traffic by default but block traffic from external networks to the 10.10.10.0/24 subnet: - $ neutron router-update Router-UUID --router_rules type=dict list=true\ + # neutron router-update Router-UUID --router_rules type=dict list=true\ source=any,destination=any,action=permit \ source=external,destination=10.10.10.0/24,action=deny Specify alternate next-hop addresses for a specific subnet: - $ neutron router-update Router-UUID --router_rules type=dict list=true\ + # neutron router-update Router-UUID --router_rules type=dict list=true\ source=any,destination=any,action=permit \ source=10.10.10.0/24,destination=any,action=permit,nexthops=10.10.10.254+10.10.10.253 Block traffic between two subnets while allowing everything else: - $ neutron router-update Router-UUID --router_rules type=dict list=true\ + # neutron router-update Router-UUID --router_rules type=dict list=true\ source=any,destination=any,action=permit \ source=10.10.10.0/24,destination=10.20.20.20/24,action=deny
diff --git a/doc/admin-guide-cloud/section_networking_adv_operational_features.xml b/doc/admin-guide-cloud/section_networking_adv_operational_features.xml index 8a4e611569..703f18ccf4 100644 --- a/doc/admin-guide-cloud/section_networking_adv_operational_features.xml +++ b/doc/admin-guide-cloud/section_networking_adv_operational_features.xml @@ -17,10 +17,8 @@ Provide logging settings in a logging configuration file. - See Python Logging HOWTO for logging - configuration file. + See Python + logging how-to to learn more about logging. Provide logging setting in diff --git a/doc/common/section_rpc-for-networking.xml b/doc/common/section_rpc-for-networking.xml index a9463d0033..4c95074422 100644 --- a/doc/common/section_rpc-for-networking.xml +++ b/doc/common/section_rpc-for-networking.xml @@ -4,16 +4,17 @@ xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0" xml:id="networking-configuring-rpc"> - Configuring the Oslo RPC Messaging System - - OpenStack projects use an open standard for messaging middleware - known as AMQP. This messaging middleware enables the OpenStack - services which will exist across multiple servers to talk to each other. - OpenStack Oslo RPC supports three implementations of AMQP: - RabbitMQ, - Qpid, and - ZeroMQ + Configuration options for the Oslo RPC Messaging System + Many OpenStack Networking plug-ins use RPC to enable agents to communicate with the main + neutron-server process. If your plugin requires + agents, they can use the same RPC mechanism used by other OpenStack components like Nova. + OpenStack projects use an open standard for messaging middleware known as AMQP. This messaging + middleware enables the OpenStack services which will exist across multiple servers to talk to + each other. OpenStack Oslo RPC supports three implementations of AMQP: + RabbitMQ, Qpid, and + ZeroMQ +
Configuration for RabbitMQ @@ -43,67 +44,50 @@ rpc_backend=neutron.openstack.common.rpc.impl_kombu +
- Configuration for Qpid - - This section discusses the configuration options that are relevant - if Qpid is used as the messaging system for - OpenStack Oslo RPC. Qpid is not the default - messaging system, so it must be enabled by setting the - rpc_backend option in + Configuration for Qpid + This section discusses the configuration options that are relevant if + Qpid is used as the messaging system for OpenStack Oslo RPC. + Qpid is not the default messaging system, so it must be enabled + by setting the rpc_backend option in neutron.conf. - - + rpc_backend=neutron.openstack.common.rpc.impl_qpid - - This next critical option points the compute nodes to the - Qpid broker (server). Set - qpid_hostname in neutron.conf to + This next critical option points the compute nodes to the Qpid + broker (server). Set qpid_hostname in neutron.conf to be the hostname where the broker is running. - - - The --qpid_hostname option accepts a value in - the form of either a hostname or an IP address. - - - + + The --qpid_hostname option accepts a value in the form of either a + hostname or an IP address. + + qpid_hostname=hostname.example.com - - If the Qpid broker is listening on a - port other than the AMQP default of 5672, you will - need to set the qpid_port option: - - + If the Qpid broker is listening on a port other than the AMQP + default of 5672, you will need to set the qpid_port + option: + qpid_port=12345 - - If you configure the Qpid broker to - require authentication, you will need to add a username and password to - the configuration: - - + If you configure the Qpid broker to require authentication, you + will need to add a username and password to the configuration: + qpid_username=username qpid_password=password - - By default, TCP is used as the transport. If you would like to - enable SSL, set the qpid_protocol option: - - + By default, TCP is used as the transport. If you would like to enable SSL, set the + qpid_protocol option: + qpid_protocol=ssl - - The following table lists the rest of the options used by the Qpid - messaging driver for OpenStack Oslo RPC. It is not common that these - options are used. - - - -
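Putting these options together, a neutron.conf fragment for a Qpid deployment might look like the following sketch; the broker hostname and credentials are placeholders:
rpc_backend=neutron.openstack.common.rpc.impl_qpid
qpid_hostname=broker.example.com
qpid_port=5672
qpid_username=username
qpid_password=password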
+ The following table lists the rest of the options used by the Qpid messaging driver for + OpenStack Oslo RPC. It is not common that these options are used. + +
Configuration for ZeroMQ This section discusses the configuration options that are relevant @@ -115,7 +99,7 @@ qpid_protocol=ssl
- Common Configuration for Messaging + Common configuration for messaging This section lists options that are common between the RabbitMQ, Qpid diff --git a/doc/config-reference/networking/section_networking-options-reference.xml b/doc/config-reference/networking/section_networking-options-reference.xml index 0cb6e21916..cdb75b1483 100644 --- a/doc/config-reference/networking/section_networking-options-reference.xml +++ b/doc/config-reference/networking/section_networking-options-reference.xml @@ -3,12 +3,11 @@ xmlns="http://docbook.org/ns/docbook" xmlns:xi="http://www.w3.org/2001/XInclude" xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"> - Networking Configuration Options -These options and descriptions were generated from the code in -the Networking service project which provides software defined networking -between VMs run in Compute. Below are common options, and the sections -following contain information about the various networking plugins and -less-commonly altered sections. + Networking configuration options +The options and descriptions listed in this introduction are autogenerated from the code in + the Networking service project, which provides software-defined networking between VMs run + in Compute. The list contains common options, while the subsections list the options for the + various networking plug-ins. diff --git a/doc/config-reference/networking/section_networking-plugins.xml b/doc/config-reference/networking/section_networking-plugins.xml index 5140c7c072..d06a4d39b7 100644 --- a/doc/config-reference/networking/section_networking-plugins.xml +++ b/doc/config-reference/networking/section_networking-plugins.xml @@ -1,85 +1,81 @@ -
-Networking plugins -OpenStack Networking introduces the concept of a plugin, which is a back-end implementation of - the OpenStack Networking API. A plugin can use a variety of technologies to - implement the logical API requests. Some OpenStack Networking plugins might use - basic Linux VLANs and IP tables, while others might use more advanced + Networking plug-ins + OpenStack Networking introduces the concept of a plug-in, which is a back-end + implementation of the OpenStack Networking API. A plug-in can use a variety of + technologies to implement the logical API requests. Some Networking plug-ins might + use basic Linux VLANs and IP tables, while others might use more advanced technologies, such as L2-in-L3 tunneling or OpenFlow. The following sections - detail the configuration options for the various plugins available. -
-BigSwitch configuration options - -
-
-Brocade Configuration Options - -
-
-CISCO Configuration Options - -
-
-CloudBase Hyper-V Plugin configuration options (deprecated) - -
-
-CloudBase Hyper-V Agent configuration options - -
-
-Linux bridge Plugin configuration options (deprecated) - -
-
-Linux bridge Agent configuration options - -
-
-Mellanox Configuration Options - -
-
-Meta Plugin configuration options -The Meta Plugin allows you to use multiple plugins at the same time. - -
- -
-MidoNet configuration options - -
-
-NEC configuration options - -
-
-Nicira NVP configuration options - -
-
-Open vSwitch Plugin configuration options (deprecated) - -
-
-Open vSwitch Agent configuration options - -
-
-PLUMgrid configuration options - -
-
-Ryu configuration options - -
- + detail the configuration options for the various plug-ins available.
+
+ BigSwitch configuration options + +
+
+ Brocade configuration options + +
+
+ Cisco configuration options + +
+
+ CloudBase Hyper-V plug-in configuration options (deprecated)
+ 
+
+ CloudBase Hyper-V agent configuration options
+ 
+
+ Linux bridge plug-in configuration options (deprecated) + +
+
+ Linux bridge agent configuration options
+ 
+
+ Mellanox configuration options + +
+
+ Meta plug-in configuration options + The meta plug-in allows you to use multiple plug-ins at the same + time. + +
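As a sketch of how the meta plug-in is enabled: it is selected like any other plug-in through the core_plugin option in neutron.conf. The class path below is the one listed for MetaPlugin in the install guide table later in this patch, and the plug-ins it wraps still need their own configuration:
core_plugin = neutron.plugins.metaplugin.meta_neutron_plugin.MetaPluginV2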
+ +
+ MidoNet configuration options + +
+
+ NEC configuration options + +
+
+ Nicira NVP configuration options + +
+
+ Open vSwitch plug-in configuration options (deprecated) + +
+
+ Open vSwitch agent configuration options
+ 
+
+ PLUMgrid configuration options + +
+
+ Ryu configuration options + +
diff --git a/doc/install-guide/section_neutron-install.xml b/doc/install-guide/section_neutron-install.xml index 190add61e3..ce6aa7ab02 100644 --- a/doc/install-guide/section_neutron-install.xml +++ b/doc/install-guide/section_neutron-install.xml @@ -43,60 +43,59 @@ - OpenVSwitch - neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2 - - - LinuxBridge - neutron.plugins.linuxbridge.lb_neutron_plugin.LinuxBridgePluginV2 - - - ml2 - neutron.plugins.ml2.plugin.Ml2Plugin - - - RYU - neutron.plugins.ryu.ryu_neutron_plugin.RyuNeutronPluginV2 - - - PLUMgrid - neutron.plugins.plumgrid.plumgrid_nos_plugin.plumgrid_plugin.NeutronPluginPLUMgridV2 + BigSwitch + neutron.plugins.bigswitch.plugin.NeutronRestProxyV2 Brocade neutron.plugins.brocade.NeutronPlugin.BrocadePluginV2 - - Hyper-V - neutron.plugins.hyperv.hyperv_neutron_plugin.HyperVNeutronPlugin - - - BigSwitch - neutron.plugins.bigswitch.plugin.NeutronRestProxyV2 - Cisco neutron.plugins.cisco.network_plugin.PluginV2 - Midonet - neutron.plugins.midonet.plugin.MidonetPluginV2 + Hyper-V + neutron.plugins.hyperv.hyperv_neutron_plugin.HyperVNeutronPlugin - Nec - neutron.plugins.nec.nec_plugin.NECPluginV2 + LinuxBridge + neutron.plugins.linuxbridge.lb_neutron_plugin.LinuxBridgePluginV2 + + + Mellanox + neutron.plugins.mlnx.mlnx_plugin.MellanoxEswitchPlugin MetaPlugin neutron.plugins.metaplugin.meta_neutron_plugin.MetaPluginV2 - Mellanox - neutron.plugins.mlnx.mlnx_plugin.MellanoxEswitchPlugin + Midonet + neutron.plugins.midonet.plugin.MidonetPluginV2 + + + ml2 + neutron.plugins.ml2.plugin.Ml2Plugin + + + Nec + neutron.plugins.nec.nec_plugin.NECPluginV2 + + + OpenVSwitch + neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2 + + + PLUMgrid + neutron.plugins.plumgrid.plumgrid_nos_plugin.plumgrid_plugin.NeutronPluginPLUMgridV2 + + + RYU + neutron.plugins.ryu.ryu_neutron_plugin.RyuNeutronPluginV2 - Depending on the value of core_plugin, the start-up scripts start the daemons by using the corresponding plug-in configuration file @@ -502,7 +501,8 @@ firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewal # chkconfig neutron-plugin-openvswitch-agent on - Now, return whence you came! + Now, return to the general OVS + instructions.
@@ -633,8 +633,9 @@ bridge_mappings = physnet1:br-DATA_INTERFACE
# neutron router-interface-add EXT_TO_INT_ID DEMO_NET_SUBNET_ID
- Check your plug-ins special options page for remaining
- steps. Then, return whence you came.
+ Check your plug-in's special options page for
+ remaining steps. Then, return to the general
+ OVS instructions.
EXT_TO_INT_ID segmentation id and copy the network type option for any additional networks. - Return whence you came. + Now, return to the general OVS + instructions.
@@ -1087,7 +1089,8 @@ security_group_api=neutron - Now, return whence you came. + Now, return to the general OVS + instructions.
The following diagram shows the set up. For simplicity, all nodes should have one
interface for management traffic and one or more interfaces for traffic to and from
VMs. The management
- network is 100.1.1.0/24 with controller node at 100.1.1.2. The
- example uses the Open vSwitch plug-in and agent.
+ network is 100.1.1.0/24 with controller node at 100.1.1.2. The example uses the Open vSwitch
+ plug-in and agent.
You can modify this set up to make use of another supported plug-in and its agent.
@@ -49,9 +49,19 @@
other node resolves to the IP of the controller node.
- The nova-network service should not be
- running. This is replaced by
- Networking.
+ The nova-network service
+ should not be running. Networking replaces it. To delete a network, use the
+ nova-manage network delete command:
+ # nova-manage network delete --help
+ Usage: nova-manage network delete <args> [options]
+ 
+ Options:
+ -h, --help show this help message and exit
+ --fixed_range=<x.x.x.x/yy>
+ Network to delete
+ --uuid=<uuid> UUID of network to delete
+ Note that a network must first be disassociated from a project
+ using the nova network-disassociate command before it can be
+ deleted.
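As a usage sketch based on the help output and the note above, with NETWORK_UUID standing in for the UUID of the network being removed:
# nova network-disassociate NETWORK_UUID
# nova-manage network delete --uuid=NETWORK_UUID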