<?xml version="1.0" encoding="UTF-8"?>
<chapter xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
xml:id="ch_networking">
<?dbhtml stop-chunking?>
<title>Networking</title>
<para>Learn OpenStack Networking concepts, architecture, and basic and
advanced neutron and nova command-line interface (CLI)
commands.</para>
<section xml:id="section_networking-intro">
<title>Introduction to Networking</title>
<para>The Networking service, code-named Neutron, provides an
API that lets you define network connectivity and addressing in
the cloud. The Networking service enables operators to
leverage different networking technologies to power their
cloud networking. The Networking service also provides an
API to configure and manage a variety of network services
ranging from L3 forwarding and NAT to load balancing, edge
firewalls, and IPsec VPN.</para>
<para>For a detailed description of the Networking API
abstractions and their attributes, see the <link
xlink:href="http://docs.openstack.org/api/openstack-network/2.0/content/"
><citetitle>OpenStack Networking API v2.0
Reference</citetitle></link>.</para>
<section xml:id="section_networking-api">
<title>Networking API</title>
<para>Networking is a virtual network service that
provides a powerful API to define the network
connectivity and IP addressing used by devices from
other services, such as Compute.</para>
<para>The Compute API has a virtual server abstraction to
describe computing resources. Similarly, the
Networking API has virtual network, subnet, and port
abstractions to describe networking resources.</para>
<table rules="all">
<caption>Networking resources</caption>
<col width="10%"/>
<col width="90%"/>
<thead>
<tr>
<th>Resource</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td><emphasis role="bold"
>Network</emphasis></td>
<td>An isolated L2 segment, analogous to VLAN
in the physical networking world.</td>
</tr>
<tr>
<td><emphasis role="bold"
>Subnet</emphasis></td>
<td>A block of v4 or v6 IP addresses and
associated configuration state.</td>
</tr>
<tr>
<td><emphasis role="bold">Port</emphasis></td>
<td>A connection point for attaching a single
device, such as the NIC of a virtual
server, to a virtual network. Also
describes the associated network
configuration, such as the MAC and IP
addresses to be used on that port.</td>
</tr>
</tbody>
</table>
<para>You can configure rich network topologies by
creating and configuring networks and subnets, and
then instructing other OpenStack services like Compute
to attach virtual devices to ports on these
networks.</para>
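<para>For example, a tenant might create a network and a subnet,
and then boot an instance with a NIC on that network. The
network name, CIDR, image, flavor, and network ID below are
only placeholders:</para>
<screen><prompt>$</prompt> <userinput>neutron net-create net1</userinput>
<prompt>$</prompt> <userinput>neutron subnet-create net1 10.0.0.0/24</userinput>
<prompt>$</prompt> <userinput>nova boot --image <replaceable>IMAGE</replaceable> --flavor <replaceable>FLAVOR</replaceable> --nic net-id=<replaceable>NET-ID</replaceable> vm1</userinput></screen>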
<para>In particular, Networking supports each tenant
having multiple private networks, and allows tenants
to choose their own IP addressing scheme (even if
those IP addresses overlap with those used by other
tenants). The Networking service:</para>
<itemizedlist>
<listitem>
<para>Enables advanced cloud networking use cases,
such as building multi-tiered web applications
and allowing applications to be migrated to
the cloud without changing IP
addresses.</para>
</listitem>
<listitem>
<para>Offers flexibility for the cloud
administrator to customize network
offerings.</para>
</listitem>
<listitem>
<para>Enables developers to extend the Networking
API. Over time, the extended functionality
becomes part of the core Networking
API.</para>
</listitem>
</itemizedlist>
</section>
<section xml:id="section_plugin-arch">
<title>Plug-in architecture</title>
<para>The original Compute network implementation assumed
a basic model of isolation through Linux VLANs and IP
tables. Networking introduces the concept of a
<emphasis role="italic">plug-in</emphasis>, which
is a back-end implementation of the Networking API. A
plug-in can use a variety of technologies to implement
the logical API requests. Some Networking plug-ins
might use basic Linux VLANs and IP tables, while
others might use more advanced technologies, such as
L2-in-L3 tunneling or OpenFlow, to provide similar
benefits.</para>
<table rules="all">
<caption>Available networking plug-ins</caption>
<col width="40%"/>
<col width="60%"/>
<thead>
<tr>
<th>Plug-in</th>
<th>Documentation</th>
</tr>
</thead>
<tbody>
<tr>
<td><emphasis role="bold">Big Switch Plug-in
(Floodlight REST
Proxy)</emphasis></td>
<td>Documentation included in this guide and
<link
xlink:href="http://www.openflowhub.org/display/floodlightcontroller/Neutron+REST+Proxy+Plugin"
>http://www.openflowhub.org/display/floodlightcontroller/Neutron+REST+Proxy+Plugin</link>
</td>
</tr>
<tr>
<td><emphasis role="bold">Brocade
Plug-in</emphasis></td>
<td>Documentation included in this guide</td>
</tr>
<tr>
<td><emphasis role="bold"
>Cisco</emphasis></td>
<td><link
xlink:href="http://wiki.openstack.org/cisco-neutron"
>http://wiki.openstack.org/cisco-neutron</link></td>
</tr>
<tr>
<td><emphasis role="bold">Cloudbase Hyper-V
Plug-in</emphasis></td>
<td><link
xlink:href="http://www.cloudbase.it/quantum-hyper-v-plugin/"
>http://www.cloudbase.it/quantum-hyper-v-plugin/</link></td>
</tr>
<tr>
<td><emphasis role="bold">Linux Bridge
Plug-in</emphasis></td>
<td><link
xlink:href="http://wiki.openstack.org/Quantum-Linux-Bridge-Plugin"
>http://wiki.openstack.org/Quantum-Linux-Bridge-Plugin</link></td>
</tr>
<tr>
<td><emphasis role="bold">Mellanox
Plug-in</emphasis></td>
<td><link
xlink:href="https://wiki.openstack.org/wiki/Mellanox-Neutron/"
>https://wiki.openstack.org/wiki/Mellanox-Neutron/</link></td>
</tr>
<tr>
<td><emphasis role="bold">Midonet
Plug-in</emphasis></td>
<td><link
xlink:href="http://www.midokura.com/"
>http://www.midokura.com/</link></td>
</tr>
<tr>
<td><emphasis role="bold">ML2 (Modular Layer
2) Plug-in</emphasis></td>
<td><link
xlink:href="https://wiki.openstack.org/wiki/Neutron/ML2"
>https://wiki.openstack.org/wiki/Neutron/ML2</link></td>
</tr>
<tr>
<td><emphasis role="bold">NEC OpenFlow
Plug-in</emphasis></td>
<td><link
xlink:href="http://wiki.openstack.org/Quantum-NEC-OpenFlow-Plugin"
>http://wiki.openstack.org/Quantum-NEC-OpenFlow-Plugin</link></td>
</tr>
<tr>
<td><emphasis role="bold">Nicira NVP
Plug-in</emphasis></td>
<td>Documentation included in this guide as
well as in <link
xlink:href="http://www.vmware.com/products/datacenter-virtualization/nicira.html"
>NVP Product Overview</link>, <link
xlink:href="http://www.nicira.com/support"
>NVP Product Support</link></td>
</tr>
<tr>
<td><emphasis role="bold">Open vSwitch
Plug-in</emphasis></td>
<td>Documentation included in this guide.</td>
</tr>
<tr>
<td><emphasis role="bold"
>PLUMgrid</emphasis></td>
<td>Documentation included in this guide as
well as in <link
xlink:href="https://wiki.openstack.org/wiki/PLUMgrid-Neutron"
>https://wiki.openstack.org/wiki/PLUMgrid-Neutron</link></td>
</tr>
<tr>
<td><emphasis role="bold">Ryu
Plug-in</emphasis></td>
<td>Documentation included in this guide as
well as in <link
xlink:href="https://github.com/osrg/ryu/wiki/OpenStack"
>https://github.com/osrg/ryu/wiki/OpenStack</link></td>
</tr>
</tbody>
</table>
<para>Plug-ins can have different properties for hardware
requirements, features, performance, scale, or
operator tools. Because Networking supports a large
number of plug-ins, the cloud administrator can weigh
options to decide on the right networking technology
for the deployment.</para>
<para>In the Havana release, OpenStack Networking provides
the <firstterm>Modular Layer 2
(ML2)</firstterm> plug-in that can concurrently use
multiple layer 2 networking technologies that are
found in real-world data centers. It currently works
with the existing Open vSwitch, Linux Bridge, and
Hyper-V L2 agents. The ML2 framework simplifies the
addition of support for new L2 technologies and
reduces the effort that is required to add and
maintain them compared to monolithic plug-ins.</para>
<note>
<title>Plug-in deprecation notice:</title>
<para>The Open vSwitch and Linux Bridge plug-ins are
deprecated in the Havana release and will be
removed in the Icehouse release. All features have
been ported to the ML2 plug-in in the form of
mechanism drivers. ML2 currently provides Linux
Bridge, Open vSwitch, and Hyper-V mechanism
drivers.</para>
</note>
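<para>For illustration, a minimal ML2 configuration that uses GRE
tenant networks with the Open vSwitch mechanism driver might
look like the following sketch. The exact option values
depend on your deployment; see the
<citetitle>Configuration Reference</citetitle> for the
authoritative list:</para>
<programlisting language="ini"># /etc/neutron/neutron.conf
core_plugin = neutron.plugins.ml2.plugin.Ml2Plugin

# /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
type_drivers = gre
tenant_network_types = gre
mechanism_drivers = openvswitch

[ml2_type_gre]
tunnel_id_ranges = 1:1000</programlisting>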
<para>Not all Networking plug-ins are compatible with all
possible Compute drivers:</para>
<table rules="all">
<caption>Plug-in compatibility with Compute
drivers</caption>
<thead>
<tr>
<th>Plug-in</th>
<th>Libvirt (KVM/QEMU)</th>
<th>XenServer</th>
<th>VMware</th>
<th>Hyper-V</th>
<th>Bare-metal</th>
<th>PowerVM</th>
</tr>
</thead>
<tbody>
<tr>
<td>Big Switch / Floodlight</td>
<td>Yes</td>
<td/>
<td/>
<td/>
<td/>
<td/>
</tr>
<tr>
<td>Brocade</td>
<td>Yes</td>
<td/>
<td/>
<td/>
<td/>
<td/>
</tr>
<tr>
<td>Cisco</td>
<td>Yes</td>
<td/>
<td/>
<td/>
<td/>
<td/>
</tr>
<tr>
<td>Cloudbase Hyper-V</td>
<td/>
<td/>
<td/>
<td>Yes</td>
<td/>
<td/>
</tr>
<tr>
<td>Linux Bridge</td>
<td>Yes</td>
<td/>
<td/>
<td/>
<td/>
<td/>
</tr>
<tr>
<td>Mellanox</td>
<td>Yes</td>
<td/>
<td/>
<td/>
<td/>
<td/>
</tr>
<tr>
<td>Midonet</td>
<td>Yes</td>
<td/>
<td/>
<td/>
<td/>
<td/>
</tr>
<tr>
<td>ML2</td>
<td>Yes</td>
<td/>
<td/>
<td>Yes</td>
<td/>
<td/>
</tr>
<tr>
<td>NEC OpenFlow</td>
<td>Yes</td>
<td/>
<td/>
<td/>
<td/>
<td/>
</tr>
<tr>
<td>Nicira NVP</td>
<td>Yes</td>
<td>Yes</td>
<td>Yes</td>
<td/>
<td/>
<td/>
</tr>
<tr>
<td>Open vSwitch</td>
<td>Yes</td>
<td/>
<td/>
<td/>
<td/>
<td/>
</tr>
<tr>
<td>Plumgrid</td>
<td>Yes</td>
<td/>
<td>Yes</td>
<td/>
<td/>
<td/>
</tr>
<tr>
<td>Ryu</td>
<td>Yes</td>
<td/>
<td/>
<td/>
<td/>
<td/>
</tr>
</tbody>
</table>
<section xml:id="section_plugin-config">
<title>Plug-in configurations</title>
<para>For configuration options, see <link
xlink:href="http://docs.openstack.org/havana/config-reference/content/section_networking-options-reference.html"
>Networking configuration options</link> in
<citetitle>Configuration
Reference</citetitle>. These sections explain how
to configure specific plug-ins.</para>
<section xml:id="bigswitch_floodlight_plugin">
<title>Configure Big Switch, Floodlight REST Proxy
plug-in</title>
<procedure>
<title>To use the REST Proxy plug-in with
OpenStack Networking</title>
<step>
<para>Edit
<filename>/etc/neutron/neutron.conf</filename>
and set:</para>
<programlisting language="ini">core_plugin = neutron.plugins.bigswitch.plugin.NeutronRestProxyV2</programlisting>
</step>
<step>
<para>Edit the plug-in configuration file,
<filename>/etc/neutron/plugins/bigswitch/restproxy.ini</filename>,
and specify a comma-separated list of
<systemitem>controller_ip:port</systemitem>
pairs:
<programlisting language="ini">server = &lt;controller-ip&gt;:&lt;port&gt;</programlisting>
For database configuration, see <link
xlink:href="http://docs.openstack.org/havana/install-guide/install/apt/content/neutron-install-network-node.html"
>Install Networking Services</link>
in any of the <citetitle>Installation
Guides</citetitle> in the <link
xlink:href="http://docs.openstack.org"
>OpenStack Documentation
index</link>. (The link defaults to
the Ubuntu version.)</para>
</step>
<step>
<para>To apply the new settings, restart
<systemitem class="service"
>neutron-server</systemitem>:</para>
<screen><prompt>#</prompt> <userinput>sudo service neutron-server restart</userinput></screen>
</step>
</procedure>
</section>
<section xml:id="brocade_plugin">
<title>Configure Brocade plug-in</title>
<procedure>
<title>To use the Brocade plug-in with
OpenStack Networking</title>
<step>
<para>Install the Brocade-modified Python
netconf client (ncclient) library, which is available
at <link
xlink:href="https://github.com/brocade/ncclient">https://github.com/brocade/ncclient</link>:
<screen><prompt>$</prompt> <userinput>git clone https://www.github.com/brocade/ncclient</userinput>
<prompt>$</prompt> <userinput>cd ncclient; sudo python ./setup.py install</userinput></screen>
</para>
</step>
<step>
<para>Edit the
<filename>/etc/neutron/neutron.conf</filename>
file and set the following option:</para>
<programlisting language="ini">core_plugin = neutron.plugins.brocade.NeutronPlugin.BrocadePluginV2</programlisting>
</step>
<step>
<para>Edit the
<filename>/etc/neutron/plugins/brocade/brocade.ini</filename>
configuration file for the Brocade plug-in
and specify the admin user name, password,
and IP address of the Brocade switch:
</para>
<programlisting language="ini">[SWITCH]
username = <replaceable>admin</replaceable>
password = <replaceable>password</replaceable>
address = <replaceable>switch mgmt ip address</replaceable>
ostype = NOS</programlisting>
<para>
For database configuration, see <link
xlink:href="http://docs.openstack.org/havana/install-guide/install/apt/content/neutron-install-network-node.html"
>Install Networking Services</link>
in any of the <citetitle>Installation
Guides</citetitle> in the <link
xlink:href="http://docs.openstack.org"
>OpenStack Documentation
index</link>. (The link defaults to
the Ubuntu version.)</para>
</step>
<step>
<para>To apply the new settings, restart the
<systemitem class="service"
>neutron-server</systemitem> service:
</para>
<screen><prompt>#</prompt> <userinput>service neutron-server restart</userinput></screen>
</step>
</procedure>
</section>
<section xml:id="openvswitch_plugin">
<title>Configure OVS plug-in</title>
<para>If you use the Open vSwitch (OVS) plug-in in
a deployment with multiple hosts, you must
use either tunneling or VLANs to
isolate traffic from multiple networks.
Tunneling is easier to deploy because it does
not require configuring VLANs on network
switches.</para>
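<para>A VLAN-based deployment uses different options in
<filename>ovs_neutron_plugin.ini</filename>. A sketch of
such a configuration, in which the physical network name,
VLAN range, and bridge mapping are placeholders, might
look like this:</para>
<programlisting language="ini">tenant_network_type = vlan
network_vlan_ranges = physnet1:100:299
bridge_mappings = physnet1:br-eth1</programlisting>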
<para>This procedure uses tunneling:</para>
<procedure>
<title>To configure OpenStack Networking to
use the OVS plug-in</title>
<step>
<para>Edit
<filename>/etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini
</filename> to specify these values
(for database configuration, see <link
xlink:href="http://docs.openstack.org/havana/install-guide/install/apt/content/neutron-install-network-node.html"
>Install Networking Services</link>
in <citetitle>Installation
Guide</citetitle>):</para>
<programlisting language="ini">enable_tunneling=True
tenant_network_type=gre
tunnel_id_ranges=1:1000
# only required for nodes running agents
local_ip=&lt;data-net-IP-address-of-node&gt;</programlisting>
</step>
<step>
<para>If you use the neutron DHCP agent,
add these lines to the
<filename>/etc/neutron/dhcp_agent.ini</filename>
file:</para>
<programlisting language="ini">dnsmasq_config_file=/etc/neutron/dnsmasq-neutron.conf</programlisting>
</step>
<step>
<para>Create
<filename>/etc/neutron/dnsmasq-neutron.conf</filename>,
and add these values to lower the MTU
size on instances and prevent packet
fragmentation over the GRE
tunnel:</para>
<programlisting language="ini">dhcp-option-force=26,1400</programlisting>
</step>
<step>
<para>After performing that change on the
node running <systemitem
class="service"
>neutron-server</systemitem>,
restart <systemitem class="service"
>neutron-server</systemitem> to
apply the new settings:</para>
<screen><prompt>#</prompt> <userinput>sudo service neutron-server restart</userinput></screen>
</step>
</procedure>
</section>
<section xml:id="nvp_plugin">
<title>Configure Nicira NVP plug-in</title>
<procedure>
<title>To configure OpenStack Networking to
use the NVP plug-in</title>
<para>While the instructions in this section refer to the Nicira NVP
platform, they also apply to VMware NSX.</para>
<step>
<para>Install the NVP plug-in, as
follows:</para>
<screen><prompt>#</prompt> <userinput>sudo apt-get install neutron-plugin-nicira</userinput></screen>
</step>
<step>
<para>Edit
<filename>/etc/neutron/neutron.conf</filename>
and set:</para>
<programlisting language="ini">core_plugin = neutron.plugins.nicira.NeutronPlugin.NvpPluginV2</programlisting>
<para>Example
<filename>neutron.conf</filename>
file for NVP:</para>
<programlisting language="ini">core_plugin = neutron.plugins.nicira.NeutronPlugin.NvpPluginV2
rabbit_host = 192.168.203.10
allow_overlapping_ips = True</programlisting>
</step>
<step>
<para>To configure the NVP controller cluster for OpenStack
Networking, locate the <literal>[DEFAULT]</literal> section
in the <filename>/etc/neutron/plugins/nicira/nvp.ini</filename>
file, and add the following entries (for database configuration, see
<link
xlink:href="http://docs.openstack.org/havana/install-guide/install/apt/content/neutron-install-network-node.html"
>Install Networking Services</link> in <citetitle>Installation
Guide</citetitle>): <itemizedlist>
<listitem>
<para>A set of parameters is required to establish and configure
the connection with the controller cluster. These
parameters include NVP API endpoints, access
credentials, and settings for HTTP redirects and retries
in case of connection
failure:<programlisting>nvp_user = &lt;admin user name>
nvp_password = &lt;password for nvp_user>
req_timeout = &lt;timeout in seconds for NVP requests> # default 30 seconds
http_timeout = &lt;timeout in seconds for single HTTP request> # default 10 seconds
retries = &lt;number of HTTP request retries> # default 2
redirects = &lt;maximum allowed redirects for a HTTP request> # default 3
nvp_controllers = &lt;comma separated list of API endpoints></programlisting></para>
<para>To ensure correct operation,
<literal>nvp_user</literal> should be a user with
administrator credentials on the NVP platform.</para>
<para>A controller API endpoint consists of the
controller's IP address and port; if the port is
omitted, port 443 is used. If multiple API
endpoints are specified, it is up to the user to ensure
that all these endpoints belong to the same controller
cluster. The OpenStack Networking Nicira NVP plug-in does
not perform this check, and results might be
unpredictable.</para>
<para>When multiple API endpoints are specified, the plug-in
load balances requests across the various API
endpoints.</para>
</listitem>
<listitem>
<para>The UUID of the NVP Transport Zone that should be used
by default when a tenant creates a network. This value
can be retrieved from the NVP Manager's Transport Zones
page:</para>
<programlisting language="ini">default_tz_uuid = &lt;uuid_of_the_transport_zone&gt;</programlisting>
</listitem>
<listitem>
<para>The UUID of the NVP L3 Gateway Service to use by
default when a tenant creates a router:</para>
<programlisting language="ini">default_l3_gw_service_uuid = &lt;uuid_of_the_gateway_service&gt;</programlisting>
<warning>
<para>Ubuntu packaging currently does not update the
neutron init script to point to the NVP
configuration file. Instead, you must manually
update
<filename>/etc/default/neutron-server</filename>
with the following:</para>
<programlisting language="ini">NEUTRON_PLUGIN_CONFIG = /etc/neutron/plugins/nicira/nvp.ini</programlisting>
</warning>
</listitem>
</itemizedlist></para>
</step>
<step>
<para>To apply the new settings, restart
<systemitem class="service"
>neutron-server</systemitem>:</para>
<screen><prompt>#</prompt> <userinput>sudo service neutron-server restart</userinput></screen>
</step>
</procedure>
<para>Example <filename>nvp.ini</filename>
file:</para>
<programlisting language="ini">[DEFAULT]
default_tz_uuid = d3afb164-b263-4aaa-a3e4-48e0e09bb33c
default_l3_gw_service_uuid=5c8622cc-240a-40a1-9693-e6a5fca4e3cf
nvp_user=admin
nvp_password=changeme
nvp_controllers=10.127.0.100,10.127.0.200:8888</programlisting>
<note>
<para>To debug <filename>nvp.ini</filename>
configuration issues, run this command
from the host that runs <systemitem
class="service"
>neutron-server</systemitem>:</para>
<screen><prompt>#</prompt> <userinput>check-nvp-config &lt;path/to/nvp.ini&gt;</userinput></screen>
<para>This command tests whether <systemitem
class="service"
>neutron-server</systemitem> can log
into all of the NVP Controllers and the
SQL server, and whether all UUID values
are correct.</para>
</note>
<section xml:id="LBaaS_and_FWaaS">
<title>Loadbalancer-as-a-Service and Firewall-as-a-Service</title>
<para>The NVP LBaaS and FWaaS services use the standard OpenStack API with the exception of requiring routed-insertion extension support.</para>
<para>Below are the main differences between the NVP implementation and the community reference implementation of these services:</para>
<orderedlist>
<listitem>
<para>The NVP LBaaS and FWaaS plugins require the routed-insertion extension, which adds the <code>router_id</code> attribute to the VIP (Virtual IP address) and firewall resources and binds these services to a logical router.</para>
</listitem>
<listitem>
<para>The community reference implementation of LBaaS only supports a one-arm model, which restricts the VIP to be on the same subnet as the back-end servers. The NVP LBaaS plugin only supports a two-arm model for north-south traffic, which means that the VIP can only be created on the external (physical) network.</para>
</listitem>
<listitem>
<para>The community reference implementation of FWaaS applies firewall rules to all logical routers in a tenant, while the NVP FWaaS plugin applies firewall rules only to one logical router according to the <code>router_id</code> of the firewall entity.</para>
</listitem>
</orderedlist>
<procedure>
<title>To configure Loadbalancer-as-a-Service and Firewall-as-a-Service with NVP:</title>
<step>
<para>Edit <filename>/etc/neutron/neutron.conf</filename> file:</para>
<programlisting language="ini">core_plugin = neutron.plugins.nicira.NeutronServicePlugin.NvpAdvancedPlugin
# Note: comment out service_plugins. LBaaS &amp; FWaaS is supported by core_plugin NvpAdvancedPlugin
# service_plugins = </programlisting>
</step>
<step>
<para>Edit <filename>/etc/neutron/plugins/nicira/nvp.ini</filename> file:</para>
<para>In addition to the original NVP configuration, the <code>default_l3_gw_service_uuid</code>
is required for the NVP Advanced Plugin and a <code>vcns</code> section must be added as
shown below.</para>
<programlisting language="ini">[DEFAULT]
nvp_password = <replaceable>admin</replaceable>
nvp_user = <replaceable>admin</replaceable>
nvp_controllers = <replaceable>10.37.1.137:443</replaceable>
default_l3_gw_service_uuid = <replaceable>aae63e9b-2e4e-4efe-81a1-92cf32e308bf</replaceable>
default_tz_uuid = <replaceable>2702f27a-869a-49d1-8781-09331a0f6b9e</replaceable>
[vcns]
# VSM management URL
manager_uri = <replaceable>https://10.24.106.219</replaceable>
# VSM admin user name
user = <replaceable>admin</replaceable>
# VSM admin password
password = <replaceable>default</replaceable>
# UUID of a logical switch on NVP which has physical network connectivity (currently using bridge transport type)
external_network = <replaceable>f2c023cf-76e2-4625-869b-d0dabcfcc638</replaceable>
# ID of deployment_container on VSM. Optional, if not specified, a default global deployment container will be used
# deployment_container_id =
# task_status_check_interval configures status check interval for vCNS asynchronous API. Default is 2000 msec.
# task_status_check_interval =</programlisting>
</step>
</procedure>
</section>
</section>
<section xml:id="PLUMgridplugin">
<title>Configure PLUMgrid plug-in</title>
<procedure>
<title>To use the PLUMgrid plug-in with
OpenStack Networking</title>
<step>
<para>Edit
<filename>/etc/neutron/neutron.conf</filename>
and set:</para>
<programlisting language="ini">core_plugin = neutron.plugins.plumgrid.plumgrid_plugin.plumgrid_plugin.NeutronPluginPLUMgridV2</programlisting>
</step>
<step>
<para>Edit
<filename>/etc/neutron/plugins/plumgrid/plumgrid.ini</filename>
under the
<systemitem>[PLUMgridDirector]</systemitem>
section, and specify the IP address,
port, admin user name, and password of
the PLUMgrid Director:
<programlisting language="ini">[PLUMgridDirector]
director_server = "PLUMgrid-director-ip-address"
director_server_port = "PLUMgrid-director-port"
username = "PLUMgrid-director-admin-username"
password = "PLUMgrid-director-admin-password"</programlisting>
For database configuration, see <link
xlink:href="http://docs.openstack.org/havana/install-guide/install/apt/content/neutron-install-network-node.html"
>Install Networking Services</link>
in <citetitle>Installation
Guide</citetitle>.</para>
</step>
<step>
<para>To apply the settings, restart
<systemitem class="service"
>neutron-server</systemitem>:</para>
<screen><prompt>#</prompt> <userinput>sudo service neutron-server restart</userinput></screen>
</step>
</procedure>
</section>
<section xml:id="ryu_plugin">
<title>Configure Ryu plug-in</title>
<procedure>
<title>To use the Ryu plug-in with OpenStack
Networking</title>
<step>
<para>Install the Ryu plug-in, as
follows:</para>
<screen><prompt>#</prompt> <userinput>sudo apt-get install neutron-plugin-ryu</userinput> </screen>
</step>
<step>
<para>Edit
<filename>/etc/neutron/neutron.conf</filename>
and set:</para>
<programlisting language="ini">core_plugin = neutron.plugins.ryu.ryu_neutron_plugin.RyuNeutronPluginV2</programlisting>
</step>
<step>
<para>Edit
<filename>/etc/neutron/plugins/ryu/ryu.ini</filename>
(for database configuration, see <link
xlink:href="http://docs.openstack.org/havana/install-guide/install/apt/content/neutron-install-network-node.html"
>Install Networking Services</link>
in <citetitle>Installation
Guide</citetitle>), and update the
following in the
<systemitem>[ovs]</systemitem>
section for the
<systemitem>ryu-neutron-agent</systemitem>: <itemizedlist>
<listitem>
<para>The
<systemitem>openflow_rest_api</systemitem>
is used to tell where Ryu is
listening for REST API. Substitute
<systemitem>ip-address</systemitem>
and
<systemitem>port-no</systemitem>
based on your Ryu setup.</para>
</listitem>
<listitem>
<para>The
<literal>ovsdb_interface</literal>
is used for Ryu to access the
<systemitem>ovsdb-server</systemitem>.
Substitute eth0 based on your set
up. The IP address is derived from
the interface name. If you want to
change this value irrespective of
the interface name,
<systemitem>ovsdb_ip</systemitem>
can be specified. If you use a
non-default port for
<systemitem>ovsdb-server</systemitem>,
it can be specified by
<systemitem>ovsdb_port</systemitem>.</para>
</listitem>
<listitem>
<para><systemitem>tunnel_interface</systemitem>
needs to be set to tell what IP
address is used for tunneling (if
tunneling isn't used, this value is
ignored). The IP address is derived
from the network interface
name.</para>
</listitem>
</itemizedlist></para>
<para>You can use the same configuration
file for many Compute nodes by using a
network interface name with a
different IP address:</para>
<programlisting language="ini">openflow_rest_api = &lt;ip-address&gt;:&lt;port-no&gt; ovsdb_interface = &lt;eth0&gt; tunnel_interface = &lt;eth0&gt;</programlisting>
</step>
<step>
<para>To apply the new settings, restart
<systemitem class="service"
>neutron-server</systemitem>:</para>
<screen><prompt>#</prompt> <userinput>sudo service neutron-server restart</userinput></screen>
</step>
</procedure>
</section>
</section>
</section>
<section xml:id="install_neutron_agent">
<title>Configure neutron agents</title>
<para>Plug-ins typically have requirements for particular
software that must be run on each node that handles
data packets. This includes any node that runs
<systemitem class="service"
>nova-compute</systemitem> and nodes that run
dedicated OpenStack Networking service agents, such as
<systemitem>neutron-dhcp-agent</systemitem>,
<systemitem>neutron-l3-agent</systemitem>, or
<systemitem>neutron-lbaas-agent</systemitem> (see
below for more information about individual service
agents).</para>
<para>A data-forwarding node typically has a network
interface with an IP address on the “management
network” and another interface on the “data
network”.</para>
<para>This section shows you how to install and configure
a subset of the available plug-ins, which may include
the installation of switching software (for example,
Open vSwitch) as well as agents used to communicate
with the <systemitem class="service"
>neutron-server</systemitem> process running
elsewhere in the data center.</para>
<section xml:id="config_neutron_data_fwd_node">
<title>Configure data-forwarding nodes</title>
<section xml:id="install_neutron_agent_ovs">
<title>Node set up: OVS plug-in</title>
<para>
<note>
<para>This section also applies to the ML2 plugin when Open vSwitch is
used as a mechanism driver.</para>
</note>If you use the Open vSwitch plug-in, you must install Open vSwitch
and the <systemitem>neutron-plugin-openvswitch-agent</systemitem> agent on
each data-forwarding node:</para>
<warning>
<para>Do not install the openvswitch-brcompat
package as it breaks the security groups
functionality.</para>
</warning>
<procedure>
<title>To set up each node for the OVS
plug-in</title>
<step>
<para>Install the OVS agent package (this
pulls in the Open vSwitch software as
a dependency):</para>
<screen><prompt>#</prompt> <userinput>sudo apt-get install neutron-plugin-openvswitch-agent</userinput></screen>
</step>
<step>
<para>On each node that runs the
<systemitem>neutron-plugin-openvswitch-agent</systemitem>:</para>
<itemizedlist>
<listitem>
<para>Replicate the
<filename>ovs_neutron_plugin.ini</filename>
file created in the first step onto
the node.</para>
</listitem>
<listitem>
<para>If using tunneling, the
node's
<filename>ovs_neutron_plugin.ini</filename>
file must also be updated with the
node's IP address configured on the
data network using the
<systemitem>local_ip</systemitem>
value.</para>
</listitem>
</itemizedlist>
</step>
<step>
<para>Restart Open vSwitch to properly
load the kernel module:</para>
<screen><prompt>#</prompt> <userinput>sudo service openvswitch-switch restart</userinput></screen>
</step>
<step>
<para>Restart the agent:</para>
<screen><prompt>#</prompt> <userinput>sudo service neutron-plugin-openvswitch-agent restart</userinput></screen>
</step>
<step>
<para>All nodes that run
<systemitem>neutron-plugin-openvswitch-agent</systemitem>
must have an OVS
<literal>br-int</literal> bridge.
To create the bridge, run:</para>
<screen><prompt>#</prompt> <userinput>sudo ovs-vsctl add-br br-int</userinput></screen>
</step>
</procedure>
</section>
<section xml:id="install_neutron_agent_nvp">
<title>Node set up: Nicira NVP plug-in</title>
<para>If you use the Nicira NVP plug-in, you must
also install Open vSwitch on each
data-forwarding node. However, you do not need
to install an additional agent on each
node.</para>
<warning>
<para>It is critical that you are running an
Open vSwitch version that is compatible
with the current version of the NVP
Controller software. Do not use the Open
vSwitch version that is installed by
default on Ubuntu. Instead, use the Open
Vswitch version that is provided on the
Nicira support portal for your NVP
Controller version.</para>
</warning>
<procedure>
<title>To set up each node for the Nicira NVP
plug-in</title>
<step>
<para>Ensure each data-forwarding node has
an IP address on the "management
network," and an IP address on the
"data network" that is used for
tunneling data traffic. For full
details on configuring your forwarding
node, see the <citetitle>NVP
Administrator
Guide</citetitle>.</para>
</step>
<step>
<para>Use the <citetitle>NVP Administrator
Guide</citetitle> to add the node
as a "Hypervisor" using the NVP
Manager GUI. Even if your forwarding
node has no VMs and is only used for
services agents like
<systemitem>neutron-dhcp-agent</systemitem>
or
<systemitem>neutron-lbaas-agent</systemitem>,
it should still be added to NVP as a
Hypervisor.</para>
</step>
<step>
<para>After following the <citetitle>NVP
Administrator Guide</citetitle>,
use the page for this Hypervisor in
the NVP Manager GUI to confirm that
the node is properly connected to the
NVP Controller Cluster and that the
NVP Controller Cluster can see the
<literal>br-int</literal>
integration bridge.</para>
</step>
</procedure>
</section>
<section xml:id="install_neutron_agent_ryu">
<title>Node set up: Ryu plug-in</title>
<para>If you use the Ryu plug-in, you must install
both Open vSwitch and Ryu, in addition to the
Ryu agent package:</para>
<procedure>
<title>To set up each node for the Ryu
plug-in</title>
<step>
<para>Install Ryu (there is currently no
Ryu package for Ubuntu):</para>
<screen><prompt>#</prompt> <userinput>sudo pip install ryu</userinput></screen>
</step>
<step>
<para>Install the Ryu agent and Open
vSwitch packages:</para>
<screen><prompt>#</prompt> <userinput>sudo apt-get install neutron-plugin-ryu-agent openvswitch-switch python-openvswitch openvswitch-datapath-dkms</userinput></screen>
</step>
<step>
<para>Replicate the
<filename>ovs_ryu_plugin.ini</filename>
and <filename>neutron.conf</filename>
files created in the above step on all
nodes running
<systemitem>neutron-plugin-ryu-agent</systemitem>.
</para>
</step>
<step>
<para>Restart Open vSwitch to properly
load the kernel module:</para>
<screen><prompt>#</prompt> <userinput>sudo service openvswitch-switch restart</userinput></screen>
</step>
<step>
<para>Restart the agent:</para>
<screen><prompt>#</prompt> <userinput>sudo service neutron-plugin-ryu-agent restart</userinput> </screen>
</step>
<step>
<para>All nodes running
<systemitem>neutron-plugin-ryu-agent</systemitem>
also require that an OVS bridge named
"br-int" exists on each node. To
create the bridge, run:</para>
<screen><prompt>#</prompt> <userinput>sudo ovs-vsctl add-br br-int</userinput></screen>
</step>
</procedure>
</section>
</section>
<section xml:id="install_neutron_dhcp">
<title>Configure DHCP agent</title>
<para>The DHCP service agent is compatible with all
existing plug-ins and is required for all
deployments where VMs should automatically receive
IP addresses through DHCP.</para>
<procedure>
<title>To install and configure the DHCP
agent</title>
<step>
<para>You must configure the host running the
<systemitem>neutron-dhcp-agent</systemitem>
as a "data forwarding node" according to
the requirements for your plug-in (see
<xref linkend="install_neutron_agent"
/>).</para>
</step>
<step>
<para>Install the DHCP agent:</para>
<screen><prompt>#</prompt> <userinput>sudo apt-get install neutron-dhcp-agent</userinput></screen>
</step>
<step>
<para>Finally, update any options in the
<filename>/etc/neutron/dhcp_agent.ini</filename>
file that depend on the plug-in in use
(see the sub-sections).</para>
</step>
</procedure>
<important>
<para>If you reboot a node that runs the DHCP agent, you must
run the <command>neutron-ovs-cleanup</command> command before the
<systemitem class="service">neutron-dhcp-agent</systemitem>
service starts.</para>
<para>On Red Hat-based systems, the <systemitem class="service">
neutron-ovs-cleanup</systemitem> service runs the
<command>neutron-ovs-cleanup</command> command automatically.
However, on Debian-based systems such as Ubuntu, you must
manually run this command or write your own system script
that runs on boot before the <systemitem class="service">
neutron-dhcp-agent</systemitem> service starts.</para>
</important>
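<para>For example, on an Ubuntu node you might run the cleanup
manually after a reboot and then restart the agent; the
exact service name depends on your packaging:</para>
<screen><prompt>#</prompt> <userinput>neutron-ovs-cleanup</userinput>
<prompt>#</prompt> <userinput>service neutron-dhcp-agent restart</userinput></screen>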
<section xml:id="dhcp_agent_ovs">
<title>DHCP agent setup: OVS plug-in</title>
<para>These DHCP agent options are required in the
<filename>/etc/neutron/dhcp_agent.ini</filename>
file for the OVS plug-in:</para>
<programlisting language="bash">[DEFAULT]
ovs_use_veth = True
enable_isolated_metadata = True
use_namespaces = True
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver</programlisting>
</section>
<section xml:id="dhcp_agent_nvp">
<title>DHCP agent setup: NVP plug-in</title>
<para>These DHCP agent options are required in the
<filename>/etc/neutron/dhcp_agent.ini</filename>
file for the NVP plug-in:</para>
<programlisting language="bash">[DEFAULT]
ovs_use_veth = True
enable_metadata_network = True
enable_isolated_metadata = True
use_namespaces = True
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver</programlisting>
</section>
<section xml:id="dhcp_agent_ryu">
<title>DHCP agent setup: Ryu plug-in</title>
<para>These DHCP agent options are required in the
<filename>/etc/neutron/dhcp_agent.ini</filename>
file for the Ryu plug-in:</para>
<programlisting language="bash">[DEFAULT]
ovs_use_veth = True
use_namespaces = True
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver</programlisting>
</section>
</section>
<section xml:id="install_neutron-l3">
<title>Configure L3 agent</title>
<para>The OpenStack Networking service has a widely used API
extension that lets administrators and tenants
create routers to interconnect L2 networks, and
floating IPs to make ports on private networks
publicly accessible.</para>
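<para>As an illustration, a tenant might create a router, attach a
subnet to it, set the router gateway to an external network,
and allocate a floating IP. The identifiers below are
placeholders:</para>
<screen><prompt>$</prompt> <userinput>neutron router-create router1</userinput>
<prompt>$</prompt> <userinput>neutron router-interface-add router1 <replaceable>SUBNET-ID</replaceable></userinput>
<prompt>$</prompt> <userinput>neutron router-gateway-set router1 <replaceable>EXT-NET-ID</replaceable></userinput>
<prompt>$</prompt> <userinput>neutron floatingip-create <replaceable>EXT-NET-ID</replaceable></userinput></screen>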
<para>Many plug-ins rely on the L3 service agent to
implement the L3 functionality. However, the
following plug-ins already have built-in L3
capabilities:</para>
<para>
<itemizedlist>
<listitem>
<para>Nicira NVP plug-in</para>
</listitem>
<listitem>
<para>Big Switch/Floodlight plug-in, which
supports both the open source <link
xlink:href="http://www.projectfloodlight.org/floodlight/"
>Floodlight</link> controller and
the proprietary Big Switch
controller.</para>
<note>
<para>Only the proprietary BigSwitch
controller implements L3
functionality. When using
Floodlight as your OpenFlow
controller, L3 functionality is not
available.</para>
</note>
</listitem>
<listitem>
<para>PLUMgrid plug-in</para>
</listitem>
</itemizedlist>
<warning>
<para>Do not configure or use
<filename>neutron-l3-agent</filename>
if you use one of these plug-ins.</para>
</warning>
<procedure>
<title>To install the L3 agent for all other
plug-ins</title>
<step>
<para>Install the
<systemitem>neutron-l3-agent</systemitem>
binary on the network node:</para>
<screen><prompt>#</prompt> <userinput>sudo apt-get install neutron-l3-agent</userinput></screen>
</step>
<step>
<para>To uplink the node that runs
<systemitem>neutron-l3-agent</systemitem>
to the external network, create a
bridge named "br-ex" and attach the
NIC for the external network to this
bridge.</para>
<para>For example, with Open vSwitch and
NIC eth1 connected to the external
network, run:</para>
<screen><prompt>#</prompt> <userinput>sudo ovs-vsctl add-br br-ex</userinput>
<prompt>#</prompt> <userinput>sudo ovs-vsctl add-port br-ex eth1</userinput></screen>
<para>Do not manually configure an IP
address on the NIC connected to the
external network for the node running
<systemitem>neutron-l3-agent</systemitem>.
Rather, you must have a range of IP
addresses from the external network
that can be used by OpenStack
Networking for routers that uplink to
the external network. This range must
be large enough to have an IP address
for each router in the deployment, as
well as each floating IP.</para>
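<para>One common way to provide such a range is to create an
external network whose subnet has DHCP disabled and an
allocation pool limited to the addresses reserved for
OpenStack Networking. The CIDR and addresses below are
placeholders:</para>
<screen><prompt>$</prompt> <userinput>neutron net-create ext-net --router:external=True</userinput>
<prompt>$</prompt> <userinput>neutron subnet-create ext-net 172.24.4.0/24 --disable-dhcp --gateway 172.24.4.1 --allocation-pool start=172.24.4.10,end=172.24.4.100</userinput></screen>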
</step>
<step>
<para>The
<systemitem>neutron-l3-agent</systemitem>
uses the Linux IP stack and iptables
to perform L3 forwarding and NAT. In
order to support multiple routers with
potentially overlapping IP addresses,
<systemitem>neutron-l3-agent</systemitem>
defaults to using Linux network
namespaces to provide isolated
forwarding contexts. As a result, the
IP addresses of routers will not be
visible simply by running <command>ip
addr list</command> or
<command>ifconfig</command> on the
node. Similarly, you will not be able
to directly <command>ping</command>
fixed IPs.</para>
<para>To do either of these things, you
must run the command within a
particular router's network namespace.
The namespace will have the name
"qrouter-&lt;UUID of the router&gt;.
These example commands run in the
router namespace with UUID
47af3868-0fa8-4447-85f6-1304de32153b:</para>
<screen><prompt>#</prompt> <userinput>ip netns exec qrouter-47af3868-0fa8-4447-85f6-1304de32153b ip addr list</userinput>
<prompt>#</prompt> <userinput>ip netns exec qrouter-47af3868-0fa8-4447-85f6-1304de32153b ping &lt;fixed-ip&gt;</userinput></screen>
</step>
</procedure>
</para>
<important>
<para>If you reboot a node that runs the L3 agent, you must run the
<command>neutron-ovs-cleanup</command> command before the <systemitem
class="service">neutron-l3-agent</systemitem> service starts.</para>
<para>On Red Hat-based systems, the <systemitem class="service"
>neutron-ovs-cleanup</systemitem> service runs the
<command>neutron-ovs-cleanup</command> command automatically. However,
on Debian-based systems such as Ubuntu, you must manually run this command
or write your own system script that runs on boot before the <systemitem
class="service">neutron-l3-agent</systemitem> service starts.</para>
</important>
</section>
<section xml:id="install_neutron-lbaas-agent">
<title>Configure LBaaS agent</title>
<para>Starting with the Havana release, the Neutron
Load-Balancer-as-a-Service (LBaaS) supports an
agent scheduling mechanism, so several
<systemitem>neutron-lbaas-agent</systemitem> services
can run on several nodes (one agent per node).</para>
<procedure>
<title>To install the LBaaS agent and configure
the node</title>
<step>
<para>Install the agent by running:</para>
<screen><prompt>#</prompt> <userinput>sudo apt-get install neutron-lbaas-agent</userinput></screen>
</step>
<step>
<para>If you are using: <itemizedlist>
<listitem>
<para>An OVS-based plug-in (OVS,
NVP, Ryu, NEC,
BigSwitch/Floodlight), you must
set:</para>
<programlisting language="ini">interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver</programlisting>
</listitem>
<listitem>
<para>A plug-in that uses
LinuxBridge, you must set:</para>
<programlisting language="ini">interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver</programlisting>
</listitem>
</itemizedlist></para>
</step>
<step>
<para>To use the reference implementation, you
must also set:</para>
<programlisting language="ini">device_driver = neutron.services.loadbalancer.drivers.haproxy.namespace_driver.HaproxyNSDriver</programlisting>
</step>
<step>
<para>Set this parameter in the
<filename>neutron.conf</filename> file
on the host that runs <systemitem
class="service"
>neutron-server</systemitem>:</para>
<programlisting language="ini">service_plugins = neutron.services.loadbalancer.plugin.LoadBalancerPlugin</programlisting>
</step>
</procedure>
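<para>Once the agent and service plug-in are running, a minimal
usage sketch of the reference LBaaS implementation looks
like the following; the subnet ID and member address are
placeholders:</para>
<screen><prompt>$</prompt> <userinput>neutron lb-pool-create --lb-method ROUND_ROBIN --name mypool --protocol HTTP --subnet-id <replaceable>SUBNET-ID</replaceable></userinput>
<prompt>$</prompt> <userinput>neutron lb-member-create --address <replaceable>MEMBER-IP</replaceable> --protocol-port 80 mypool</userinput>
<prompt>$</prompt> <userinput>neutron lb-vip-create --name myvip --protocol-port 80 --protocol HTTP --subnet-id <replaceable>SUBNET-ID</replaceable> mypool</userinput></screen>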
</section>
<section xml:id="install_neutron-fwaas-agent">
<title>Configure FWaaS agent</title>
<para>The Firewall-as-a-Service (FWaaS) agent is
co-located with the Neutron L3 agent and does not
require any additional packages apart from those
required for the Neutron L3 agent. You can enable
the FWaaS functionality by setting the
configuration, as follows.</para>
<procedure>
<title>To configure FWaaS service and
agent</title>
<step>
<para>Set this parameter in the
<filename>neutron.conf</filename> file
on the host that runs <systemitem
class="service"
>neutron-server</systemitem>:</para>
<programlisting language="ini">service_plugins = neutron.services.firewall.fwaas_plugin.FirewallPlugin</programlisting>
</step>
<step>
<para>To use the reference implementation, you
must also add a FWaaS driver configuration
to the <filename>neutron.conf</filename>
file on every node where the Neutron L3
agent is deployed:</para>
<programlisting language="ini">[fwaas]
driver = neutron.services.firewall.drivers.linux.iptables_fwaas.IptablesFwaasDriver
enabled = True</programlisting>
</step>
</procedure>
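<para>With the FWaaS service plug-in and driver enabled, a minimal
usage sketch looks like the following; the rule values and
names are placeholders:</para>
<screen><prompt>$</prompt> <userinput>neutron firewall-rule-create --protocol tcp --destination-port 80 --action allow</userinput>
<prompt>$</prompt> <userinput>neutron firewall-policy-create --firewall-rules "<replaceable>RULE-ID</replaceable>" mypolicy</userinput>
<prompt>$</prompt> <userinput>neutron firewall-create mypolicy --name myfirewall</userinput></screen>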
</section>
</section>
</section>
<section xml:id="section_networking-arch">
<title>Networking architecture</title>
<para>Before you deploy Networking, it helps to understand the
Networking components and how these components interact
with each other and other OpenStack services.</para>
<section xml:id="arch_overview">
<title>Overview</title>
<para>Networking is a standalone service, just like other
OpenStack services such as Compute, Image service,
Identity service, or the Dashboard. Like those
services, a deployment of Networking often involves
deploying several processes on a variety of
hosts.</para>
<para>The Networking server uses the <systemitem
class="service">neutron-server</systemitem> daemon
to expose the Networking API and to pass user requests
to the configured Networking plug-in for additional
processing. Typically, the plug-in requires access to
a database for persistent storage (also similar to
other OpenStack services).</para>
<para>If your deployment uses a controller host to run centralized
Compute components, you can deploy the Networking server on that
same host. However, Networking is entirely standalone and can be
deployed on its own host as well. Depending on your deployment,
Networking can also include the following agents.</para>
<para>
<table rules="all">
<caption>Networking agents</caption>
<col width="30%"/>
<col width="70%"/>
<thead>
<tr>
<th>Agent</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td><emphasis role="bold">plug-in
agent</emphasis>
(<literal>neutron-*-agent</literal>)</td>
<td>Runs on each hypervisor to perform
local vswitch configuration. The agent
that runs depends on the plug-in that
you use, and some plug-ins do not
require an agent.</td>
</tr>
<tr>
<td><emphasis role="bold">dhcp
agent</emphasis>
(<literal>neutron-dhcp-agent</literal>)</td>
<td>Provides DHCP services to tenant
networks. Some plug-ins use this
agent.</td>
</tr>
<tr>
<td><emphasis role="bold">l3
agent</emphasis>
(<literal>neutron-l3-agent</literal>)</td>
<td>Provides L3/NAT forwarding to provide
external network access for VMs on
tenant networks. Some plug-ins use
this agent.</td>
</tr>
<tr>
<td><emphasis role="bold">l3 metering agent</emphasis>
(<literal>neutron-metering-agent</literal>)</td>
<td>Provides L3 traffic measurements for tenant networks.</td>
</tr>
</tbody>
</table>
</para>
<para>These agents interact with the main neutron process
through RPC (for example, rabbitmq or qpid) or through
the standard Networking API. Further:</para>
<itemizedlist>
<listitem>
<para>Networking relies on the Identity service
(Keystone) for the authentication and
authorization of all API requests.</para>
</listitem>
<listitem>
<para>Compute (Nova) interacts with Networking
through calls to its standard API.  As part of
creating a VM, the <systemitem class="service"
>nova-compute</systemitem> service
communicates with the Networking API to plug
each virtual NIC on the VM into a particular
network. </para>
</listitem>
<listitem>
<para>The Dashboard (Horizon) integrates with the
Networking API, enabling administrators and
tenant users to create and manage network
services through a web-based GUI.</para>
</listitem>
</itemizedlist>
</section>
<section xml:id="networking-services">
<title>Place services on physical hosts</title>
<para>Like other OpenStack services, Networking enables
cloud administrators to run one or more services on
one or more physical devices. At one extreme, the
cloud administrator can run all service daemons on a
single physical host for evaluation purposes.
Alternatively the cloud administrator can run each
service on its own physical host and, in some cases,
can replicate services across multiple hosts for
redundancy. For more information, see the <citetitle
xmlns:svg="http://www.w3.org/2000/svg"
xmlns:html="http://www.w3.org/1999/xhtml"
>OpenStack Configuration
Reference</citetitle>.</para>
<para>A standard architecture includes a cloud controller
host, a network gateway host, and a set of hypervisors
that run virtual machines. The cloud controller and
network gateway can be on the same host. However, if
you expect VMs to send significant traffic to or from
the Internet, a dedicated network gateway host helps
avoid CPU contention between the <systemitem
class="service">neutron-l3-agent</systemitem> and
other OpenStack services that forward packets.</para>
</section>
<section xml:id="network-connectivity">
<title>Network connectivity for physical hosts</title>
<mediaobject>
<imageobject>
<imagedata scale="50"
fileref="../common/figures/Neutron-PhysNet-Diagram.png"
/>
</imageobject>
</mediaobject>
<para>A standard Networking set up has one or more of the
following distinct physical data center
networks.</para>
<para>
<table rules="all">
<caption>General distinct physical data center
networks</caption>
<col width="20%"/>
<col width="80%"/>
<thead>
<tr>
<th>Network</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td><emphasis role="bold">Management
network</emphasis></td>
<td>Provides internal communication
between OpenStack Components. IP
addresses on this network should be
reachable only within the data
center.</td>
</tr>
<tr>
<td><emphasis role="bold">Data
network</emphasis></td>
<td>Provides VM data communication within
the cloud deployment. The IP
addressing requirements of this
network depend on the Networking
plug-in that is used.</td>
</tr>
<tr>
<td><emphasis role="bold">External
network</emphasis></td>
<td>Provides VMs with Internet access in
some deployment scenarios. Anyone on
the Internet can reach IP addresses on
this network.</td>
</tr>
<tr>
<td><emphasis role="bold">API
network</emphasis></td>
<td>Exposes all OpenStack APIs, including
the Networking API, to tenants. IP
addresses on this network should be
reachable by anyone on the
Internet. The API network might be the
same as the external network, because
it is possible to create an
external-network subnet that is
allocated IP ranges that use less than
the full range of IP addresses in an
IP block.</td>
</tr>
</tbody>
</table>
</para>
</section>
</section>
<xi:include href="section_networking-config-identity.xml"/>
<xi:include href="section_networking-scenarios.xml"/>
<xi:include href="section_networking-adv-config.xml"/>
<xi:include href="section_networking-multi-dhcp-agents.xml"/>
<section xml:id="section_networking-use">
<title>Use Networking</title>
<para>You can start and stop OpenStack Networking services
using the <systemitem>service</systemitem> command. For
example:</para>
<screen><prompt>#</prompt> <userinput>sudo service neutron-server stop</userinput>
<prompt>#</prompt> <userinput>sudo service neutron-server status</userinput>
<prompt>#</prompt> <userinput>sudo service neutron-server start</userinput>
<prompt>#</prompt> <userinput>sudo service neutron-server restart</userinput></screen>
<para>Log files are in the
<systemitem>/var/log/neutron</systemitem>
directory.</para>
<para>Configuration files are in the
<systemitem>/etc/neutron</systemitem>
directory.</para>
<para>You can use Networking in the following ways:</para>
<itemizedlist>
<listitem>
<para>Expose the Networking API to cloud tenants,
which enables them to build rich network
topologies.</para>
</listitem>
<listitem>
<para>Have the cloud administrator, or an automated
administrative tool, create network connectivity
on behalf of tenants.</para>
</listitem>
</itemizedlist>
<para>A tenant or cloud administrator can both perform the
following procedures.</para>
<section xml:id="api_features">
<title>Core Networking API features</title>
<para>After you install and run Networking, tenants and
administrators can perform create-read-update-delete
(CRUD) API networking operations by using the
Networking API directly or the neutron command-line
interface (CLI). The neutron CLI is a wrapper around
the Networking API. Every Networking API call has a
corresponding neutron command.</para>
<para>The CLI includes a number of options. For details,
refer to the <link
xlink:href="http://docs.openstack.org/user-guide/content/"
><citetitle>OpenStack End User
Guide</citetitle></link>.</para>
<section xml:id="api_abstractions">
<title>API abstractions</title>
<para>The Networking v2.0 API provides control over
both L2 network topologies and the IP addresses
used on those networks (IP Address Management or
IPAM). There is also an extension to cover basic
L3 forwarding and NAT, which provides capabilities
similar to <command>nova-network</command>.</para>
<para><table rules="all">
<caption>API abstractions</caption>
<col width="10%"/>
<col width="90%"/>
<thead>
<tr>
<th>Abstraction</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td><emphasis role="bold"
>Network</emphasis></td>
<td>An isolated L2 network segment
(similar to a VLAN) that forms the
basis for describing the L2 network
topology available in a Networking
deployment.</td>
</tr>
<tr>
<td><emphasis role="bold"
>Subnet</emphasis></td>
<td>Associates a block of IP addresses
and other network configuration,
such as default gateways or
DNS servers, with a Networking
network. Each subnet represents an
IPv4 or IPv6 address block and, if
needed, each Networking network can
have multiple subnets.</td>
</tr>
<tr>
<td><emphasis role="bold"
>Port</emphasis></td>
<td>Represents an attachment port to an
L2 Networking network. When a port
is created on the network, by
default it is allocated an
available fixed IP address out of
one of the designated subnets for
each IP version (if one exists).
When the port is destroyed, its
allocated addresses return to the
pool of available IPs on the
subnet. Users of the Networking API
can either choose a specific IP
address from the block, or let
Networking choose the first
available IP address.</td>
</tr>
</tbody>
</table></para>
<?hard-pagebreak?>
<para>This table summarizes the attributes available
for each networking abstraction. For information
about API abstraction and operations, see the
<link
xlink:href="http://docs.openstack.org/api/openstack-network/2.0/content/"
>Networking API v2.0 Reference</link>.</para>
<table rules="all">
<caption>Network attributes</caption>
<col width="20%"/>
<col width="15%"/>
<col width="17%"/>
<col width="47%"/>
<thead>
<tr>
<th>Attribute</th>
<th>Type</th>
<th>Default value</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td><option>admin_state_up</option></td>
<td>bool</td>
<td>True</td>
<td>Administrative state of the network.
If specified as False (down), this
network does not forward packets.
</td>
</tr>
<tr>
<td><option>id</option></td>
<td>uuid-str</td>
<td>Generated</td>
<td>UUID for this network.</td>
</tr>
<tr>
<td><option>name</option></td>
<td>string</td>
<td>None</td>
<td>Human-readable name for this network;
is not required to be unique.</td>
</tr>
<tr>
<td><option>shared</option></td>
<td>bool</td>
<td>False</td>
<td>Specifies whether this network
resource can be accessed by any
tenant. The default policy setting
restricts usage of this attribute to
administrative users only.</td>
</tr>
<tr>
<td><option>status</option></td>
<td>string</td>
<td>N/A</td>
<td>Indicates whether this network is
currently operational.</td>
</tr>
<tr>
<td><option>subnets</option></td>
<td>list(uuid-str)</td>
<td>Empty list</td>
<td>List of subnets associated with this
network.</td>
</tr>
<tr>
<td><option>tenant_id</option></td>
<td>uuid-str</td>
<td>N/A</td>
<td>Tenant owner of the network. Only
administrative users can set the
tenant identifier; this cannot be
changed using authorization policies.
</td>
</tr>
</tbody>
</table>
<table rules="all">
<caption>Subnet attributes</caption>
<col width="20%"/>
<col width="15%"/>
<col width="17%"/>
<col width="47%"/>
<thead>
<tr>
<th>Attribute</th>
<th>Type</th>
<th>Default Value</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td><option>allocation_pools</option></td>
<td>list(dict)</td>
<td>Every address in
<option>cidr</option>, excluding
<option>gateway_ip</option> (if
configured).</td>
<td><para>List of cidr sub-ranges that are
available for dynamic allocation to
ports. Syntax:</para>
<programlisting language="json">[ { "start":"10.0.0.2",
"end": "10.0.0.254"} ]</programlisting>
</td>
</tr>
<tr>
<td><option>cidr</option></td>
<td>string</td>
<td>N/A</td>
<td>IP range for this subnet, based on the
IP version.</td>
</tr>
<tr>
<td><option>dns_nameservers</option></td>
<td>list(string)</td>
<td>Empty list</td>
<td>List of DNS name servers used by hosts
in this subnet.</td>
</tr>
<tr>
<td><option>enable_dhcp</option></td>
<td>bool</td>
<td>True</td>
<td>Specifies whether DHCP is enabled for
this subnet.</td>
</tr>
<tr>
<td><option>gateway_ip</option></td>
<td>string</td>
<td>First address in <option>cidr</option>
</td>
<td>Default gateway used by devices in
this subnet.</td>
</tr>
<tr>
<td><option>host_routes</option></td>
<td>list(dict)</td>
<td>Empty list</td>
<td>Routes that should be used by devices
with IPs from this subnet (not
including local subnet route).</td>
</tr>
<tr>
<td><option>id</option></td>
<td>uuid-string</td>
<td>Generated</td>
<td>UUID representing this subnet.</td>
</tr>
<tr>
<td><option>ip_version</option></td>
<td>int</td>
<td>4</td>
<td>IP version.</td>
</tr>
<tr>
<td><option>name</option></td>
<td>string</td>
<td>None</td>
<td>Human-readable name for this subnet
(might not be unique).</td>
</tr>
<tr>
<td><option>network_id</option></td>
<td>uuid-string</td>
<td>N/A</td>
<td>Network with which this subnet is
associated.</td>
</tr>
<tr>
<td><option>tenant_id</option></td>
<td>uuid-string</td>
<td>N/A</td>
<td>Owner of network. Only administrative
users can set the tenant identifier;
this cannot be changed using
authorization policies.</td>
</tr>
</tbody>
</table>
<?hard-pagebreak?>
<table rules="all">
<caption>Port attributes</caption>
<col width="20%"/>
<col width="15%"/>
<col width="17%"/>
<col width="47%"/>
<thead>
<tr>
<th>Attribute</th>
<th>Type</th>
                        <th>Default value</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td><option>admin_state_up</option></td>
<td>bool</td>
                        <td>True</td>
<td>Administrative state of this port. If
specified as False (down), this port
does not forward packets.</td>
</tr>
<tr>
<td><option>device_id</option></td>
<td>string</td>
<td>None</td>
<td>Identifies the device using this port
(for example, a virtual server's ID).
</td>
</tr>
<tr>
<td><option>device_owner</option></td>
<td>string</td>
<td>None</td>
<td>Identifies the entity using this port
                            (for example, a DHCP agent).</td>
</tr>
<tr>
<td><option>fixed_ips</option></td>
<td>list(dict)</td>
<td>Automatically allocated from pool</td>
<td>Specifies IP addresses for this port;
associates the port with the subnets
containing the listed IP addresses.
</td>
</tr>
<tr>
<td><option>id</option></td>
<td>uuid-string</td>
<td>Generated</td>
<td>UUID for this port.</td>
</tr>
<tr>
<td><option>mac_address</option></td>
<td>string</td>
<td>Generated</td>
                        <td>MAC address to use on this port.</td>
</tr>
<tr>
<td><option>name</option></td>
<td>string</td>
<td>None</td>
<td>Human-readable name for this port
(might not be unique).</td>
</tr>
<tr>
<td><option>network_id</option></td>
<td>uuid-string</td>
<td>N/A</td>
<td>Network with which this port is
associated.</td>
</tr>
<tr>
<td><option>status</option></td>
<td>string</td>
<td>N/A</td>
                        <td>Indicates whether this port is
                            currently operational.</td>
</tr>
<tr>
<td><option>tenant_id</option></td>
<td>uuid-string</td>
<td>N/A</td>
<td>Owner of the network. Only
administrative users can set the
tenant identifier; this cannot be
changed using authorization policies.
</td>
</tr>
</tbody>
</table>
</section>
<section xml:id="basic_operations">
<title>Basic Networking operations</title>
<para>To learn about advanced capabilities that are
available through the neutron command-line
interface (CLI), read the networking section in
the <link
xlink:href="http://docs.openstack.org/user-guide/content/index.html"
> OpenStack End User Guide</link>.</para>
<para>This table shows example neutron commands that
enable you to complete basic Networking
operations:</para>
<table rules="all">
<caption>Basic Networking operations</caption>
<col width="40%"/>
<col width="60%"/>
<thead>
<tr>
<th>Operation</th>
<th>Command</th>
</tr>
</thead>
<tbody>
<tr>
<td>Creates a network.</td>
<td><screen><prompt>$</prompt> <userinput>neutron net-create net1</userinput></screen></td>
</tr>
<tr>
<td>Creates a subnet that is associated
with net1.</td>
<td><screen><prompt>$</prompt> <userinput>neutron subnet-create net1 10.0.0.0/24</userinput></screen></td>
</tr>
<tr>
<td>Lists ports for a specified
tenant.</td>
<td><screen><prompt>$</prompt> <userinput>neutron port-list</userinput></screen></td>
</tr>
<tr>
<td>Lists ports for a specified tenant and
displays the <option>id</option>,
<option>fixed_ips</option>, and
<option>device_owner</option>
columns.</td>
<td><screen><prompt>$</prompt> <userinput>neutron port-list -c id -c fixed_ips -c device_owner</userinput></screen>
</td>
</tr>
<tr>
<td>Shows information for a specified
port.</td>
<td><screen><prompt>$</prompt> <userinput>neutron port-show <replaceable>port-id</replaceable></userinput></screen></td>
</tr>
</tbody>
</table>
<note>
<para>The <option>device_owner</option> field
describes who owns the port. A port whose
<option>device_owner</option> begins
with:</para>
<itemizedlist>
<listitem>
<para><literal>network</literal> is
created by Networking.</para>
</listitem>
<listitem>
<para><literal>compute</literal> is
created by Compute.</para>
</listitem>
</itemizedlist>
</note>
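            <para>For example, to see which service created each
                port, you can list just the port ID and the
                <option>device_owner</option> column. This is a
                minimal illustration; the output depends on your
                deployment:</para>
            <screen><prompt>$</prompt> <userinput>neutron port-list -c id -c device_owner</userinput></screen>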
</section>
<section xml:id="admin_api_config">
<title>Administrative operations</title>
<para>The cloud administrator can run any
<command>neutron</command> command on behalf
of tenants by specifying an Identity
<option>tenant_id</option> in the command, as
follows:</para>
<screen><prompt>#</prompt> <userinput>neutron net-create --tenant-id=<replaceable>tenant-id</replaceable> <replaceable>network-name</replaceable></userinput></screen>
<para>For example:</para>
<screen><prompt>#</prompt> <userinput>neutron net-create --tenant-id=5e4bbe24b67a4410bc4d9fae29ec394e net1</userinput></screen>
<note>
<para>To view all tenant IDs in Identity, run the
following command as an Identity Service admin
user:</para>
<screen><prompt>#</prompt> <userinput>keystone tenant-list</userinput></screen>
</note>
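            <para>You can pass the <option>--tenant-id</option>
                option to other create commands in the same way. As
                a sketch, this command creates a subnet on behalf of
                the same tenant, assuming the
                <literal>net1</literal> network created above:</para>
        <screen><prompt>#</prompt> <userinput>neutron subnet-create --tenant-id=5e4bbe24b67a4410bc4d9fae29ec394e net1 10.0.0.0/24</userinput></screen>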
</section>
<?hard-pagebreak?>
<section xml:id="advanced_networking">
<title>Advanced Networking operations</title>
<para>This table shows example neutron commands that
enable you to complete advanced Networking
operations:</para>
<table rules="all">
<caption>Advanced Networking operations</caption>
<col width="40%"/>
<col width="60%"/>
<thead>
<tr>
<th>Operation</th>
<th>Command</th>
</tr>
</thead>
<tbody>
<tr>
<td>Creates a network that all tenants can
use.</td>
<td><screen><prompt>#</prompt> <userinput>neutron net-create --shared public-net</userinput></screen></td>
</tr>
<tr>
<td>Creates a subnet with a specified
gateway IP address.</td>
<td><screen><prompt>#</prompt> <userinput>neutron subnet-create --gateway 10.0.0.254 net1 10.0.0.0/24</userinput></screen></td>
</tr>
<tr>
<td>Creates a subnet that has no gateway
IP address.</td>
<td><screen><prompt>#</prompt> <userinput>neutron subnet-create --no-gateway net1 10.0.0.0/24</userinput></screen></td>
</tr>
<tr>
<td>Creates a subnet with DHCP
disabled.</td>
<td><screen><prompt>#</prompt> <userinput>neutron subnet-create net1 10.0.0.0/24 --enable_dhcp False</userinput></screen></td>
</tr>
<tr>
<td>Creates a subnet with a specified set
of host routes.</td>
<td><screen><prompt>#</prompt> <userinput>neutron subnet-create test-net1 40.0.0.0/24 --host_routes type=dict list=true destination=40.0.1.0/24,nexthop=40.0.0.2</userinput></screen></td>
</tr>
<tr>
<td>Creates a subnet with a specified set
                            of DNS name servers.</td>
<td><screen><prompt>#</prompt> <userinput>neutron subnet-create test-net1 40.0.0.0/24 --dns_nameservers list=true 8.8.8.7 8.8.8.8</userinput></screen></td>
</tr>
<tr>
<td>Displays all ports and IPs allocated
on a network.</td>
<td><screen><prompt>#</prompt> <userinput>neutron port-list --network_id <replaceable>net-id</replaceable></userinput></screen></td>
</tr>
</tbody>
</table>
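            <para>To verify the attributes of a network or subnet
                created with these commands, show the resource. For
                example, this sketch confirms the
                <option>shared</option> attribute on the
                <literal>public-net</literal> network created
                above:</para>
            <screen><prompt>#</prompt> <userinput>neutron net-show public-net</userinput></screen>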
</section>
</section>
<?hard-pagebreak?>
<section xml:id="using_nova_with_neutron">
<title>Use Compute with Networking</title>
<section xml:id="basic_workflow_with_nova">
<title>Basic Compute and Networking operations</title>
<para>This table shows example neutron and nova
commands that enable you to complete basic Compute
and Networking operations:</para>
<table rules="all">
<caption>Basic Compute and Networking
operations</caption>
<col width="40%"/>
<col width="60%"/>
<thead>
<tr>
<th>Action</th>
<th>Command</th>
</tr>
</thead>
<tbody>
<tr>
<td>Checks available networks.</td>
<td><screen><prompt>#</prompt> <userinput>neutron net-list</userinput></screen></td>
</tr>
<tr>
<td>Boots a VM with a single NIC on a
selected Networking network.</td>
<td><screen><prompt>#</prompt> <userinput>nova boot --image <replaceable>img</replaceable> --flavor <replaceable>flavor</replaceable> --nic net-id=<replaceable>net-id</replaceable> <replaceable>vm-name</replaceable></userinput></screen>
</td>
</tr>
<tr>
<td><para>Searches for ports with a
<option>device_id</option> that
matches the Compute instance UUID.
See <xref
linkend="network_compute_note"
/>.</para>
</td>
<td><screen><prompt>#</prompt> <userinput>neutron port-list --device_id=<replaceable>vm-id</replaceable></userinput></screen></td>
</tr>
<tr>
<td>Searches for ports, but shows only the
<option>mac_address</option> for
the port.</td>
<td><screen><prompt>#</prompt> <userinput>neutron port-list --field mac_address --device_id=<replaceable>vm-id</replaceable></userinput></screen></td>
</tr>
<tr>
                        <td>Temporarily prevents a port from
                            sending traffic.</td>
<td><screen><prompt>#</prompt> <userinput>neutron port-update <replaceable>port-id</replaceable> --admin_state_up=False</userinput></screen></td>
</tr>
</tbody>
</table>
<note>
<para>The <option>device_id</option> can also be a
logical router ID.</para>
</note>
<note xml:id="network_compute_note">
<title>Create and delete VMs</title>
<itemizedlist>
<listitem>
<para>When you boot a Compute VM, a port
on the network that corresponds to the
VM NIC is automatically created and
associated with the default security
group. You can configure <link
linkend="enabling_ping_and_ssh"
>security group rules</link> to
enable users to access the VM.</para>
</listitem>
<listitem>
<para>When you delete a Compute VM, the
underlying Networking port is
automatically deleted.</para>
</listitem>
</itemizedlist>
</note>
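            <para>Putting these commands together, a typical
                workflow is to boot a VM on a network and then
                locate its port by device ID. This is a sketch only;
                the instance UUID can be obtained with
                <command>nova list</command>:</para>
            <screen><prompt>#</prompt> <userinput>nova boot --image <replaceable>img</replaceable> --flavor <replaceable>flavor</replaceable> --nic net-id=<replaceable>net-id</replaceable> vm1</userinput>
<prompt>#</prompt> <userinput>nova list</userinput>
<prompt>#</prompt> <userinput>neutron port-list --device_id=<replaceable>vm-id</replaceable></userinput></screen>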
</section>
<section xml:id="advanced_vm_creation">
<title>Advanced VM creation operations</title>
<para>This table shows example nova and neutron
commands that enable you to complete advanced VM
creation operations:</para>
<table rules="all">
<caption>Advanced VM creation operations</caption>
<col width="40%"/>
<col width="60%"/>
<thead>
<tr>
<th>Operation</th>
<th>Command</th>
</tr>
</thead>
<tbody>
<tr>
<td>Boots a VM with multiple NICs.</td>
<td><screen><prompt>#</prompt> <userinput>nova boot --image <replaceable>img</replaceable> --flavor <replaceable>flavor</replaceable> --nic net-id=<replaceable>net1-id</replaceable> --nic net-id=<replaceable>net2-id</replaceable> <replaceable>vm-name</replaceable></userinput></screen></td>
</tr>
<tr>
<td>Boots a VM with a specific IP address.
                            First, create a Networking port with
a specific IP address. Then, boot a VM
specifying a <option>port-id</option>
rather than a
<option>net-id</option>.</td>
<td><screen><prompt>#</prompt> <userinput>neutron port-create --fixed-ip subnet_id=<replaceable>subnet-id</replaceable>,ip_address=<replaceable>IP</replaceable> <replaceable>net-id</replaceable></userinput>
<prompt>#</prompt> <userinput>nova boot --image <replaceable>img</replaceable> --flavor <replaceable>flavor</replaceable> --nic port-id=<replaceable>port-id</replaceable> <replaceable>vm-name</replaceable></userinput></screen>
</td>
</tr>
<tr>
<td>Boots a VM that connects to all
networks that are accessible to the
tenant who submits the request
(without the <option>--nic</option>
option).</td>
<td><screen><prompt>#</prompt> <userinput>nova boot --image <replaceable>img</replaceable> --flavor <replaceable>flavor</replaceable> <replaceable>vm-name</replaceable></userinput></screen>
</td>
</tr>
</tbody>
</table>
<note>
<para>Networking does not currently support the
<command>v4-fixed-ip</command> parameter
of the <command>--nic</command> option for the
<command>nova</command> command.</para>
</note>
</section>
<section xml:id="enabling_ping_and_ssh">
<title>Enable ping and SSH on VMs (security
groups)</title>
<para>You must configure security group rules
depending on the type of plug-in you are using. If
you are using a plug-in that:</para>
<itemizedlist>
<listitem>
<para>Implements Networking security groups,
you can configure security group rules
directly by using <command>neutron
security-group-rule-create</command>.
This example enables
<command>ping</command> and
<command>ssh</command> access to your
VMs.</para>
<screen><prompt>#</prompt> <userinput>neutron security-group-rule-create --protocol icmp \
--direction ingress default</userinput></screen>
<screen><prompt>#</prompt> <userinput>neutron security-group-rule-create --protocol tcp --port-range-min 22 \
--port-range-max 22 --direction ingress default</userinput></screen>
</listitem>
<listitem>
<para>Does not implement Networking security
groups, you can configure security group
rules by using the <command>nova
secgroup-add-rule</command> or
<command>euca-authorize</command>
command. These <command>nova</command>
commands enable <command>ping</command>
and <command>ssh</command> access to your
VMs.</para>
<screen><prompt>#</prompt> <userinput>nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0</userinput>
<prompt>#</prompt> <userinput>nova secgroup-add-rule default tcp 22 22 0.0.0.0/0</userinput></screen>
</listitem>
</itemizedlist>
<note>
<para>If your plug-in implements Networking
security groups, you can also leverage Compute
security groups by setting
<code>security_group_api = neutron</code>
in the <filename>nova.conf</filename> file.
After you set this option, all Compute
security group commands are proxied to
Networking.</para>
</note>
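            <para>As an illustration only, the setting in
                <filename>nova.conf</filename> might look like this,
                assuming the option belongs in the
                <literal>[DEFAULT]</literal> section of your
                release:</para>
            <programlisting language="ini">[DEFAULT]
security_group_api = neutron</programlisting>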
</section>
</section>
</section>
<xi:include href="section_networking_adv_features.xml"/>
<xi:include href="section_networking_adv_operational_features.xml"/>
<section xml:id="section_networking_auth">
<title>Authentication and authorization</title>
<para>Networking uses the Identity Service as the default
authentication service. When the Identity Service is
enabled, users who submit requests to the Networking
            service must provide an authentication token in the
            <literal>X-Auth-Token</literal> request header. Users
obtain this token by authenticating with the Identity
Service endpoint. For more information about
authentication with the Identity Service, see <link
xlink:href="http://docs.openstack.org/api/openstack-identity-service/2.0/content/"
><citetitle>OpenStack Identity Service API v2.0
Reference</citetitle></link>. When the Identity
Service is enabled, it is not mandatory to specify the
tenant ID for resources in create requests because the
tenant ID is derived from the authentication token.</para>
<note>
<para>The default authorization settings only allow
administrative users to create resources on behalf of
a different tenant. Networking uses information
received from Identity to authorize user requests.
                Networking handles two kinds of authorization
policies:</para>
</note>
<itemizedlist>
<listitem>
<para><emphasis role="bold">Operation-based</emphasis>
policies specify access criteria for specific
operations, possibly with fine-grained control
over specific attributes;</para>
</listitem>
<listitem>
<para><emphasis role="bold">Resource-based</emphasis>
                    policies specify whether access to a specific
                    resource is granted according to the
                    permissions configured for the resource (currently
available only for the network resource). The
actual authorization policies enforced in
Networking might vary from deployment to
deployment.</para>
</listitem>
</itemizedlist>
<para>The policy engine reads entries from the
<filename>policy.json</filename> file. The actual
location of this file might vary from distribution to
distribution. Entries can be updated while the system is
running, and no service restart is required. Every time
the policy file is updated, the policies are automatically
reloaded. Currently the only way of updating such policies
is to edit the policy file. In this section, the terms
<emphasis role="italic">policy</emphasis> and
<emphasis role="italic">rule</emphasis> refer to
objects that are specified in the same way in the policy
file. There are no syntax differences between a rule and a
            policy. A policy is something that is matched directly
            by the Networking policy engine. A rule is an element in
            a policy that is evaluated. For instance, in
<code>create_subnet:
[["admin_or_network_owner"]]</code>, <emphasis
role="italic">create_subnet</emphasis> is a policy,
and <emphasis role="italic"
>admin_or_network_owner</emphasis> is a rule.</para>
<para>Policies are triggered by the Networking policy engine
            whenever one of them matches a Networking API operation
or a specific attribute being used in a given operation.
For instance the <code>create_subnet</code> policy is
triggered every time a <code>POST /v2.0/subnets</code>
request is sent to the Networking server; on the other
hand <code>create_network:shared</code> is triggered every
time the <emphasis role="italic">shared</emphasis>
attribute is explicitly specified (and set to a value
different from its default) in a <code>POST
            /v2.0/networks</code> request. Policies can also be
            related to specific API extensions; for instance,
            <code>extension:provider_network:set</code> is
            triggered if the attributes defined by the Provider
            Network extension are specified in an API request.</para>
        <para>An authorization policy can be composed of one or more
            rules. If multiple rules are specified, the policy
            evaluation succeeds if any of the rules evaluates
            successfully; if an API operation matches multiple
            policies, all of those policies must evaluate
            successfully. Authorization rules are also recursive:
            once a rule is matched, it can resolve to another rule,
            until a terminal rule is reached.</para>
<para>The Networking policy engine currently defines the
following kinds of terminal rules:</para>
<itemizedlist>
<listitem>
<para><emphasis role="bold">Role-based
rules</emphasis> evaluate successfully if the
user who submits the request has the specified
role. For instance <code>"role:admin"</code> is
successful if the user who submits the request is
an administrator.</para>
</listitem>
<listitem>
<para><emphasis role="bold">Field-based rules
</emphasis>evaluate successfully if a field of the
resource specified in the current request matches
a specific value. For instance
<code>"field:networks:shared=True"</code> is
successful if the <literal>shared</literal>
attribute of the <literal>network</literal>
resource is set to true.</para>
</listitem>
<listitem>
<para><emphasis role="bold">Generic rules</emphasis>
compare an attribute in the resource with an
attribute extracted from the user's security
                    credentials, and evaluate successfully if the
                    comparison is successful. For instance
<code>"tenant_id:%(tenant_id)s"</code> is
successful if the tenant identifier in the
resource is equal to the tenant identifier of the
user submitting the request.</para>
</listitem>
</itemizedlist>
<para>This extract is from the default
<filename>policy.json</filename> file:</para>
<programlisting language="bash">{
[1] "admin_or_owner": [["role:admin"], ["tenant_id:%(tenant_id)s"]],
"admin_or_network_owner": [["role:admin"], ["tenant_id:%(network_tenant_id)s"]],
"admin_only": [["role:admin"]], "regular_user": [],
"shared": [["field:networks:shared=True"]],
[2] "default": [["rule:admin_or_owner"]],
"create_subnet": [["rule:admin_or_network_owner"]],
"get_subnet": [["rule:admin_or_owner"], ["rule:shared"]],
"update_subnet": [["rule:admin_or_network_owner"]],
"delete_subnet": [["rule:admin_or_network_owner"]],
"create_network": [],
[3] "get_network": [["rule:admin_or_owner"], ["rule:shared"]],
[4] "create_network:shared": [["rule:admin_only"]],
"update_network": [["rule:admin_or_owner"]],
"delete_network": [["rule:admin_or_owner"]],
"create_port": [],
[5] "create_port:mac_address": [["rule:admin_or_network_owner"]],
"create_port:fixed_ips": [["rule:admin_or_network_owner"]],
"get_port": [["rule:admin_or_owner"]],
"update_port": [["rule:admin_or_owner"]],
"delete_port": [["rule:admin_or_owner"]]
}</programlisting>
        <para>[1] is a rule that evaluates successfully if the
            current user is an administrator or the owner of the
            resource specified in the request (that is, the tenant
            identifiers match).</para>
<para>[2] is the default policy which is always evaluated if
an API operation does not match any of the policies in
<filename>policy.json</filename>.</para>
        <para>[3] This policy evaluates successfully if either
            <emphasis role="italic">admin_or_owner</emphasis> or
            <emphasis role="italic">shared</emphasis> evaluates
            successfully.</para>
<para>[4] This policy restricts the ability to manipulate the
<emphasis role="italic">shared</emphasis> attribute
for a network to administrators only.</para>
        <para>[5] This policy restricts the ability to manipulate the
            <emphasis role="italic">mac_address</emphasis>
            attribute for a port to administrators and the owner
            of the network to which the port is attached.</para>
        <para>In some cases, some operations are restricted to
            administrators only. This example shows you how to modify
            a policy file to permit tenants to define networks and
            see their resources, while permitting only administrative
            users to perform all other operations:</para>
<programlisting language="bash">{
"admin_or_owner": [["role:admin"], ["tenant_id:%(tenant_id)s"]],
"admin_only": [["role:admin"]], "regular_user": [],
"default": [["rule:admin_only"]],
"create_subnet": [["rule:admin_only"]],
"get_subnet": [["rule:admin_or_owner"]],
"update_subnet": [["rule:admin_only"]],
"delete_subnet": [["rule:admin_only"]],
"create_network": [],
"get_network": [["rule:admin_or_owner"]],
"create_network:shared": [["rule:admin_only"]],
"update_network": [["rule:admin_or_owner"]],
"delete_network": [["rule:admin_or_owner"]],
"create_port": [["rule:admin_only"]],
"get_port": [["rule:admin_or_owner"]],
"update_port": [["rule:admin_only"]],
"delete_port": [["rule:admin_only"]]
}</programlisting>
</section>
<section xml:id="section_high_avail">
<title>High availability</title>
        <para>Configuring high availability in a Networking deployment
            helps mitigate the impact of individual node failures. In general, you
can run <systemitem class="service"
>neutron-server</systemitem> and <systemitem
class="service">neutron-dhcp-agent</systemitem> in an
active-active fashion. You can run the <systemitem
class="service">neutron-l3-agent</systemitem> service
as active/passive, which avoids IP conflicts with respect
to gateway IP addresses.</para>
<section xml:id="ha_pacemaker">
<title>Networking high availability with Pacemaker</title>
            <para>You can run some Networking services in a Pacemaker
                cluster (active/passive, or active/active for the
                Networking server only).</para>
            <para>Download the latest resource agents:</para>
<itemizedlist>
<listitem>
<para>neutron-server: <link
xlink:href="https://github.com/madkiss/openstack-resource-agents/blob/master/ocf/neutron-server"
>https://github.com/madkiss/openstack-resource-agents</link></para>
</listitem>
<listitem>
                    <para>neutron-dhcp-agent: <link
xlink:href="https://github.com/madkiss/openstack-resource-agents/blob/master/ocf/neutron-agent-dhcp"
>https://github.com/madkiss/openstack-resource-agents</link></para>
</listitem>
<listitem>
                    <para>neutron-l3-agent: <link
xlink:href="https://github.com/madkiss/openstack-resource-agents/blob/master/ocf/neutron-agent-l3"
>https://github.com/madkiss/openstack-resource-agents</link></para>
</listitem>
</itemizedlist>
<note xmlns:db="http://docbook.org/ns/docbook">
<para>For information about how to build a cluster,
see <link
xlink:href="http://www.clusterlabs.org/wiki/Documentation"
>Pacemaker documentation</link>.</para>
</note>
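            <para>As a sketch only, assuming the agents are
                installed under
                <filename>/usr/lib/ocf/resource.d/openstack/</filename>
                and that you manage the cluster with the
                <command>crm</command> shell, a basic primitive for
                the Networking server might look like this; resource
                parameters depend on your deployment:</para>
            <screen><prompt>#</prompt> <userinput>crm configure primitive p_neutron-server ocf:openstack:neutron-server \
  op monitor interval="30s" timeout="30s"</userinput></screen>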
</section>
</section>
<section xml:id="section_pagination_and_sorting_support">
<title>Plug-in pagination and sorting support</title>
<table rules="all">
<caption>Plug-ins that support native pagination and
sorting</caption>
<thead>
<tr>
<th>Plug-in</th>
                    <th>Supports native pagination</th>
                    <th>Supports native sorting</th>
</tr>
</thead>
<tbody>
<tr>
<td>ML2</td>
<td>True</td>
<td>True</td>
</tr>
<tr>
<td>Open vSwitch</td>
<td>True</td>
<td>True</td>
</tr>
<tr>
<td>Linux Bridge</td>
<td>True</td>
<td>True</td>
</tr>
</tbody>
</table>
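        <para>When a plug-in supports them natively, pagination and
            sorting are exposed through query parameters on list
            requests in the Networking API, such as
            <literal>limit</literal>, <literal>marker</literal>,
            <literal>sort_key</literal>, and
            <literal>sort_dir</literal>. As a sketch, assuming a
            Networking endpoint at
            <literal>controller:9696</literal> and a valid
            authentication token in <literal>$TOKEN</literal>:</para>
        <screen><prompt>$</prompt> <userinput>curl -s -H "X-Auth-Token: $TOKEN" \
  "http://controller:9696/v2.0/networks?limit=2&amp;sort_key=name&amp;sort_dir=asc"</userinput></screen>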
</section>
</chapter>