Merge "Split section_networking_introduction.xml"

This commit is contained in:
Jenkins 2014-07-16 07:24:15 +00:00 committed by Gerrit Code Review
commit bcba6610c1
4 changed files with 1022 additions and 1010 deletions

@@ -9,6 +9,8 @@
<command>neutron</command> and <command>nova</command> command-line interface (CLI)
commands.</para>
<xi:include href="networking/section_networking_introduction.xml"/>
<xi:include href="networking/section_networking_config-plugins.xml"/>
<xi:include href="networking/section_networking_config-agents.xml"/>
<xi:include href="networking/section_networking_arch.xml"/>
<xi:include href="networking/section_networking-config-identity.xml"/>
<xi:include href="networking/section_networking-scenarios.xml"/>

@@ -0,0 +1,525 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="install_neutron_agent">
<title>Configure neutron agents</title>
<para>Plug-ins typically have requirements for particular
software that must be run on each node that handles data
packets. This includes any node that runs <systemitem
class="service">nova-compute</systemitem> and nodes
that run dedicated OpenStack Networking service agents
such as <systemitem class="service">neutron-dhcp-agent</systemitem>,
<systemitem class="service">neutron-l3-agent</systemitem>,
<systemitem class="service">neutron-metering-agent</systemitem> or
<systemitem class="service">neutron-lbaas-agent</systemitem>.</para>
<para>A data-forwarding node typically has a network interface
with an IP address on the management network and another
interface on the data network.</para>
<para>This section shows you how to install and configure a
subset of the available plug-ins, which might include the
installation of switching software (for example, Open
vSwitch) as well as agents used to communicate with the
<systemitem class="service"
>neutron-server</systemitem> process running elsewhere
in the data center.</para>
<section xml:id="config_neutron_data_fwd_node">
<title>Configure data-forwarding nodes</title>
<section xml:id="install_neutron_agent_ovs">
<title>Node set up: OVS plug-in</title>
<para>
<note>
<para>This section also applies to the ML2
plug-in when you use Open vSwitch as a
mechanism driver.</para>
</note>If you use the Open vSwitch plug-in, you
must install Open vSwitch and the
<systemitem class="service">neutron-plugin-openvswitch-agent</systemitem>
agent on each data-forwarding node:</para>
<warning>
<para>Do not install the
<package>openvswitch-brcompat</package>
package because it prevents the security group
functionality from operating correctly.</para>
</warning>
<procedure>
<title>To set up each node for the OVS
plug-in</title>
<step>
<para>Install the OVS agent package:</para>
<screen><prompt>#</prompt> <userinput>apt-get install neutron-plugin-openvswitch-agent</userinput></screen>
<para>This action also installs the Open
vSwitch software as a dependency.</para>
</step>
<step>
<para>On each node that runs the
<systemitem class="service">neutron-plugin-openvswitch-agent</systemitem>,
complete these steps:</para>
<itemizedlist>
<listitem>
<para>Replicate the
<filename>ovs_neutron_plugin.ini</filename>
file that you created on the
node.</para>
</listitem>
<listitem>
<para>If you use tunneling, update the
<filename>ovs_neutron_plugin.ini</filename>
file on each node: set the
<option>local_ip</option> value to the IP
address that is configured on that node's
data-network interface, as shown in the
example after this list.</para>
</listitem>
</itemizedlist>
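<para>For example, on a node whose data-network
interface carries the address 10.0.1.21 (an
illustrative address), the relevant setting,
shown here under the <literal>[ovs]</literal>
section used by this file, would be:</para>
<programlisting language="ini">[ovs]
# IP address on this node's data network (illustrative)
local_ip = 10.0.1.21</programlisting>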
</step>
<step>
<para>Restart Open vSwitch to properly load
the kernel module:</para>
<screen><prompt>#</prompt> <userinput>service openvswitch-switch restart</userinput></screen>
</step>
<step>
<para>Restart the agent:</para>
<screen><prompt>#</prompt> <userinput>service neutron-plugin-openvswitch-agent restart</userinput></screen>
</step>
<step>
<note><para>This step applies only to releases prior to Juno.</para></note>
<para>All nodes that run
<systemitem class="service">neutron-plugin-openvswitch-agent</systemitem>
must have an OVS <literal>br-int</literal>
bridge.</para>
<para>To create the bridge, run this
command:</para>
<screen><prompt>#</prompt> <userinput>ovs-vsctl add-br br-int</userinput></screen>
</step>
</procedure>
</section>
<section xml:id="install_neutron_agent_nsx">
<title>Node set up: NSX plug-in</title>
<para>If you use the NSX plug-in, you must also
install Open vSwitch on each data-forwarding node.
However, you do not need to install an additional
agent on each node.</para>
<warning>
<para>It is critical that you run an Open vSwitch
version that is compatible with the current
version of the NSX Controller software. Do not
use the Open vSwitch version that is installed
by default on Ubuntu. Instead, use the Open
vSwitch version that is provided on the VMware
support portal for your NSX Controller
version.</para>
</warning>
<procedure>
<title>To set up each node for the NSX
plug-in</title>
<step>
<para>Ensure that each data-forwarding node
has an IP address on the management
network, and an IP address on the "data
network" that is used for tunneling data
traffic. For full details on configuring
your forwarding node, see the
<citetitle>NSX Administrator
Guide</citetitle>.</para>
</step>
<step>
<para>Use the <citetitle>NSX Administrator
Guide</citetitle> to add the node as a
Hypervisor by using the NSX Manager GUI.
Even if your forwarding node has no VMs
and is only used for services agents like
<systemitem class="service">neutron-dhcp-agent</systemitem>
or
<systemitem class="service">neutron-lbaas-agent</systemitem>,
it should still be added to NSX as a
Hypervisor.</para>
</step>
<step>
<para>After following the <citetitle>NSX
Administrator Guide</citetitle>, use
the page for this Hypervisor in the NSX
Manager GUI to confirm that the node is
properly connected to the NSX Controller
Cluster and that the NSX Controller
Cluster can see the
<literal>br-int</literal> integration
bridge.</para>
</step>
</procedure>
</section>
<section xml:id="install_neutron_agent_ryu">
<title>Node set up: Ryu plug-in</title>
<para>If you use the Ryu plug-in, you must install
both Open vSwitch and Ryu, in addition to the Ryu
agent package.</para>
<procedure>
<title>To set up each node for the Ryu
plug-in</title>
<step>
<para>Install Ryu:</para>
<screen><prompt>#</prompt> <userinput>pip install ryu</userinput></screen>
<para>Currently, no Ryu package exists for
Ubuntu.</para>
</step>
<step>
<para>Install the Ryu agent and Open vSwitch
packages:</para>
<screen><prompt>#</prompt> <userinput>apt-get install neutron-plugin-ryu-agent openvswitch-switch python-openvswitch openvswitch-datapath-dkms</userinput></screen>
</step>
<step>
<para>Replicate the
<filename>ovs_ryu_plugin.ini</filename>
and <filename>neutron.conf</filename>
files that you created earlier to all
nodes that run
<systemitem class="service">neutron-plugin-ryu-agent</systemitem>.</para>
</step>
<step>
<para>Restart Open vSwitch to properly load
the kernel module:</para>
<screen><prompt>#</prompt> <userinput>service openvswitch-switch restart</userinput></screen>
</step>
<step>
<para>Restart the agent:</para>
<screen><prompt>#</prompt> <userinput>service neutron-plugin-ryu-agent restart</userinput></screen>
</step>
<step>
<para>All nodes that run
<systemitem class="service">neutron-plugin-ryu-agent</systemitem>
must also have an OVS bridge named
<literal>br-int</literal> on each
node.</para>
<para>To create the bridge, run this
command:</para>
<screen><prompt>#</prompt> <userinput>ovs-vsctl add-br br-int</userinput></screen>
</step>
</procedure>
</section>
</section>
<section xml:id="install_neutron_dhcp">
<title>Configure DHCP agent</title>
<para>The DHCP service agent is compatible with all
existing plug-ins and is required for all deployments
where VMs should automatically receive IP addresses
through DHCP.</para>
<procedure>
<title>To install and configure the DHCP agent</title>
<step>
<para>You must configure the host running the
<systemitem class="service">neutron-dhcp-agent</systemitem>
as a data-forwarding node according to the
requirements for your plug-in. See <xref
linkend="install_neutron_agent"/>.</para>
</step>
<step>
<para>Install the DHCP agent:</para>
<screen><prompt>#</prompt> <userinput>apt-get install neutron-dhcp-agent</userinput></screen>
</step>
<step>
<para>Finally, update any options in the
<filename>/etc/neutron/dhcp_agent.ini</filename>
file that depend on the plug-in in use. See
the sub-sections.</para>
</step>
</procedure>
<important>
<para>If you reboot a node that runs the DHCP agent,
you must run the
<command>neutron-ovs-cleanup</command> command
before the <systemitem class="service"
>neutron-dhcp-agent</systemitem> service
starts.</para>
<para>On Red Hat, SUSE, and Ubuntu based systems, the
<systemitem class="service"
>neutron-ovs-cleanup</systemitem> service runs
the <command>neutron-ovs-cleanup</command> command
automatically. However, on Debian-based systems
(including Ubuntu in releases earlier than
Icehouse), you must manually run this command or
write your own system script that runs on boot
before the <systemitem class="service"
>neutron-dhcp-agent</systemitem> service
starts.</para>
</important>
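<para>On an affected Debian-based system, a
minimal sketch of such a boot script, assuming
your init ordering runs it before the
<systemitem class="service">neutron-dhcp-agent</systemitem>
service starts, might look like this:</para>
<programlisting language="bash">#!/bin/sh -e
# Illustrative boot-time cleanup script: remove stale OVS ports
# left over from the previous boot before the DHCP agent starts.
neutron-ovs-cleanup
exit 0</programlisting>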
<section xml:id="dhcp_agent_ovs">
<title>DHCP agent setup: OVS plug-in</title>
<para>These DHCP agent options are required in the
<filename>/etc/neutron/dhcp_agent.ini</filename>
file for the OVS plug-in:</para>
<programlisting language="bash">[DEFAULT]
enable_isolated_metadata = True
use_namespaces = True
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver</programlisting>
</section>
<section xml:id="dhcp_agent_nsx">
<title>DHCP agent setup: NSX plug-in</title>
<para>These DHCP agent options are required in the
<filename>/etc/neutron/dhcp_agent.ini</filename>
file for the NSX plug-in:</para>
<programlisting language="bash">[DEFAULT]
enable_metadata_network = True
enable_isolated_metadata = True
use_namespaces = True
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver</programlisting>
</section>
<section xml:id="dhcp_agent_ryu">
<title>DHCP agent setup: Ryu plug-in</title>
<para>These DHCP agent options are required in the
<filename>/etc/neutron/dhcp_agent.ini</filename>
file for the Ryu plug-in:</para>
<programlisting language="bash">[DEFAULT]
use_namespaces = True
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver</programlisting>
</section>
</section>
<section xml:id="install_neutron-l3">
<title>Configure L3 agent</title>
<para>The OpenStack Networking service has a widely used
API extension that allows administrators and tenants to
create routers that interconnect L2 networks, and
floating IPs that make ports on private networks
publicly accessible.</para>
<para>Many plug-ins rely on the L3 service agent to
implement the L3 functionality. However, the following
plug-ins already have built-in L3 capabilities:</para>
<itemizedlist>
<listitem>
<para>Big Switch/Floodlight plug-in, which
supports both the open source <link
xlink:href="http://www.projectfloodlight.org/floodlight/"
>Floodlight</link> controller and the
proprietary Big Switch controller.</para>
<note>
<para>Only the proprietary Big Switch
controller implements L3 functionality.
When using Floodlight as your OpenFlow
controller, L3 functionality is not
available.</para>
</note>
</listitem>
<listitem>
<para>IBM SDN-VE plug-in</para>
</listitem>
<listitem>
<para>NSX plug-in</para>
</listitem>
<listitem>
<para>PLUMgrid plug-in</para>
</listitem>
</itemizedlist>
<warning>
<para>Do not configure or use
<systemitem class="service">neutron-l3-agent</systemitem> if you
use one of these plug-ins.</para>
</warning>
<procedure>
<title>To install the L3 agent for all other
plug-ins</title>
<step>
<para>Install the
<systemitem class="service">neutron-l3-agent</systemitem>
binary on the network node:</para>
<screen><prompt>#</prompt> <userinput>apt-get install neutron-l3-agent</userinput></screen>
</step>
<step>
<para>To uplink the node that runs
<systemitem class="service">neutron-l3-agent</systemitem>
to the external network, create a bridge named
<literal>br-ex</literal> and attach the NIC for the external
network to this bridge.</para>
<para>For example, with Open vSwitch and NIC eth1
connected to the external network, run:</para>
<screen><prompt>#</prompt> <userinput>ovs-vsctl add-br br-ex</userinput>
<prompt>#</prompt> <userinput>ovs-vsctl add-port br-ex eth1</userinput></screen>
<para>Do not manually configure an IP address on
the NIC connected to the external network for
the node running
<systemitem class="service">neutron-l3-agent</systemitem>.
Rather, you must have a range of IP addresses
from the external network that can be used by
OpenStack Networking for routers that uplink
to the external network. This range must be
large enough to have an IP address for each
router in the deployment, as well as each
floating IP.</para>
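<para>For example, such a range is typically
supplied as the allocation pool of the external
subnet; the network name and addresses below are
illustrative:</para>
<screen><prompt>$</prompt> <userinput>neutron net-create ext-net --router:external=True</userinput>
<prompt>$</prompt> <userinput>neutron subnet-create ext-net 203.0.113.0/24 --disable-dhcp --allocation-pool start=203.0.113.101,end=203.0.113.200</userinput></screen>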
</step>
<step>
<para>The
<systemitem class="service">neutron-l3-agent</systemitem>
uses the Linux IP stack and iptables to
perform L3 forwarding and NAT. In order to
support multiple routers with potentially
overlapping IP addresses,
<systemitem class="service">neutron-l3-agent</systemitem>
defaults to using Linux network namespaces to
provide isolated forwarding contexts. As a
result, the IP addresses of routers are not
visible simply by running the <command>ip addr
list</command> or
<command>ifconfig</command> command on the
node. Similarly, you cannot directly
<command>ping</command> fixed IPs.</para>
<para>To do either of these things, you must run
the command within a particular network
namespace for the router. The namespace has
the name <literal>qrouter-&lt;UUID of the router&gt;</literal>.
These example commands run in the router
namespace with UUID
47af3868-0fa8-4447-85f6-1304de32153b:</para>
<screen><prompt>#</prompt> <userinput>ip netns exec qrouter-47af3868-0fa8-4447-85f6-1304de32153b ip addr list</userinput></screen>
<screen><prompt>#</prompt> <userinput>ip netns exec qrouter-47af3868-0fa8-4447-85f6-1304de32153b ping &lt;fixed-ip&gt;</userinput></screen>
</step>
</procedure>
<important>
<para>If you reboot a node that runs the L3 agent, you
must run the
<command>neutron-ovs-cleanup</command> command
before the <systemitem class="service"
>neutron-l3-agent</systemitem> service
starts.</para>
<para>On Red Hat, SUSE, and Ubuntu-based systems, the
<systemitem class="service"
>neutron-ovs-cleanup</systemitem> service runs
the <command>neutron-ovs-cleanup</command> command
automatically. However, on Debian-based systems
(including Ubuntu prior to Icehouse), you must
manually run this command or write your own system
script that runs on boot before the <systemitem
class="service">neutron-l3-agent</systemitem>
service starts.</para>
</important>
</section>
<section xml:id="install_neutron-metering-agent">
<title>Configure metering agent</title>
<para>Starting with the Havana release, the Neutron
metering agent resides beside
<systemitem class="service">neutron-l3-agent</systemitem>.</para>
<procedure>
<title>To install the metering agent and configure the
node</title>
<step>
<para>Install the agent by running:</para>
<screen><prompt>#</prompt> <userinput>apt-get install neutron-metering-agent</userinput></screen>
<note>
<title>Package name prior to Icehouse</title>
<para>In releases of neutron prior to
Icehouse, this package was named
<package>neutron-plugin-metering-agent</package>.</para>
</note>
</step>
<step>
<para>If you use one of the following plug-ins, you
must also configure the metering agent with
these lines:</para>
<itemizedlist>
<listitem>
<para>An OVS-based plug-in such as OVS,
NSX, Ryu, NEC,
BigSwitch/Floodlight:</para>
<programlisting language="ini">interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver</programlisting>
</listitem>
<listitem>
<para>A plug-in that uses
LinuxBridge:</para>
<programlisting language="ini">interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver</programlisting>
</listitem>
</itemizedlist>
</step>
<step>
<para>To use the reference implementation, you
must set:</para>
<programlisting language="ini">driver = neutron.services.metering.drivers.iptables.iptables_driver.IptablesMeteringDriver</programlisting>
</step>
<step>
<para>Set this parameter in the
<filename>neutron.conf</filename> file on
the host that runs <systemitem class="service"
>neutron-server</systemitem>:</para>
<programlisting language="ini">service_plugins = metering</programlisting>
</step>
</procedure>
</section>
<section xml:id="install_neutron-lbaas-agent">
<title>Configure Load-Balancer-as-a-Service
(LBaaS)</title>
<para>Configure Load-Balancer-as-a-Service (LBaaS) with
the Open vSwitch or Linux Bridge plug-in. The Open
vSwitch LBaaS driver is required when enabling LBaaS
for OVS-based plug-ins, including BigSwitch,
Floodlight, NEC, NSX, and Ryu.</para>
<procedure>
<title>To configure LBaaS with Open vSwitch or Linux
Bridge plug-in</title>
<step>
<para>Install the agent:</para>
<screen><prompt>#</prompt> <userinput>apt-get install neutron-lbaas-agent</userinput></screen>
</step>
<step>
<para>Enable the
<productname>HAProxy</productname> plug-in
by using the <option>service_provider</option>
option in the
<filename>/etc/neutron/neutron.conf</filename>
file:</para>
<programlisting language="ini">service_provider = LOADBALANCER:Haproxy:neutron.services.loadbalancer.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default</programlisting>
</step>
<step>
<para>Enable the load-balancing plug-in by using the
<option>service_plugins</option> option in the
<filename>/etc/neutron/neutron.conf</filename>
file:</para>
<programlisting language="ini">service_plugins = lbaas</programlisting>
</step>
<step>
<para>Enable the
<productname>HAProxy</productname> load
balancer in the
<filename>/etc/neutron/lbaas_agent.ini</filename>
file:</para>
<programlisting language="ini">device_driver = neutron.services.loadbalancer.drivers.haproxy.namespace_driver.HaproxyNSDriver</programlisting>
</step>
<step>
<para>Select the required driver in the
<filename>/etc/neutron/lbaas_agent.ini</filename>
file:</para>
<para>Enable the Open vSwitch LBaaS driver:</para>
<programlisting language="ini">interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver</programlisting>
<para>Or, enable the Linux Bridge LBaaS
driver:</para>
<programlisting language="ini">interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver</programlisting>
<para>Apply the settings by restarting the
<systemitem class="service">neutron-server</systemitem>
and
<systemitem class="service">neutron-lbaas-agent</systemitem>
services.</para>
<note>
<title>Upgrade from Havana to Icehouse</title>
<para>In the Icehouse release, LBaaS
server-agent communications changed. If
you transition from Havana to Icehouse,
make sure to upgrade both server and agent
sides before you use the load balancing
service.</para>
</note>
</step>
<step>
<para>Enable Load Balancing in the
<guimenu>Project</guimenu> section of the
dashboard:</para>
<para>Change the <option>enable_lb</option> option
to <parameter>True</parameter> in the
<filename>/etc/openstack-dashboard/local_settings</filename>
file:</para>
<programlisting language="python">OPENSTACK_NEUTRON_NETWORK = {'enable_lb': True,</programlisting>
<para>Apply the settings by restarting the
<systemitem>httpd</systemitem> service.
You can now view the Load Balancer management
options in the <guimenu>Project</guimenu> view
in the dashboard.</para>
</step>
</procedure>
</section>
</section>

@@ -0,0 +1,495 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="section_plugin-config">
<title>Plug-in configurations</title>
<para>For configuration options, see <link
xlink:href="http://docs.openstack.org/icehouse/config-reference/content/section_networking-options-reference.html"
>Networking configuration options</link> in
the <citetitle>Configuration Reference</citetitle>.
These sections explain how to configure specific
plug-ins.</para>
<section xml:id="bigswitch_floodlight_plugin">
<title>Configure Big Switch (Floodlight REST Proxy) plug-in</title>
<procedure>
<title>To use the REST Proxy plug-in with
OpenStack Networking</title>
<step>
<para>Edit the
<filename>/etc/neutron/neutron.conf</filename>
file and add this line:</para>
<programlisting language="ini">core_plugin = bigswitch</programlisting>
</step>
<step>
<para>Edit the
<filename>/etc/neutron/plugins/bigswitch/restproxy.ini</filename>
file for the plug-in and specify a
comma-separated list of
<systemitem>controller_ip:port</systemitem>
pairs:</para>
<programlisting language="ini">server = &lt;controller-ip&gt;:&lt;port&gt;</programlisting>
<para>For database configuration, see <link
xlink:href="http://docs.openstack.org/icehouse/install-guide/install/apt/content/neutron-ml2-controller-node.html"
>Install Networking Services</link> in
the <citetitle>Installation
Guide</citetitle> in the <link
xlink:href="http://docs.openstack.org"
>OpenStack Documentation index</link>.
(The link defaults to the Ubuntu
version.)</para>
</step>
<step>
<para>Restart <systemitem class="service"
>neutron-server</systemitem> to apply
the settings:</para>
<screen><prompt>#</prompt> <userinput>service neutron-server restart</userinput></screen>
</step>
</procedure>
</section>
<section xml:id="brocade_plugin">
<title>Configure Brocade plug-in</title>
<procedure>
<title>To use the Brocade plug-in with OpenStack
Networking</title>
<step>
<para>Install the Brocade-modified Python
netconf client (ncclient) library, which
is available at <link
xlink:href="https://github.com/brocade/ncclient"
>https://github.com/brocade/ncclient</link>:</para>
<screen><prompt>$</prompt> <userinput>git clone https://github.com/brocade/ncclient</userinput></screen>
<para>As <systemitem class="username"
>root</systemitem>, run this
command:</para>
<screen><prompt>#</prompt> <userinput>cd ncclient;python setup.py install</userinput></screen>
</step>
<step>
<para>Edit the
<filename>/etc/neutron/neutron.conf</filename>
file and set the following option:</para>
<programlisting language="ini">core_plugin = brocade</programlisting>
</step>
<step>
<para>Edit the
<filename>/etc/neutron/plugins/brocade/brocade.ini</filename>
file for the Brocade plug-in and specify
the admin user name, password, and IP
address of the Brocade switch:</para>
<programlisting language="ini">[SWITCH]
username = <replaceable>admin</replaceable>
password = <replaceable>password</replaceable>
address = <replaceable>switch mgmt ip address</replaceable>
ostype = NOS</programlisting>
<para>For database configuration, see <link
xlink:href="http://docs.openstack.org/icehouse/install-guide/install/apt/content/neutron-ml2-controller-node.html"
>Install Networking Services</link> in
any of the <citetitle>Installation
Guides</citetitle> in the <link
xlink:href="http://docs.openstack.org"
>OpenStack Documentation index</link>.
(The link defaults to the Ubuntu
version.)</para>
</step>
<step>
<para>Restart the <systemitem class="service"
>neutron-server</systemitem> service
to apply the settings:</para>
<screen><prompt>#</prompt> <userinput>service neutron-server restart</userinput></screen>
</step>
</procedure>
</section>
<section xml:id="openvswitch_plugin">
<title>Configure OVS plug-in</title>
<para>If you use the Open vSwitch (OVS) plug-in in a
deployment with multiple hosts, you must use
either tunneling or VLANs to isolate traffic from
multiple networks. Tunneling is easier to deploy
because it does not require that you configure
VLANs on network switches.</para>
<para>This procedure uses tunneling:</para>
<procedure>
<title>To configure OpenStack Networking to use
the OVS plug-in</title>
<step>
<para>Edit the
<filename>/etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini</filename>
file to specify these values:</para>
<programlisting language="ini">enable_tunneling=True
tenant_network_type=gre
tunnel_id_ranges=1:1000
# only required for nodes running agents
local_ip=&lt;data-net-IP-address-of-node&gt;</programlisting>
<para>For database configuration, see <link
xlink:href="http://docs.openstack.org/icehouse/install-guide/install/apt/content/neutron-ml2-controller-node.html"
>Install Networking Services</link> in
the <citetitle>Installation
Guide</citetitle>.</para>
</step>
<step>
<para>If you use the neutron DHCP agent, add
these lines to the
<filename>/etc/neutron/dhcp_agent.ini</filename>
file:</para>
<programlisting language="ini">dnsmasq_config_file=/etc/neutron/dnsmasq/dnsmasq-neutron.conf</programlisting>
<para>Restart the DHCP service to apply the
settings:</para>
<screen><prompt>#</prompt> <userinput>service neutron-dhcp-agent restart</userinput></screen>
</step>
<step>
<para>To lower the MTU size on instances and
prevent packet fragmentation over the GRE
tunnel, create the
<filename>/etc/neutron/dnsmasq/dnsmasq-neutron.conf</filename>
file and add these values:</para>
<programlisting language="ini">dhcp-option-force=26,1400</programlisting>
</step>
<step>
<para>Restart the <systemitem
class="service">neutron-server</systemitem>
service to apply the settings:</para>
<screen><prompt>#</prompt> <userinput>service neutron-server restart</userinput></screen>
</step>
</procedure>
</section>
<section xml:id="nsx_plugin">
<title>Configure NSX plug-in</title>
<procedure>
<title>To configure OpenStack Networking to use
the NSX plug-in</title>
<para>The instructions in this section refer to the
VMware NSX platform, formerly known as Nicira
NVP.</para>
<step>
<para>Install the NSX plug-in:</para>
<screen><prompt>#</prompt> <userinput>apt-get install neutron-plugin-vmware</userinput></screen>
</step>
<step>
<para>Edit the
<filename>/etc/neutron/neutron.conf</filename>
file and set this line:</para>
<programlisting language="ini">core_plugin = vmware</programlisting>
<para>Example
<filename>neutron.conf</filename> file
for NSX:</para>
<programlisting language="ini">core_plugin = vmware
rabbit_host = 192.168.203.10
allow_overlapping_ips = True</programlisting>
</step>
<step>
<para>To configure the NSX controller cluster
for OpenStack Networking, locate the
<literal>[DEFAULT]</literal> section
in the
<filename>/etc/neutron/plugins/vmware/nsx.ini</filename>
file and add the following entries:</para>
<itemizedlist>
<listitem>
<para>To establish and configure the
connection with the controller
cluster you must set some
parameters, including NSX API
endpoints, access credentials, and
settings for HTTP redirects and
retries in case of connection
failures:</para>
<programlisting language="ini">nsx_user = &lt;admin user name>
nsx_password = &lt;password for nsx_user>
req_timeout = &lt;timeout in seconds for NSX_requests> # default 30 seconds
http_timeout = &lt;tiemout in seconds for single HTTP request> # default 10 seconds
retries = &lt;number of HTTP request retries> # default 2
redirects = &lt;maximum allowed redirects for a HTTP request> # default 3
nsx_controllers = &lt;comma separated list of API endpoints></programlisting>
<para>To ensure correct operations,
the <literal>nsx_user</literal>
user must have administrator
credentials on the NSX
platform.</para>
<para>A controller API endpoint
consists of the IP address and port
for the controller; if you omit the
port, port 443 is used. If multiple
API endpoints are specified, it is
up to the user to ensure that all
these endpoints belong to the same
controller cluster. The OpenStack
Networking VMware NSX plug-in does
not perform this check, and results
might be unpredictable.</para>
<para>When you specify multiple API
endpoints, the plug-in
load-balances requests on the
various API endpoints.</para>
</listitem>
<listitem>
<para>The UUID of the NSX Transport
Zone that should be used by default
when a tenant creates a network.
You can get this value from the NSX
Manager's Transport Zones
page:</para>
<programlisting language="ini">default_tz_uuid = &lt;uuid_of_the_transport_zone&gt;</programlisting>
</listitem>
<listitem>
<para>The UUID of the NSX L3 Gateway Service
to use by default when a tenant creates
a router:</para>
<programlisting language="ini">default_l3_gw_service_uuid = &lt;uuid_of_the_gateway_service&gt;</programlisting>
<warning>
<para>Ubuntu packaging currently
does not update the Neutron init
script to point to the NSX
configuration file. Instead, you
must manually update
<filename>/etc/default/neutron-server</filename>
to add this line:</para>
<programlisting language="ini">NEUTRON_PLUGIN_CONFIG = /etc/neutron/plugins/vmware/nsx.ini</programlisting>
</warning>
</listitem>
</itemizedlist>
<para>For database configuration, see <link
xlink:href="http://docs.openstack.org/icehouse/install-guide/install/apt/content/neutron-ml2-controller-node.html"
>Install Networking Services</link> in
the <citetitle>Installation
Guide</citetitle>.</para>
</step>
<step>
<para>Restart <systemitem class="service"
>neutron-server</systemitem> to apply
settings:</para>
<screen><prompt>#</prompt> <userinput>service neutron-server restart</userinput></screen>
</step>
</procedure>
<para>Example <filename>nsx.ini</filename>
file:</para>
<programlisting language="ini">[DEFAULT]
default_tz_uuid = d3afb164-b263-4aaa-a3e4-48e0e09bb33c
default_l3_gw_service_uuid = 5c8622cc-240a-40a1-9693-e6a5fca4e3cf
nsx_user = admin
nsx_password = changeme
nsx_controllers = 10.127.0.100,10.127.0.200:8888</programlisting>
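<para>In this example, the first controller endpoint
uses the default port 443, while the second one
explicitly specifies port 8888.</para>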
<note>
<para>To debug <filename>nsx.ini</filename>
configuration issues, run this command from
the host that runs <systemitem class="service"
>neutron-server</systemitem>:</para>
<screen><prompt>#</prompt> <userinput>neutron-check-nsx-config &lt;path/to/nsx.ini&gt;</userinput></screen>
<para>This command tests whether <systemitem
class="service"
>neutron-server</systemitem> can log into
all of the NSX Controllers and the SQL server,
and whether all UUID values are
correct.</para>
</note>
<section xml:id="LBaaS_and_FWaaS">
<title>Load-Balancer-as-a-Service and
Firewall-as-a-Service</title>
<para>The NSX LBaaS and FWaaS services use the
standard OpenStack API with the exception of
requiring routed-insertion extension
support.</para>
<para>The NSX implementation and the community
reference implementation of these services
differ, as follows:</para>
<orderedlist>
<listitem>
<para>The NSX LBaaS and FWaaS plug-ins
require the routed-insertion
extension, which adds the
<code>router_id</code> attribute to
the VIP (Virtual IP address) and
firewall resources and binds these
services to a logical router.</para>
</listitem>
<listitem>
<para>The community reference
implementation of LBaaS only supports
a one-arm model, which restricts the
VIP to be on the same subnet as the
back-end servers. The NSX LBaaS
plug-in supports only a two-arm model
for north-south traffic, which
means that you can create the VIP
only on the external (physical)
network.</para>
</listitem>
<listitem>
<para>The community reference
implementation of FWaaS applies
firewall rules to all logical routers
in a tenant, while the NSX FWaaS
plug-in applies firewall rules only to
one logical router according to the
<code>router_id</code> of the
firewall entity.</para>
</listitem>
</orderedlist>
<procedure>
<title>To configure Load-Balancer-as-a-Service
and Firewall-as-a-Service with NSX</title>
<step>
<para>Edit the
<filename>/etc/neutron/neutron.conf</filename>
file:</para>
<programlisting language="ini">core_plugin = neutron.plugins.vmware.plugin.NsxServicePlugin
# Note: comment out service_plugins. LBaaS &amp; FWaaS are supported by the core_plugin NsxServicePlugin
# service_plugins = </programlisting>
</step>
<step>
<para>Edit the
<filename>/etc/neutron/plugins/vmware/nsx.ini</filename>
file:</para>
<para>In addition to the original NSX
configuration, the
<code>default_l3_gw_service_uuid</code>
is required for the NSX Advanced
plug-in and you must add a
<code>vcns</code> section:</para>
<programlisting language="ini">[DEFAULT]
nsx_password = <replaceable>admin</replaceable>
nsx_user = <replaceable>admin</replaceable>
nsx_controllers = <replaceable>10.37.1.137:443</replaceable>
default_l3_gw_service_uuid = <replaceable>aae63e9b-2e4e-4efe-81a1-92cf32e308bf</replaceable>
default_tz_uuid = <replaceable>2702f27a-869a-49d1-8781-09331a0f6b9e</replaceable>
[vcns]
# VSM management URL
manager_uri = <replaceable>https://10.24.106.219</replaceable>
# VSM admin user name
user = <replaceable>admin</replaceable>
# VSM admin password
password = <replaceable>default</replaceable>
# UUID of a logical switch on NSX which has physical network connectivity (currently using bridge transport type)
external_network = <replaceable>f2c023cf-76e2-4625-869b-d0dabcfcc638</replaceable>
# ID of deployment_container on VSM. Optional, if not specified, a default global deployment container is used
# deployment_container_id =
# task_status_check_interval configures status check interval for vCNS asynchronous API. Default is 2000 msec.
# task_status_check_interval =</programlisting>
</step>
<step>
<para>Restart the <systemitem
class="service">neutron-server</systemitem>
service to apply the settings:</para>
<screen><prompt>#</prompt> <userinput>service neutron-server restart</userinput></screen>
</step>
</procedure>
</section>
</section>
<section xml:id="PLUMgridplugin">
<title>Configure PLUMgrid plug-in</title>
<procedure>
<title>To use the PLUMgrid plug-in with OpenStack
Networking</title>
<step>
<para>Edit the
<filename>/etc/neutron/neutron.conf</filename>
file and set this line:</para>
<programlisting language="ini">core_plugin = plumgrid</programlisting>
</step>
<step>
<para>Edit the
<systemitem>[PLUMgridDirector]</systemitem>
section in the
<filename>/etc/neutron/plugins/plumgrid/plumgrid.ini</filename>
file and specify the IP address, port,
admin user name, and password of the
PLUMgrid Director:</para>
<programlisting language="ini">[PLUMgridDirector]
director_server = "PLUMgrid-director-ip-address"
director_server_port = "PLUMgrid-director-port"
username = "PLUMgrid-director-admin-username"
password = "PLUMgrid-director-admin-password"</programlisting>
<para>For database configuration, see <link
xlink:href="http://docs.openstack.org/icehouse/install-guide/install/apt/content/neutron-ml2-controller-node.html"
>Install Networking Services</link> in
the <citetitle>Installation
Guide</citetitle>.</para>
</step>
<step>
<para>Restart the <systemitem class="service"
>neutron-server</systemitem> service to apply
the settings:</para>
<screen><prompt>#</prompt> <userinput>service neutron-server restart</userinput></screen>
</step>
</procedure>
</section>
<section xml:id="ryu_plugin">
<title>Configure Ryu plug-in</title>
<procedure>
<title>To use the Ryu plug-in with OpenStack
Networking</title>
<step>
<para>Install the Ryu plug-in:</para>
<screen><prompt>#</prompt> <userinput>apt-get install neutron-plugin-ryu</userinput></screen>
</step>
<step>
<para>Edit the
<filename>/etc/neutron/neutron.conf</filename>
file and set this line:</para>
<programlisting language="ini">core_plugin = ryu</programlisting>
</step>
<step>
<para>Edit the
<filename>/etc/neutron/plugins/ryu/ryu.ini</filename>
file and update these options in the
<literal>[ovs]</literal> section
for the
<systemitem class="service">ryu-neutron-agent</systemitem>:</para>
<itemizedlist>
<listitem>
<para><option>openflow_rest_api</option>.
Defines where Ryu listens for its
REST API. Substitute
<option>ip-address</option>
and
<option>port-no</option>
based on your Ryu setup.</para>
</listitem>
<listitem>
<para><option>ovsdb_interface</option>.
Enables Ryu to access the
<systemitem>ovsdb-server</systemitem>.
Substitute <literal>eth0</literal>
based on your setup. The IP address
is derived from the interface name.
If you want to change this value
irrespective of the interface name,
you can specify
<option>ovsdb_ip</option>.
If you use a non-default port for
<systemitem>ovsdb-server</systemitem>,
you can specify
<option>ovsdb_port</option>.</para>
</listitem>
<listitem>
<para><option>tunnel_interface</option>.
Defines which IP address is used
for tunneling. If you do not use
tunneling, this value is ignored.
The IP address is derived from the
network interface name.</para>
</listitem>
</itemizedlist>
<para>For database configuration, see <link
xlink:href="http://docs.openstack.org/icehouse/install-guide/install/apt/content/neutron-ml2-controller-node.html"
>Install Networking Services</link> in
the <citetitle>Installation
Guide</citetitle>.</para>
<para>Because the IP address is derived from the
network interface name, you can use the same
configuration file on many compute nodes; each
node resolves the name to its own
address:</para>
<programlisting language="ini">openflow_rest_api = &lt;ip-address&gt;:&lt;port-no&gt; ovsdb_interface = &lt;eth0&gt; tunnel_interface = &lt;eth0&gt;</programlisting>
</step>
<step>
<para>Restart the <systemitem class="service"
>neutron-server</systemitem> service to apply
the settings:</para>
<screen><prompt>#</prompt> <userinput>service neutron-server restart</userinput></screen>
</step>
</procedure>
</section>
</section>

File diff suppressed because it is too large