Merge "More fixes for NSX plugin guide"

Jenkins
2015-07-03 07:58:12 +00:00
committed by Gerrit Code Review


@@ -80,11 +80,10 @@ ostype = NOS</programlisting>
</procedure>
</section>
<section xml:id="nsx_plugin">
<title>Configure NSX plug-in</title>
<title>Configure NSX-mh plug-in</title>
<procedure>
<title>To configure OpenStack Networking to use the NSX plug-in</title>
<para>While the instructions in this section refer to the VMware NSX platform, this is
formerly known as Nicira NVP.</para>
<title>To configure OpenStack Networking to use the NSX multi-hypervisor plug-in</title>
<para>The instructions in this section refer to the VMware NSX-mh platform, formerly known as Nicira NVP.</para>
<step>
<para>Install the NSX plug-in:</para>
<screen><prompt>#</prompt> <userinput>apt-get install neutron-plugin-vmware</userinput></screen>
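<para>To verify that the package installed correctly before continuing, you can query its
status with the package manager (a quick sanity check on Ubuntu or Debian systems, using
the package name from the command above):</para>
<screen><prompt>#</prompt> <userinput>dpkg -s neutron-plugin-vmware</userinput></screen>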
@@ -93,22 +92,22 @@ ostype = NOS</programlisting>
<para>Edit the <filename>/etc/neutron/neutron.conf</filename> file and set this
line:</para>
<programlisting language="ini">core_plugin = vmware</programlisting>
<para>Example <filename>neutron.conf</filename> file for NSX:</para>
<para>Example <filename>neutron.conf</filename> file for NSX-mh integration:</para>
<programlisting language="ini">core_plugin = vmware
rabbit_host = 192.168.203.10
allow_overlapping_ips = True</programlisting>
</step>
<step>
<para>To configure the NSX controller cluster for OpenStack Networking, locate the
<para>To configure the NSX-mh controller cluster for OpenStack Networking, locate the
<literal>[default]</literal> section in the
<filename>/etc/neutron/plugins/vmware/nsx.ini</filename> file and add the
following entries:</para>
<itemizedlist>
<listitem>
<para>To establish and configure the connection with the controller cluster
you must set some parameters, including NSX API endpoints, access
credentials, and settings for HTTP redirects and retries in case of
connection failures:</para>
you must set some parameters, including NSX-mh API endpoints, access
credentials, and optionally specify settings for HTTP timeouts, redirects
and retries in case of connection failures:</para>
<programlisting language="ini">nsx_user = <replaceable>ADMIN_USER_NAME</replaceable>
nsx_password = <replaceable>NSX_USER_PASSWORD</replaceable>
http_timeout = <replaceable>HTTP_REQUEST_TIMEOUT</replaceable> # (seconds) default 75 seconds
@@ -116,28 +115,29 @@ retries = <replaceable>HTTP_REQUEST_RETRIES</replaceable> # default 2
redirects = <replaceable>HTTP_REQUEST_MAX_REDIRECTS</replaceable> # default 2
nsx_controllers = <replaceable>API_ENDPOINT_LIST</replaceable> # comma-separated list</programlisting>
<para>To ensure correct operations, the <literal>nsx_user</literal> user
must have administrator credentials on the NSX platform.</para>
must have administrator credentials on the NSX-mh platform.</para>
<para>A controller API endpoint consists of the IP address and port for the
controller; if you omit the port, port 443 is used. If multiple API
endpoints are specified, it is up to the user to ensure that all these
endpoints belong to the same controller cluster. The OpenStack
Networking VMware NSX plug-in does not perform this check, and results
Networking VMware NSX-mh plug-in does not perform this check, and results
might be unpredictable.</para>
<para>When you specify multiple API endpoints, the plug-in load-balances
<para>When you specify multiple API endpoints, the plug-in takes care of load balancing
requests on the various API endpoints.</para>
</listitem>
<listitem>
<para>The UUID of the NSX transport zone that should be used by default when
<para>The UUID of the NSX-mh transport zone that should be used by default when
a tenant creates a network. You can get this value from the
<guilabel>Transport Zones</guilabel> page for the NSX
Manager:</para>
<guilabel>Transport Zones</guilabel> page for the NSX-mh manager:</para>
<para>Alternatively, the transport zone identifier can be retrieved by querying the NSX-mh
API at <literal>/ws.v1/transport-zone</literal>; an example request is shown after this list.</para>
<programlisting language="ini">default_tz_uuid = <replaceable>TRANSPORT_ZONE_UUID</replaceable></programlisting>
</listitem>
<listitem>
<programlisting language="ini">default_l3_gw_service_uuid = <replaceable>GATEWAY_SERVICE_UUID</replaceable></programlisting>
<warning>
<para>Ubuntu packaging currently does not update the Neutron init script
to point to the NSX configuration file. Instead, you must manually
<para>Ubuntu packaging currently does not update the neutron init script
to point to the NSX-mh configuration file. Instead, you must manually
update <filename>/etc/default/neutron-server</filename> to add this
line:</para>
<programlisting language="ini">NEUTRON_PLUGIN_CONFIG = /etc/neutron/plugins/vmware/nsx.ini</programlisting>
@@ -153,6 +153,11 @@ nsx_controllers = <replaceable>API_ENDPOINT_LIST</replaceable> # comma-separated
<para>Restart <systemitem class="service">neutron-server</systemitem> to apply
settings:</para>
<screen><prompt>#</prompt> <userinput>service neutron-server restart</userinput></screen>
<warning>
<para>The neutron NSX-mh plug-in does not perform an initial re-synchronization of Neutron resources.
Therefore, resources that already exist in the database when Neutron is switched to the
NSX-mh plug-in will not be created on the NSX-mh back end upon restart.</para>
</warning>
</step>
</procedure>
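<para>Because of the limitation described in the warning above, it is a good idea to create a
network after the restart and confirm that a corresponding logical switch appears in the
NSX-mh manager. A minimal check with the neutron command-line client (the network name is
arbitrary):</para>
<screen><prompt>#</prompt> <userinput>neutron net-create nsx-connectivity-test</userinput></screen>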
<para>Example <filename>nsx.ini</filename> file:</para>
@@ -167,84 +172,9 @@ nsx_controllers=10.127.0.100,10.127.0.200:8888</programlisting>
the host that runs <systemitem class="service">neutron-server</systemitem>:</para>
<screen><prompt>#</prompt> <userinput>neutron-check-nsx-config <replaceable>PATH_TO_NSX.INI</replaceable></userinput></screen>
<para>This command tests whether <systemitem class="service">neutron-server</systemitem>
can log into all of the NSX Controllers and the SQL server, and whether all UUID
can log into all of the NSX-mh controllers and the SQL server, and whether all UUID
values are correct.</para>
</note>
<section xml:id="LBaaS_and_FWaaS">
<title>Load-Balancer-as-a-Service and Firewall-as-a-Service</title>
<para>The NSX LBaaS and FWaaS services use the standard OpenStack API with the exception
of requiring routed-insertion extension support.</para>
<para>The NSX implementation and the community reference implementation of these
services differ, as follows:</para>
<orderedlist>
<listitem>
<para>The NSX LBaaS and FWaaS plug-ins require the routed-insertion extension,
which adds the <code>router_id</code> attribute to the VIP (Virtual IP
address) and firewall resources and binds these services to a logical
router.</para>
</listitem>
<listitem>
<para>The community reference implementation of LBaaS only supports a one-arm
model, which restricts the VIP to be on the same subnet as the back-end
servers. The NSX LBaaS plug-in only supports a two-arm model between
north-south traffic, which means that you can create the VIP on only the
external (physical) network.</para>
</listitem>
<listitem>
<para>The community reference implementation of FWaaS applies firewall rules to
all logical routers in a tenant, while the NSX FWaaS plug-in applies
firewall rules only to one logical router according to the
<code>router_id</code> of the firewall entity.</para>
</listitem>
</orderedlist>
<procedure>
<title>To configure Load-Balancer-as-a-Service and Firewall-as-a-Service with
NSX</title>
<step>
<para>Edit the <filename>/etc/neutron/neutron.conf</filename> file:</para>
<programlisting language="ini">core_plugin = neutron.plugins.vmware.plugin.NsxServicePlugin
# Note: comment out service_plug-ins. LBaaS &amp; FWaaS is supported by core_plugin NsxServicePlugin
# service_plugins = </programlisting>
</step>
<step>
<para>Edit the <filename>/etc/neutron/plugins/vmware/nsx.ini</filename>
file:</para>
<para>In addition to the original NSX configuration, the
<code>default_l3_gw_service_uuid</code> is required for the NSX Advanced
plug-in and you must add a <code>vcns</code> section:</para>
<programlisting language="ini">[DEFAULT]
nsx_password = <replaceable>ADMIN</replaceable>
nsx_user = <replaceable>ADMIN</replaceable>
nsx_controllers = <replaceable>10.37.1.137:443</replaceable>
default_l3_gw_service_uuid = <replaceable>aae63e9b-2e4e-4efe-81a1-92cf32e308bf</replaceable>
default_tz_uuid = <replaceable>2702f27a-869a-49d1-8781-09331a0f6b9e</replaceable>
[vcns]
# VSM management URL
manager_uri = <replaceable>https://10.24.106.219</replaceable>
# VSM admin user name
user = <replaceable>ADMIN</replaceable>
# VSM admin password
password = <replaceable>DEFAULT</replaceable>
# UUID of a logical switch on NSX which has physical network connectivity (currently using bridge transport type)
external_network = <replaceable>f2c023cf-76e2-4625-869b-d0dabcfcc638</replaceable>
# ID of deployment_container on VSM. Optional, if not specified, a default global deployment container is used
# deployment_container_id =
# task_status_check_interval configures status check interval for vCNS asynchronous API. Default is 2000 msec.
# task_status_check_interval =</programlisting>
</step>
<step>
<para>Restart the <systemitem class="service">neutron-server</systemitem>
service to apply the settings:</para>
<screen><prompt>#</prompt> <userinput>service neutron-server restart</userinput></screen>
</step>
</procedure>
</section>
</section>
<section xml:id="PLUMgridplugin">
<title>Configure PLUMgrid plug-in</title>